

JOB SNIJDERS: Ladies and gentlemen, it is two o'clock on this first day, I hereby declare our Database Working Group in progress.

Welcome everybody. It is good to see that more and more people are starting to come to this session. So, it either means we're becoming more and more popular or more and more controversial, one of the two.



So back to our agenda, I will first involuntarily appoint a scribe, Nigel, thank you for your services.

And now I would like to hand over the microphone to Nigel for a review of our action items and finalising the agenda.

NIGEL TITLEY: Action point roundup, I do this every time. It's just housekeeping really.

Okay, we have I think one action point still hanging over from RIPE 67, and that's really with the Anti‑Abuse at the moment, that's to check that the Anti‑Abuse Working Group has specified what should be done with mail sent to the abuse contact. They are currently deliberating on this according to Brian. Although Brian appears to be disputing this.

BRIAN NISBET: Yes, and I have to apologise to Rudiger, I meant to find you earlier in the week and apologise, we have not yet ‑‑ we actually have the information. I just haven't had a chance to put it into a reasonable format and you know, we have a good process for that. I will ‑‑ we will absolutely have it done for Copenhagen, I hope we'll have it done before the end of the year, it might be ‑‑

NIGEL TITLEY: I'd like to point out he hasn't sat on this for two years, I think it's half a year. Not too bad.

Okay. 69. These two are a year old. RIPE NCC to produce a self-contained document describing the migration of the changed attribute to the last-modified attribute. I think that's pretty well done. Yes, I think we can discharge this one I think.

69.2, again RIPE NCC. Come up with some straw man proposals for displaying history for objects where available. Remind me whether this one is done.

Is that a yes or...

No. Okay, I thought the idea was that you came up with a straw man proposal which is suggesting the content and then we said whether or not that was what we wanted. That's what a straw man proposal usually is. Okay, so that one is ongoing.

One discharged and one ongoing.

The third one RIPE NCC to examine a report on possible solutions to improving geolocation data.

Ongoing.

JOB SNIJDERS: I have seen zero discussion on this action item in recent months. So, I propose that either a volunteer steps forward and says I will work on an idea or proposal to improve geolocation information or we scratch the action item. Because if nobody is volunteering to do work, then ‑‑

NIGEL TITLEY: I am inclined to agree with you, geolocation has been batted around for years and years in this group and nothing has ever really come of it.

JOB SNIJDERS: Exactly.

NIGEL TITLEY: I suspect that nobody has really owned it or that nobody really wants it.

JOB SNIJDERS: So which is it?

WILFRIED WOEBER: Wilfried Woeber, I don't want to answer that particular question, but I'd like to suggest that unless someone takes the baton and starts to move by the next meeting, we just propose to remove the geolocation theme from the database. Thank you.

NIGEL TITLEY: Yes, I would agree with you. Is that agreed, unless we see somebody doing something before the next meeting or by the next meeting, then we just administer a merciful kill on this? Yes, okay.

JOB SNIJDERS: Is there a volunteer to take this up or not?

AUDIENCE SPEAKER: Ray, no volunteer, but I agree with Wilfried, we should just get it removed. We don't need all kinds of other simple features in this database, just keep it as it is. Keep it workable.

NIGEL TITLEY: Yeah, okay. Well, let's give it a little bit more time and kill it next time if it really is dead, which I suspect it is. We don't need to kill it if it's dead I suppose. Bury it.

RIPE 70. This is just six months old. Action point 70.1: Discuss deprecation of plain text passwords in e‑mail. There was some considerable discussion of this. But, again, I can't remember what happened. Or was there not discussion? Am I imagining all this? Perhaps it was a nightmare. There wasn't any discussion. Okay. So this action point is ongoing. Unless we want to kill this one too.

Just a reminder that the Working Group consists of everybody not just the chairs.

70.2: RIPE NCC come up with a proposal for the status field to fix the requirement that certain objects may need multivalued status. This, I believe, also came up in Address Policy yesterday, so I suspect this is ongoing.

Yes, okay.

And I think that's it. Okay. Thank you everyone. I shall hand back to Job.

JOB SNIJDERS: Thank you Nigel.

Our next item in this session is our friend Tim Bruijnzeels with an operational update on the database and we have assigned ten minutes to this item and I will not allow any more than ten minutes. You have been warned.

TIM BRUIJNZEELS: Thank you. Job, actually, there were three items and they have been merged, but the total will be 30 minutes; I'll try to do them all in ten, if you let me.

So, life is all about priorities, as we all know. Maybe this is obvious, but I just wanted to go through this quickly. Where we're standing, the operational -- the operations of the database are really the most important thing. It just has to work. Then there are regular requests coming from this Working Group, or there may be policies that reach consensus and we need to implement them. We also have other proactive improvements that we like to work on, but this is basically the order in which we normally prioritise things.

Now, further with some details. What have we been working on? This is a graph of the maintainers and SSO accounts. You'll notice that it goes up and to the right, so that's always a nice thing to have in graphs, I guess. And you'll also notice that it starts to go up a bit steeper at some point and, well, that's actually because of this.

As you may remember, there was ‑‑ how should I phrase this ‑‑ somebody mentioned that ‑‑ well the hashes of the passwords had been publicly available until November 2011, and somebody said, well, by now people should have had time enough to change their password so unless something happens I will be publishing them. So we were forced to take action, and we did.

This is the number of successful password resets for maintainers per month, and by mid‑July, when we actually did the reset and locked passwords that had not been changed, it goes up quite dramatically, and after that it goes down again a bit. So, we are slowly returning to normality, I think.

We did send out warnings to people and gave people a link. We updated the functionality to support this, so that now you can enter an existing password and a new password to replace it, so you don't need to figure out which hash matches your password, we do that for you. And we had a pretty high response rate to the e‑mails that we sent out to people. So that was actually a positive thing about this. But still, you can see about 30% of people did not change their passwords, and we had a higher load in dealing with requests after the fact.
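For illustration only, the matching step described above could look roughly like this: a sketch (not the RIPE NCC's actual code) that takes the entered password, finds which stored MD5-PW hash it corresponds to, and swaps in a hash of the new password. The passlib dependency and the object layout are assumptions made for the example.

```python
# Rough sketch only (not the RIPE NCC implementation): find which stored
# MD5-PW hash on a maintainer matches the password the user typed, so the
# user no longer has to identify the right auth: line themselves.
from passlib.hash import md5_crypt  # assumed dependency for MD5-CRYPT ($1$...) hashes

def matching_auth_lines(auth_lines, entered_password):
    """Return the auth: values whose hash matches the entered password."""
    matches = []
    for line in auth_lines:
        if not line.startswith("MD5-PW "):
            continue                        # skip PGPKEY-/SSO auth lines
        stored_hash = line.split(" ", 1)[1]
        if md5_crypt.verify(entered_password, stored_hash):
            matches.append(line)
    return matches

# Example: replace the first matching line with a hash of the new password.
auth = ["MD5-PW $1$abcdefgh$0123456789abcdefghijk.", "SSO user@example.net"]
old = matching_auth_lines(auth, "current-password")
if old:
    auth[auth.index(old[0])] = "MD5-PW " + md5_crypt.hash("new-password")
```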

Then, some of you may have noticed that recently we had an outage of the database. This was due to the fact that a RAID controller of the server, the master database server, had a problem, and this also coincided, as these things do, with another problem that we had in another piece of our infrastructure, not really affecting the database as such, I believe, but in any case, that led to downtime which we obviously don't want to have. We were already planning to upgrade hardware and we have moved this forward now. So in the past few -- not months, weeks, I should say -- we have been working on upgrading the OS and also upgrading the actual database server in our development environment, and we plan to roll this out to production as soon as possible. This week we didn't want to work on it, because during RIPE meetings we always have a freeze on making changes to infrastructure, but this would be the highest priority to pick up again after this week as far as we're concerned.

MariaDB may in future also allow us -- we still need to investigate this, I have to mention that disclaimer -- may also allow us to do master‑master setups so that we're less vulnerable to outages of a single master database.

Then, another big thing that we worked on this summer -- this came out of Andrew's policy -- is inter‑RIR transfers. Because we have these placeholder objects and the RPSL maintainer, we needed to make sure that when things move in and out of the region, these objects are managed properly, to avoid giving the wrong people access to your objects.

And doing this manually carried a risk that we thought was unacceptable. We needed to do some work to support these operations.

Abuse‑C. As you may have noticed, there are still some organisations without Abuse‑C that actually do have resources allocated through the RIPE NCC. There are a few reasons for this. First off, we do want to address this as soon as possible. The new member application process, I believe, was mentioned in the Services Working Group. Essentially, now, when a new member is created, we also create database objects for them, including maintainers and abuse contacts. Previously, an organisation would be created, the organisation object would have the RIPE NCC maintainer on it, but they wouldn't necessarily have a maintainer themselves, so we couldn't create an Abuse‑C role object unless we would manage it. So, long story short, we believe that we are in a position now to fix this, and we believe we are also in a position to have a business rule to enforce this.

You may also have noticed that for ASNs, end user ASNs, the Abuse‑C is not always present. That is because we were working, until this summer, on the 2007‑01 project to find out the real end users of all ASNs. That was finished. But then we worked on the inter‑RIR policy implementation, the inter‑RIR transfer policy implementation. So this work didn't get done yet. But, again, I think we're now in a position to pick this up as soon as possible. I would have one question there, but maybe it's better to discuss it in detail on the list, unless there is time in the questions.

For PI space, we have been going through a process where we said to the sponsoring LIR and to the end user, like, this needs to be set, and if it's not going to be set, we'll put the sponsoring LIR's e‑mail address on the abuse contact. I'm not sure that that's the best way to do it for ASNs, because they are typically -- well, I would say that people are managing this themselves, but also it would be much simpler if we could present this as a given. In my personal opinion, if I'm allowed to have one, it is, because if we say let's make an abuse contact role object for you with your normal organisation e‑mail address on it, that still allows you to use a different value there. If you are okay with this one, use it; if you're not okay, you can change it. In any case, that's a detail I'd like to have worked out before we do the actual implementation. But we can go either way. We don't really feel very strongly about it; you know, if people feel that we should put the sponsoring LIR's e‑mail address on there like we did with PI, we can certainly do so, it will just take a bit longer because we need to give people time to chase their end users.

Okay. Then we have been working on this; we did a release shortly after the last RIPE meeting. And for the reasons that I mentioned, we were working on password resets, we were working on inter‑RIR transfers. Changed Phase 3 has not actually been deployed yet, but they are working on it in the office today. So, this is real soon.

One slight change that we would want to make, compared to the earlier plan: we notice that now, even though changed is optional, almost everybody still includes changed in updates. So we're really afraid that if we reject changed outright after Phase 3, we will get a lot of errors and a lot of tickets, so what we'd rather do is implement a switch that allows us to warn about and discard any changed lines -- your updates will still work -- monitor how many people still use it, and then eventually re-adapt. It's more work on the processing side for us, but it should mean that these lines actually don't show up in objects any more.
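As a rough illustration of that warn-and-discard switch (a sketch of the idea only, not the actual whois update pipeline), the processing side would simply drop any changed: lines from a submitted object and record a warning instead of rejecting the update:

```python
# Sketch only: drop "changed:" attributes from a submitted RPSL object,
# warning instead of rejecting, so legacy update templates keep working.
def discard_changed_lines(rpsl_text):
    kept, warnings = [], []
    for line in rpsl_text.splitlines():
        if line.lower().startswith("changed:"):
            warnings.append('Warning: "changed" is deprecated and has been discarded')
        else:
            kept.append(line)
    return "\n".join(kept), warnings

obj = "person: John Doe\naddress: Example Street 1\nchanged: jd@example.net 20151118\nsource: RIPE"
cleaned, warns = discard_changed_lines(obj)   # cleaned object no longer carries changed:
```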

When can we deploy this? Again, it also depends a bit, because we are pretty close to done with this. Probably by next week or the week after we could deploy something to RC, but there are also other changes and it may be useful to combine them. So let me go on to those.

RPSL maintainer lockdown. There is a lot of discussion about the RPSL maintainer, but this is a very specific proposal: to prevent the RPSL maintainer from actually being used in mnt-by and, for existing objects that have it, either remove it and leave it at that, if there is a remaining maintainer, or lock them altogether. This is to prevent that anybody can just change these objects. I believe I did send a proposal to that effect to the list, and I'm just looking for guidance: do we have consensus to go ahead with this or not? And if so, it should be included with the coming release, so together with changed Phase 3. That would obviously take a bit longer, but we can do it in the same release cycle and the same RC cycle, so that might speed things up for when it's actually, you know, live.
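A minimal sketch of that cleanup rule as I read the proposal (the maintainer name RIPE-NCC-RPSL-MNT and the data layout are assumptions for illustration): drop the RPSL maintainer from mnt-by where another maintainer remains, otherwise mark the object to be locked.

```python
# Sketch of the proposed cleanup (not the actual implementation): remove the
# public RPSL maintainer from mnt-by where another maintainer remains;
# otherwise flag the object to be locked instead.
RPSL_MNT = "RIPE-NCC-RPSL-MNT"   # assumed name of the public RPSL maintainer

def clean_mnt_by(mnt_by):
    """mnt_by is the list of maintainers on an object's mnt-by attributes."""
    others = [m for m in mnt_by if m.upper() != RPSL_MNT]
    if others:
        return others, "rpsl-mnt-removed"
    return mnt_by, "lock-object"             # no other maintainer: lock it

print(clean_mnt_by(["EXAMPLE-MNT", "RIPE-NCC-RPSL-MNT"]))  # (['EXAMPLE-MNT'], 'rpsl-mnt-removed')
print(clean_mnt_by(["RIPE-NCC-RPSL-MNT"]))                 # (['RIPE-NCC-RPSL-MNT'], 'lock-object')
```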

The same applies to the description line, about which there was much ado as well. This also depended on the cleanup for 2010‑01, which is done.

The proposal was to clean up instances where this wasn't forced and to convert the description to an optional multiple attribute on all objects. Again, the question would be: do we really have consensus on this? And if so, should we include it in the next release or in a later release?

Now, another thing that we have been working on is SSO integration, and Alex will talk about this one a bit more in more detail later. Because, in general, we try to improve the user experience, and I think we do see that this has effect.

Currently this is deployed to the RC environment and Alex will show you a lot more; there was already some mention of it yesterday. We are essentially really guiding people into using their SSO account when using web updates and adding their SSO account to maintainers. We believe this is also laying the foundation for personalised authentication or authorisation -- I always mix up when to use which word, but you know what I mean, I hope. There was a message sent to the list by Job about this, asking us to deploy a proposed way to make this work to the release candidate environment. Unfortunately we haven't had time to do so because of all the other work. We are eager to do this, but my question to this group would be: if you feel that this has more urgency than some of the other things that we also have on our plate, I'd really like to know, because otherwise we're inclined to do the other proposals first, because they seem to be pushed more from the Working Group, whereas this is more a proactive change that we think will improve things.

Then, well, to wrap it up, essentially, upcoming work:

Like I mentioned, we really want to finish the hardware migration because we don't want to run operational risks. I have asked for guidance and confirmation on at least three items -- the description line, the RPSL maintainer lockdown and the last implementation phase of changed -- before we deploy them to RC.

There's also an open request that I talk about this, about removing status lines that reset, but that's ‑‑ okay, sorry, that might be a bit confusing here, but ‑‑ let's talk about it during questions.

Internally, there is a process that we're working on in the RIPE NCC to support merger and acquisition processes better, because this is generating a lot of work. When these things happen, our IPRAs need to make updates to database objects, so we need to support them to make sure that everything works smoothly. Like I said, we have many, many usability improvements lined up that we are really eager to do, but it's very difficult at this time to put a reasonable timeline on when we might get there, because it depends on how all those other things progress.

Finally, this is -- I think this will be addressed later with the IRR homing and possibly when Job talks about route authorisation, but we are talking to other RIRs to see if we can find ways to have authorisation for, let's say, the AS from one database and the inetnum from another somehow. We are not yet sure how to do this, but we are thinking and talking about it.

That leaves me at the end of this.

So, I tried to keep this short deliberately. But obviously if you have any questions or comments, I would be happy to take them.

AUDIENCE SPEAKER: Could you please go back a few slides to Abuse‑C. Two or three things. First of all, we still need to deal with the legacy resources, so I hope I will, together with other colleagues and persons in the community, come up with some idea, so that's the first thing.

The other one was, as far as I remember, the original proposal and the implementation details -- they cover the details of how this should be dealt with, these cases you describe. So whether this is going to be the sponsoring LIR's abuse contact or something else.

So, although there would be -- although it has details, I strongly encourage members of this community to make comments about that on the mailing list; without those comments we will have to follow the... and I forgot about the third thing. So thank you.

TIM BRUIJNZEELS: Like I said, we can of course stick to the original plan, I was just wondering if we need to double check before we execute the last part.

JOB SNIJDERS: My apologies that I said to you before that you only had ten minutes; you were right, you had 30 minutes. So that leaves us with a couple of minutes for questions and comments. We are on schedule, and we will stay on schedule. Keep that in mind.

AUDIENCE SPEAKER: Rudiger Volk. Well, okay. Seeing the presentation, I got the idea -- essentially triggered by the bullet saying, well, okay, you are looking into the usability of RPKI and RPSL -- that probably over the next half year it would be a good idea to run a small workshop looking into how we can actually have a systematic common plan for moving in sync and getting more in sync on both, and I guess there are more aspects that can help there than just the usability.

So, that's not a question for you. I have one slightly nasty question for you. You mentioned somewhere the inter -- yes, doing something on the RPSL side for the inter-region cross certification and so on. I would like to ask a question slightly off topic. At the GM a year ago, there was the question: is the RIPE NCC going to contribute to documenting how the transfer, the resource transfer, in the RPKI is actually managed? And the answer from the CEO was yes, it will be. I haven't seen that, unless you tell me Randy did not include proper acknowledgements to you. And furthermore, I would like to report that looking at the intersection of RPKI resources in the root certificates of the various RIRs leaves about 2,300 intersecting address ranges between RIPE and an unnamed other RIR.

TIM BRUIJNZEELS: Okay... so... I'm not sure what I should answer on what questions you want answered exactly here. But ‑‑

JOB SNIJDERS: About the documentation of transfers and such in the context of RPKI. I think one clear question was: where is it?

TIM BRUIJNZEELS: For people who don't know, there is, or there has been, ongoing work on this in the IETF for a long time. I have actually resisted a proposal that was brought up there to do this through a mechanism that is very close to the up/down protocol, where people send messages that are signed, etc., to guide transfers. The reason why I resisted this is because I don't think it would be wise to end up with an IETF standards document describing how transfers work when this, to a large degree, is for us, I believe, governed by policy set in the Address Policy Working Group. So that is one reason why I, and I believe -- well, others can correct me if I'm wrong -- why also maybe other RIRs have been reluctant to define that in the IETF too strictly. Now, this has been a problem because obviously when transfers do happen, you don't want stuff to break. So, Randy Bush indeed has been working with Rob Austein and George Michaelson and Geoff Huston on a document; it's not progressing fast, I will fully acknowledge that. I do not mind contributing to that document if this Working Group, or the community, the wider community, would like me to do so. I really have no problem with it, but I do think it's really important that we don't end up in a situation where an IETF standards document is going to dictate how transfers are done in our region. So, maybe that explains why I had this reluctance initially.

JOB SNIJDERS: As promised, we will stay on schedule. Wilfried, I will allow you 30, 40 seconds and then we have to move onto the next topic.

WILFRIED WOEBER: Okay. Some of the gentlemen here in the discussion have more than once used RPKI, IETF, RFC, and I'd just like to ask the community actually whether we have a common feeling, including the RIPE NCC, that it would be about time to have a look at the existing set of RFCs, which have been made completely outdated by moving around with the object formats and that sort of thing on the one hand, and on the other hand there are some of those RFCs that have never been implemented. When you are looking at authentication, authorisation, whatever methods, inter‑RIR is actually sort of in parallel or maybe even contrary to what the original RPSL architecture was envisaging. So, the question to the community, and to the NCC in particular, is whether we have the feeling we should revisit the RFC set and either consciously declare all of them to be historic, overtaken by events, or whether we want to at least update some of them to get them in line with reality. Thank you.

JOB SNIJDERS: Nigel, if you would be so kind as to note that down as an action item: to review the IETF standards in the context of our current work. And I believe Wilfried is volunteering to help us do so.

WILFRIED WOEBER: If there is an interest in the community, I will probably have a little bit more time next year to maybe look at that.

JOB SNIJDERS: Thank you.

One last remark from William.

AUDIENCE SPEAKER: William Sylvester. We already have a protocol, it's called EPP. It's already used for the transfer of domain names, and there have already been profiles for IP, for inetnums. Is there any work ongoing now to automate the process or, you know, programmatically enable transfers?

TIM BRUIJNZEELS: Between different RIRs. No, we are still exploring, so I'd be interested to learn about anything that might work there, we do have some ideas, but... nothing complete.

AUDIENCE SPEAKER: Again, a lot of this work was done 15 years ago when the whole competitive market existed around domain names, and IPs were retrofitted in the early 2000s to support the same protocol. It's worth at least taking a look at.

JOB SNIJDERS: Thank you, Tim, for your updates.

(Applause)

Next up is the infamous Alexander Band, with an update on documentation and UX stuff.

ALEX BAND: I'm Alex and apparently I am infamous. Thank you or not, I don't know.

I thought the documentation could be better, because right now we get a lot of questions in customer services on how to use the RIPE database, and it's in a lot of cases about really basic stuff. There is the reference manual, we have different support documents. A lot of functionality is actually written down in RIPE Labs articles, which are more like a blogging style, and then it sort of disappears in the archive, which makes it really hard to find. And then we have long, long lists of FAQs, some of which may actually be frequent and others may be very, very infrequent; it is usually just a very, very long list.

Especially with regards to the reference manual, when I read it, it always sort of feels -- to use a car analogy -- like you are reading the operating manual for a car and it talks about internal combustion engines and differentials and all that kind of thing. It doesn't tell you how to use it to achieve certain goals.

So, although I do think there is absolutely a place for the reference manual -- it's really, really important to have -- in addition a lot of people would like to achieve certain things in the RIPE database and just have a document that explains how to do it.

So, I just set out and I started writing, based on everything that comes in at customer services with regards to tickets and all of the feedback that comes out of the training courses, and really focused on making stuff readable and giving it sort of a how-to feel. And I also thought that it needed to be in one place, because a lot of the functionality is now scattered all over the place, so just give it a single home, which is ripe.net/db/support. This is where all the documentation will go.

This is what I wrote up to now. That's a lot of typing, I tell you.

I wrote on protecting all of the data in the RIPE database. I have explained the differences between MD5, SSO, PGP, etc., etc. I have a document on the RIPE database business rules, because that was never documented before. Because there are certain things you cannot do in the RIPE database, such as changing your company name -- you will get an error message -- or creating overlapping objects, so two inetnums that actually have the same status. A lot of things are locked down in the database to make sure that there is data integrity, etc., etc. But it was very difficult for users to understand what these restrictions are, so there is now finally a document that explains all of them.

There is also one on managing route objects, because that is very, very complicated for a lot of users, because of all the different hierarchical authorisation that comes in even when authorising stuff that is within the RIPE region; but if you start looking at creating a route object that refers to resources outside the RIPE region, that is even more complicated. Then all the different query methods, ranging from web updates to using telnet, configuring reverse DNS, all in a readable format. And lastly a document on maintainers, explaining the differences between a maintainer that you would use to protect your personal objects, such as your own person object that you don't want your colleagues to have access to, and things like a shared maintainer that you would use for your entire organisation to protect, for example, all of the assignments that have been created in the RIPE database.

So, a clear explanation of what the maintainer structure is, how it works hierarchically, and what the different styles of maintainers are that you can use.

Of course I sourced this from the internal tickets that we get, questions that we get in training courses, etc., etc. But if you have any other feedback on what could actually be documented and what would be super interesting to have, then please let me know, because I'd love some suggestions on this. We were doing the tutorial on Monday, Natalie and me, where we talked about people wanting to understand the differences, or historic differences, in RIPE database objects. And then I realised that this is actually functionality that exists, but you can't find it anywhere. If you really know what to look for and you Google for it, you will find the Labs article that explains how you can get a diff of two versions of a database object. It is possible, but it is impossible to find out how.
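For anyone who wants to try that before the new document is written, here is a small sketch of talking to the whois port directly; the version query flags shown (--list-versions and --diff-versions) are the ones the RIPE database is believed to support for history, and AS3333 is only an example object.

```python
# Rough sketch: fetch object version history straight from the whois port.
# The flags (--list-versions, --diff-versions n:m) are assumed here and
# AS3333 is only an example object key.
import socket

def ripe_query(query, host="whois.ripe.net", port=43):
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall((query + "\r\n").encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", "replace")

print(ripe_query("--list-versions AS3333"))      # list the stored versions
print(ripe_query("--diff-versions 1:2 AS3333"))  # diff between two versions
```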

So this is another one I'm going to write. And the list goes on and on. And, again, if you have any suggestions on what you would love to have documented, please let me know, because this actually would be fun to do. And it gives you immediate value.

So that's the one thing I have been working on, sort of a side project, but it turned out to be a bit bigger than anticipated.

The other thing that we have been doing is we have been trying to improve the web updates interface. It's been in sort of a static mode for years, and it wasn't really up to date with all of the other services that the RIPE NCC provides. Because what we have been trying to do for the longest time is get especially SSO, as a single sign-on system, integrated throughout all of the RIPE NCC services that we have. You go to ripe.net, you log in with your Access credentials in the top right corner and you can essentially use every RIPE NCC service. You can look into the LIR Portal, check your Atlas measurements, and you should seamlessly go into the RIPE database and change objects that you control. So, integrating SSO more into the website is something that we try to bring more to the forefront, because it has been possible to use single sign-on as an authentication method for more than two years already. But the number of people using it was very, very low, because it wasn't really integrated very well. You had to edit the maintainer object, add an additional auth line, and then you could use it, but it didn't give you a lot of benefits.

So what we tried to do now -- and this is functionality that currently lives in RC and will go to production next week -- this functionality is just about the web application, which is the front end for the core WHOIS database. So everything that you are going to see right now does nothing to core WHOIS. No functional changes are made to core WHOIS, and everybody who interfaces with the RIPE database through some other method than web updates is not going to notice any difference. I want to make that clear, because it may seem like we're making lots of really fundamental changes to the RIPE database when all that we're doing is making changes in the web application that is the front end.

So once you have logged in with your Access account and you have made your changes, we will ask you to authenticate using -- well, most people still have an MD5 password; by far the majority of people use this as an authentication method. But because you are logged in, it will say: okay, we have successfully processed your update, would you like to associate your SSO account with this particular maintainer? And if you say yes, then the auth line is automatically added to the maintainer, so from that point on you can use your SSO account in addition to the MD5 password. Your colleagues can go through this process as well, and once you are all on there and maybe you no longer have a need for MD5, you could even remove it from the maintainer. But it also means that you are never again in a situation where you have to ask the RIPE NCC to reset your MD5 password, because there is at least one additional authentication method on the maintainer that you can use to make changes to it; a password reset for an SSO account can always be done by yourself. In addition, SSO will give you two-factor authentication for additional security.

So, the other thing we want to do is give more focus to the maintainer. As I explained earlier, when I talked about the documentation, people have a lot of problems understanding the differences between maintainers, what they are used for, how they are being used. So, we lifted out the maintainer and put it all the way at the top, and we highlight the different authentication methods that are available for your maintainer, and we also star all the maintainers that have already been associated with your SSO account, making it much clearer for people which maintainer is going to give you which effect, and sort of helping people make a decision on which maintainer should be used on their objects.

Then, also, we just improved a lot of the syntax checks that we do, even before you submit the objects to the RIPE database, so you don't have to wait for a response. It's a little hard to see in the screenshot, but essentially we do a lot of client-side validation on whether what you filled in is correct. Also, this is confusing for people, because we don't have UTF-8 support in the RIPE database, and some fields only allow plain ASCII and other fields only allow Latin-1, so you can only use plain ASCII for your name but you can use Latin-1 for your address. It's all very confusing for users, so we're trying to be a little bit clearer in what you can and cannot do.

The same goes for things like auto-complete. Choosing the right org, choosing the right tech-c, choosing the right admin-c: it's easy to make a typo. If you use auto-complete, it will list the name and organisation, so it's much clearer what you are selecting.

Then the other thing -- it is one of the biggest ticket generators in CS as well, I was surprised -- is if you have, like, a person object and a maintainer, just a person-maintainer pair that both reference each other, trying to delete one or the other is next to impossible. Now, the functionality that we have created in web updates is: if you try to delete a person object, and the only reference that it has is to a single maintainer, and that maintainer only points back to that person, we will essentially delete the pair. So you can select a person, you can select a maintainer, and it will automatically delete both of them.
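The rule can be pictured with a small sketch (a simplification, assuming we already know each object's outgoing and incoming references): the pair is only deleted together when the person and the maintainer reference nothing but each other.

```python
# Sketch of the "delete the pair" rule: a person and a maintainer may be
# removed together only if they exclusively reference each other and
# nothing else in the database still points at either of them.
def can_delete_pair(person, mntner, references_to):
    """references_to[x] = set of object keys that reference object x."""
    person_only_refs_mntner = person["mnt-by"] == [mntner["name"]]
    mntner_only_refs_person = mntner["admin-c"] == [person["nic-hdl"]]
    nothing_else_refs_person = references_to[person["nic-hdl"]] <= {mntner["name"]}
    nothing_else_refs_mntner = references_to[mntner["name"]] <= {person["nic-hdl"]}
    return (person_only_refs_mntner and mntner_only_refs_person
            and nothing_else_refs_person and nothing_else_refs_mntner)

person = {"nic-hdl": "JD1-RIPE", "mnt-by": ["EXAMPLE-MNT"]}
mntner = {"name": "EXAMPLE-MNT", "admin-c": ["JD1-RIPE"]}
refs = {"JD1-RIPE": {"EXAMPLE-MNT"}, "EXAMPLE-MNT": {"JD1-RIPE"}}
print(can_delete_pair(person, mntner, refs))     # True -> delete both together
```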

Also, it will give you better notification of blocking objects. We're looking into making this functionality available in all of the other update methods as well, like the API and syncupdates, etc., because everybody knows that you can only delete an object if it's completely unreferenced, and sometimes you may think it's unreferenced and you try to delete it and you get an error message, but it won't tell you what the blocking object is. Now we do. We just show you: okay, these three objects you still have are preventing you from deleting this object, so if you change your references in those, you are good to go and you can try it again. So we're trying to be smarter in helping people work more efficiently.

Lastly, one of the changes that we made is partial route object creation. If you want to create a route object that refers to an out-of-region resource, then before, in web updates, we would just throw you an error message. It would seem like it didn't work, but in reality what would happen is that it would be left in a pending state, and this pending state would mean that the other resource holder -- the out-of-region resource holder -- would need to submit the exact same object with their authentication. And then those two parts would be combined and it would actually create the complete route object, so there would be no need any more to ask the other person to put mnt-routes on their AS numbers so you can create the out-of-region route object, that sort of thing, or the need to create a dummy object, for example. So now in web updates we are really, really specific about what is expected of you when you create a partial route object. So, if you refer to a resource where one of the resources is within the RIPE region and the other resource is either not managed by you or lives outside of the RIPE region, we tell you exactly what to do. So in this example, it is a reference to an AS number that you don't have the credentials for, and it will tell you: okay, it is partially created, and the other part needs to be authenticated by this exact maintainer, which is controlled by these people. And these people have received a notification e-mail that they need to take action in order for this route object to be created. So it's really clear in the messaging, giving you exactly what you need to do in order to successfully complete it.
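As a toy sketch of that pending mechanism (assuming a simple in-memory store; the real implementation is of course different), the route object only becomes real once both the prefix side and the ASN side have submitted the same object with their own authentication:

```python
# Toy sketch of pending route creation: the object is created only after both
# required parties (prefix holder and ASN holder) have submitted the identical
# object, each passing their own maintainer's authentication.
pending = {}   # normalised object text -> set of parties that have authenticated

def submit_route(obj_text, authenticated_party):
    key = "\n".join(line.strip() for line in obj_text.strip().splitlines())
    done = pending.setdefault(key, set())
    done.add(authenticated_party)            # "prefix-holder" or "asn-holder"
    if {"prefix-holder", "asn-holder"} <= done:
        del pending[key]
        return "created"
    return "pending"                         # still waiting for the other party

route = "route: 192.0.2.0/24\norigin: AS64500\nsource: RIPE"
print(submit_route(route, "prefix-holder"))  # pending
print(submit_route(route, "asn-holder"))     # created
```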

And that's it. Within time.

RUDIGER VOLK: Thank you, and I have a question or suggestion, exactly to your last thing. The partial updates do not fit our established process, which is something we can change, but one of the missing pieces of information is -- I really would like to see identification of the person requesting the damn thing. Someone is requesting to create this. I don't know whether this is an enemy ISP who is trying to do something really bad, or is it some representative of a customer I have at the moment, or who is it? For some of the messages that come back as mails, essentially the same question applies. In the good old times of mail-submitted change requests, we usually did see at least the mail; or, in other words, in the good old times when we had the changed attribute, usually the changed attribute was telling us something.

ALEX BAND: Okay. I have a return question. Because, currently an e‑mail is generated saying okay, there is a route object being created and the state is currently pending. It gives you the object itself. It gives you the maintainer that is missing on it. But ‑‑ like you say, it doesn't say from whom the request is coming. But it requires you to sort of copy and paste the object and send it back in an e‑mail. Is that even convenient? Would you rather receive an e‑mail that has a link that you can click or is that too scary?

RUDIGER VOLK: Well, kind of offering that as an option is fine. I don't have a lot of trouble using the e-mail, but as I mentioned, the process does not match our process, so all of these partial requests go into the dumpster anyway with us, because if we can identify who is behind this, we can explain what our process is, which works slightly differently and which worked before you invented this.

ALEX BAND: I understand. Let's talk more about this later. That's good. Thanks.

AUDIENCE SPEAKER: Alex, thank you for your hard work, it is very enjoyable to see you reach out to the community through IRC while developing the documentation. So keep it up. I agree with your feature request that identification would be interesting, so let's take that off line...

In light of our schedule, I think there is something from the room. You have a remark...

AUDIENCE SPEAKER: A small clarification, yes. I just want to make a quick clarification about the RPSL maintainer here, because when we ask you to tie passwords and your SSO accounts to maintainers, we avoid doing that for the RPSL maintainer, so that is implicit in this workflow. If you would use the RPSL maintainer, it will do it for you; it will also enforce the best practice not to put that RPSL maintainer in mnt-by.

ALEX BAND: And also you will not get a prompt to try to associate your SSO with the RPSL maintainer. You know, details. That's why it's pretty complicated to make this stuff.

AUDIENCE SPEAKER: Matt Parker, RIPE NCC. A comment from a remote participant named Denis -- I have two comments from Denis. The first one: he says chapter 16.12 of the reference manual explains the queries for historical data, including the diff command, and has examples of its use. He also mentions that pending route creation is only for resources in the RIPE DB; it does not work for out-of-region resources unless there is a copy in the RIPE DB.

ALEX BAND: Yeah. Well, great, I couldn't find that.

JOB SNIJDERS: Thank you, Denis, we miss you here, next time come to this meeting. We have to move on.

Our next presenter is going to tell us about some analysis comparing what is stored in our database to what is in BGP. I am very curious to learn what comes out of this.

TOMAS HLAVACEK: I'm going to report on my effort to compare routing policies written in RPSL in the RIPE database to real operations in BGP.

So, what I have done is, I started to collect data in 2012. I am collecting BGP table dumps and RIPE database snapshots, and I compiled and analysed this in July this year. Actually, it's limited to the RIPE database, which means it only considers resources that belong to the RIPE NCC service region, and other resources -- either autonomous systems or IP prefixes from other RIRs -- are marked as unknown or undecidable in my results.

The analyser tool is Open Source. It's written in Python, you can download it from GitHub and play with it with your own data or your own BGP feed.

Because, apparently, the results depend on your BGP feed and on your relative position to the actual route originators.

The data analysis tool produces three basic, or three groups of outputs. The first one is sort of volumetric, it creates interesting numbers but it's relevant for this talk.

The second group is a simple check of BGP origins against route and route6 objects in the RIPE database.

And the third group, which is maybe the most important and most interesting, is actually a check of the AS paths. I mean, a check of each path vector in BGP: traverse the AS path of that path vector, checking each AS from the perspective of whether it has an aut-num object in the RIPE database, and if yes, whether the filters in the import and export lines match for that particular route, from the import and export side, for the proper autonomous systems in the AS path. And it actually produces detailed output, so you can look up a particular prefix at any time -- within my data window of course -- and you can see whether it's valid or not and what the problem was.
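The origin check in the second group boils down to something like the following sketch (a simplification of the tool, with made-up data): for each BGP route, look up the route objects registered for exactly that prefix and compare the registered origins with the origin seen in BGP.

```python
# Simplified sketch of the origin check: classify each BGP route against
# route objects registered for exactly the same prefix (made-up data).
route_objects = {                 # prefix -> origin ASNs registered in the IRR
    "192.0.2.0/24": {64500},
    "198.51.100.0/24": {64501},
}
bgp_routes = [                    # (prefix, origin ASN) seen in the table dump
    ("192.0.2.0/24", 64500),      # matches its route object
    ("198.51.100.0/24", 64502),   # route object exists, different origin
    ("203.0.113.0/24", 64503),    # no route object at all
]

def classify(prefix, origin):
    registered = route_objects.get(prefix)
    if registered is None:
        return "missing route object"
    return "ok" if origin in registered else "origin mismatch"

for prefix, origin in bgp_routes:
    print(prefix, origin, classify(prefix, origin))
```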

So, let's look at the results.

The first one is origin validation for IPv4. This is the last data point I have, and actually 77% of the BGP routes that belong to the RIPE NCC service region are okay with this, or they are in sync with their route objects, but over 20% fail to match with their route object, and the most disappointing number here is the 4.2% of AS origin, or route origin, failures, because it actually means that there is a route object, or there are route objects, one or more, but none of them matches the real originator in BGP. And it's over 5,000 prefixes, so it's quite a lot, at least from my perspective.

We can also look at the timeline, which means that -- actually, that line shows -- this is the violet line, I think -- it shows the prefixes that match the route objects. Those two lines at the bottom show the errors, and this is the undecidable group, which means outside of my scope, outside of the RIPE database.

As you see it's pretty consistent, it's almost flat, but still it's rising.

Regarding IPv6, the numbers are quite similar. Actually, there are fewer autonomous systems that fail due to a different origin, but maybe it's because there is less legacy or something like that.

Regarding the timeline, it's more interesting because it's rising, and the good news is that the line for matching prefixes, or matching path vectors, is rising more rapidly than the errors. Regarding the AS path check, the observation is that my upstream provider operates outside of -- or operates using resources outside of -- the RIPE service region, which means that I'm not able to decide on routes that are coming from other parties within the RIPE NCC service region but through the upstream provider, so these numbers actually consider only my peers. And as you can see, it's over 60% failures in the path verification, and a failure in path verification means that at least one autonomous system in the AS path failed either the import or the export filters, so it's quite a horrible number. But as you can see on the timeline, it's a bit noisy, because it's fluctuating -- or it's really dependent on BGP fluctuations in the ASes near my observation point. But if you want to look past the upstream provider that's out of my scope, you can look at the hops, and by a hop I mean here the transition from one AS to another in the AS path. And you can check the hops in the RIPE database, and then I can look at most of the resources that belong to the RIPE NCC service region, and as you can see, it's almost 50:50 between those failures and validated hops.
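The hop check mentioned here can be pictured with a sketch like this (heavily simplified and with invented policy data; real aut-num import and export filters are far richer): for each transition in the AS path, the one AS should export to its neighbour and the neighbour should import from it according to their aut-num objects.

```python
# Heavily simplified sketch of the per-hop check: for a transition A -> B in
# an AS path, A's aut-num should export towards B and B's aut-num should
# import from A (invented policy data; real RPSL filters are far richer).
policies = {   # ASN -> which neighbours it imports from / exports to
    64500: {"import": {64501}, "export": {64501}},
    64501: {"import": {64500, 64502}, "export": {64500, 64502}},
    64502: {"import": {64501}, "export": {64501}},
}

def check_hop(left, right):
    a, b = policies.get(left), policies.get(right)
    if a is None or b is None:
        return "undecidable"              # no aut-num available, e.g. out of region
    if right not in a["export"]:
        return "export filter failed"
    if left not in b["import"]:
        return "import filter failed"
    return "valid"

as_path = [64500, 64501, 64502]
for left, right in zip(as_path, as_path[1:]):
    print(left, "->", right, check_hop(left, right))
```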

Which means that it's also disappointing, at least for me. And then there are numbers for particular failures, like whether it was an import filter failure or an export filter failure, or whether the filter was completely missing.

Maybe it's more interesting to look at the timelines. As you can see, it's dependent on fluctuations in BGP, and we also changed the upstream provider, which is the reason why there is such a huge change here. But actually, those four lines -- it's really not visible, but there are four lines near the bottom that represent the errors, whether it's import, export or missing errors. And those four lines sum up to the 50% or so -- 50% of resources that failed out of the decidable resources from the RIPE NCC region.

It's also interesting to look at where the failures happen, and actually the interesting point here is that most of the failures happen within the first five hops, or within the first five ASes in the AS path, and I think the reason is that the path past the first five ASes is outside of Europe and then it's undecidable for me. So, if you want to look at those numbers and charts in more detail, I'm going to have a link to the complete results, so you can browse through the charts or through the text outputs, and there is more information about it.

Regarding IPv6, the numbers are better. Maybe the reason is that it's only about my peers, and actually I have been lucky to have peers who care about their RIPE database objects.

And this was the validation timeline for IPv6. It's not that different from IPv4 actually.

So, what is the conclusion? I think the complete results reveal many more interesting facts, so look at them, if you have time of course. And apart from that, I have more questions than answers. The first is whether it makes sense to extend this work to the whole default-free zone and other RIRs. And the second question is: what can we actually do to make the situation better? Whether we need more software or more documentation, or whether it makes sense to really focus on replacing RPSL with something new and modern and better, and easier to understand perhaps?

So, that's it. I would be glad to discuss this during coffee break or during questions. And that's it from me. Thank you very much. This is the link for the complete results.

(Applause)

JOB SNIJDERS: Tomas, thank you for your presentation. I find this fascinating data; when you sent your proposal for a presentation I was quite surprised, like, wow, somebody is doing some science-like work in this area, and I believe number crunching like this will help our Working Group to make better decisions, because we're more informed, so I really appreciate the effort you are putting into this.

So aside from my compliments, are there any questions for Tomas?

RUDIGER VOLK: Strictly to stay with a question: when you do the origin validation for route objects, that means you are exactly matching? If there is an aggregate route object in there, and you see a more specific, would your observation of the aggregate kind of validate the more specific, even if there is not an exactly matching route object?

TOMAS HLAVACEK: In those charts, no, it's actually exact matching, but I have more detailed data.

RUDIGER VOLK: Kind of in the real world, I note there are many people who think having the aggregate registered justifies everything -- all the more specifics. I disagree, and I am happy we agree, but that difference actually may explain some of the stuff you are seeing. Carrying on from the question section: I quite certainly think it is much more useful to include RPKI data in the evaluation rather than figuring out which of the 85 unknown RIRs could contribute what good information as opposed to the garbage that we will find everywhere.

TOMAS HLAVACEK: I remember right. Okay. Thank you.

(Applause)

JOB SNIJDERS: So, next up is Michael Odou from AfriNIC.

MICHAEL ODOU: So, good afternoon, I am a software engineer at AfriNIC, and I'm going to present the AfriNIC routing registry and homing project. First of all, a quick look at the current status. The routing registry has been in production since August 2014. It is integrated into the WHOIS, and in this version the prefix and the origin of the route object have to be in region. It is also mirrored by APNIC, NTT and RIPE.

The next version of the routing registry is in an interesting phase. For this one we decided to split the input. In this version, due to different business rules, the origin might not be in the AfriNIC region. And we also added GRS on these.

So the benefits: apart from being a free service to the community, since audit checks are done in the WHOIS database, we can make sure that only AfriNIC hostmasters will create aut-nums, because aut-nums will be protected by AfriNIC, and only the owner of the prefixes can create the route object. And seeing as the route objects are tied, linked with the aut-num, we have a reduced risk of hijacking.

Considering the logic of the routing registry, first of all we want to improve adoption by providing an instance to build filters from. Then you can contact the providers to tell them to build the filters from the AfriNIC registry. And second, we also want to facilitate the creation of route objects, for example route objects that have an origin that is not in the AfriNIC region. This is the first.

And the second is that we also want to improve the numbers migrated from RIPE. Currently there are approximately 40,000 objects in the RIPE database with an AfriNIC-administered prefix, 34,000 possibly with an AfriNIC ASN. So that's also why we have this project: we want to make sure that the objects belong in the right registry, or, put in other words, to make sure that people in the AfriNIC region use the AfriNIC routing registry.

So you can see here, as I was saying, we have 33,000 objects that have an AfriNIC prefix and an AfriNIC ASN. These objects in the green case are what we call the clear cases. This is what we can focus on. The other objects -- in particular there are almost 6,000 objects that have a RIPE ASN -- won't be considered, in the first phase at least.

When it comes to creating objects in the AfriNIC routing registry, if the prefix and the aut-num are in the region and you have the same maintainer for the prefix and the aut-num, then a simple authentication is performed and usually the creation is approved. In the case where we don't have the same maintainer for the aut-num and the prefix, then of course the two maintainers have to be authenticated separately before the creation is done, and of course we might have a pending creation process, just as Alex described before.

In the case where we have the prefix in the AfriNIC region but the aut-num is not in the AfriNIC region, these objects will not be imported. The registry can handle that kind of object, but since we have not yet decided the process for how to authenticate the maintainer of the ASN, these objects will, for now at least, remain in the RIPE database. So, in this case, the process is still to be finalised. And in case the prefix is not in the AfriNIC region, then we simply don't authorise the creation of the route objects.
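Put as a small decision sketch (an approximation of the rules just described, not AfriNIC's actual code):

```python
# Approximation of the creation rules described above (not AfriNIC's code):
# decide how a route object creation is handled based on where the prefix and
# the aut-num live and whether they share a maintainer.
def route_creation_outcome(prefix_in_region, autnum_in_region, same_maintainer):
    if not prefix_in_region:
        return "rejected: prefix is not AfriNIC-administered"
    if not autnum_in_region:
        return "not handled yet: stays in the RIPE database for now"
    if same_maintainer:
        return "created after a single authentication"
    return "pending: both maintainers must authenticate separately"

print(route_creation_outcome(True, True, True))    # the simple case
print(route_creation_outcome(True, True, False))   # pending, two-party auth
print(route_creation_outcome(True, False, False))  # out-of-region aut-num
```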

Now, continuing with AfriNIC adoption. We had some BoFs, first of all to inform the community that we have a routing registry, and second, of course, to encourage them to use it. You will see here that we have some results, because of the almost 600 route objects that we have in our registry, 139 were created locally. The other ones have been imported from RIPE or RADB. And we also saw that we have approximately 2,500 queries on the AfriNIC registry. Which means that the routing registry is quite used, but maybe not enough.

And to improve this -- and this is also part of the homing project -- we organise boot camps. In these boot camps the objective, of course, is to help our members to import the objects from the other routing registries. And a boot camp is split into three phases. The first phase is where we get the objects from the other routing registries and do sanitisation; by sanitisation I mean mainly changing the source and having a correct maintainer, not the RIPE maintainer, first of all. And the second part is to create a list of these objects with the different organisations.

First of all we announce on the member list that we are going to organise boot camps and everyone that is interested can express interest by sending an e‑mail at AfriNIC.net which basically is a queue.

So when that's done, we check whether the member has objects in RIPE or AfriNIC, APNIC or RADB, which means that the prefix is in AfriNIC and the aut-num is in the AfriNIC region also. Then we also invite the member to have some information ready, like the passwords of the maintainers, and we make sure the objects are sanitised.

The last phase is the boot camp itself, during which we create the route objects with the members; we explain what we are doing and why. We invite them also to use the migration tool, because using this migration tool they might have to do some updates and modifications on the objects before the object is really imported into the routing registry. And we also encourage them to use the tools that we have at their disposal, like MyAfriNIC and web updates, in case they want to further update the route objects.

They will also receive some information about the mirroring mechanisms and constraints -- you have to specify the source, for example. And then we also encourage them, at the end of this process, to clean up the various registries, to remove the objects from the RIPE database, most of the time to avoid inconsistencies. There is a tool that they can use for this, to check whether there are inconsistencies or not.

So, the next step.
Based on the adoption of the migration plan that Tim will present by our community, we will communicate it to our community first of all at AfriNIC 23, which will be at the end of the month, at the end of November, in Congo, to which you are all invited, but also using various media like the website, the mailing list, capacity building sessions, training, etc. The goal will be to seek feedback from the AfriNIC community and see with them whether they are in line with the plan that will be presented.

So that's it. Thanks for your attention. And if you have questions, I will be happy to answer.

(Applause)

JOB SNIJDERS: Before we move to questions, I want to invite Tim to offer his perspective from the RIPE NCC side, because then we might already answer some of the possible questions. So, please hold your horses for a couple of more minutes, and we'll discuss this after Tim is finished.

TIM BRUIJNZEELS: Well, this is not just the RIPE NCC's perspective; we discussed this with AfriNIC, actually. But basically, we were asked for a concrete proposal on how to move forward with this. And as already mentioned by Michael, we wanted to focus on, let's say, the simple cases, with the ASN and prefixes in AfriNIC, because, well, they are the easiest to handle. And we would propose a three-step process. First of all, we want to make sure that we communicate this all over the place, because people would have to start adding the AfriNIC IRR to their tool chains and also people need to know they need to put data into the AfriNIC IRR.

Then, as a second step, we would propose to freeze -- well, to disallow the creation of new objects in the RIPE database that have AfriNIC -- both prefix and aut-num AfriNIC; this is something we can verify using business rules.

So, essentially you would get an error message saying you have to go to AfriNIC.

Then, as a final step, we want to clean up the data from the RIPE database. What happens on the AfriNIC side is also obviously up to the AfriNIC community, but what we're thinking of is that for those people who did not already migrate their objects and possibly sanitise them, the remaining objects would be imported as they are, and, well, inetnum holders -- ASN holders presumably as well -- in AfriNIC would have the ability to delete objects that they don't want. Also, we believe it shouldn't add any more problems for these people, because if they already exist in the RIPE database, the RIPE IRR, then they are already there, so matters wouldn't get worse by moving them over.

In parallel to all of this -- but maybe it would be good to keep the discussion separate, I believe -- we want to explore what we can do about the other cases, and for us specifically it would be the roughly 6,000 objects with an AfriNIC prefix and a RIPE ASN that we need to consider. But maybe it's more interesting to discuss after you have done your next talk. Anyway...

We are really asking what people think of this plan? Do we have the go‑ahead? Implementation‑wise, we believe that it shouldn't be, well for us it shouldn't be too complicated. It's really about do we have a go‑ahead for this and then can we set a timeline? The three months there are what we believe is a reasonable time, but obviously that can be discussed as well. I don't have a questions slide, I just have one slide. But this would be the moment for questions and comments and everything.

JOB SNIJDERS: Thank you Tim. So, dear audience, please realise that if you have any disagreement with this plan, now would be a good time to voice that disagreement. If there is ‑‑ if nobody offers opinions to the contrary, then we will assume that this is a brilliant plan, that it will improve the quality of IRR data available to our global community, and it will be set in motion. So, you know, use this ‑‑ this is the right moment, not only this meeting of course, but also these weeks on the mailing list, please give your feedback, don't wait half a year. And with that, I open up the microphone. Marco.

AUDIENCE SPEAKER: Marco. Just a quick comment on the last point. My random ‑‑ my educated guess is that when there is an AfriNIC prefix registered in the RIPE database along with a RIPE ASN, probably this means that this prefix is actually being announced in Europe, so probably it will be better, at least I expect that in the general case, it will be more useful to leave the object alone. Maybe it's a good idea to warn the maintainer that there is the option of moving the object to the AfriNIC database, but probably it's more useful to have it in the RIPE database, because it will be used by European operators that may only use the RIPE database to validate their neighbours.

TIM BRUIJNZEELS: Well, if I can comment on that. Yes, that's one perspective. But on the other hand, in the RIPE database, you can have authorisation for the ASN, and that's already there, so in that sense, yeah. But, it's hard to have the authorisation for the prefix and prefixes get transferred and all that. So, you could also make an argument for the other case saying it should be where the prefix is, because that stuff moves around a lot more and for the ASN you just need to check with one party.

Like I said, we are thinking and discussing. I don't have the silver bullet, but I'm kind of more inclined to ‑‑ personally I'm more inclined to say that it should be where the prefix is. But... yeah, like I said... it's... not that I have the silver bullet here.

AUDIENCE SPEAKER: Rudiger Volk. Well, okay, first, Michele, thanks a lot for coming a long way, and this looks good, and I don't have my usual objection. However, for Tim, I would ‑‑ looking at your slide, I would have a small operational question. I'm reading there that I should put AfriNIC as the source in my queries or some other string?

TIM BRUIJNZEELS: So, I think you have two options. You may just want to go to the source and query AfriNIC directly. We do mirror, so you could look at the mirrored objects that we have for AfriNIC if you want, but that's an operator choice I think.

RUDIGER VOLK: Kind of ‑‑ the question is slightly poisoned. When I ask for AfriNIC source on the RIPE mirror, I will get the AfriNIC objects. When I ask for, say, RADB in the RIPE database, I don't get RADB.

TIM BRUIJNZEELS: Well then, that's maybe something we can discuss later, because in my mind we do mirror; and if there is a problem with that, I'd like to know what that is.

JOB SNIJDERS: This is ‑‑ the operational concern that Rudiger raises, just the mere fact that there are operational concerns is precisely the reason why there are multiple phases in this project to kind of make it easier to transition and we do encourage people to assess the way they generate route filters and see if this will impact their process or not. Because there could be operational impacts, so keep that in mind. Wilfried.
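
For operators assessing their route filter generation, a minimal sketch of the two query paths Tim mentions: asking AfriNIC directly, or asking the RIPE server for its mirrored copy. The mirrored source name AFRINIC-GRS used below is an assumption; verify the exact source string against your whois server.

```python
# Sketch only: query a prefix both directly at AfriNIC and via the RIPE mirror.
# "AFRINIC-GRS" as the mirrored source name is an assumption; check your server.
import socket

def whois_query(server: str, query: str, timeout: float = 10.0) -> str:
    """Send one raw whois query (TCP port 43) and return the response text."""
    with socket.create_connection((server, 43), timeout=timeout) as sock:
        sock.sendall((query + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

prefix = "203.0.113.0/24"  # hypothetical prefix for illustration
print(whois_query("whois.afrinic.net", prefix))                    # authoritative AfriNIC IRR
print(whois_query("whois.ripe.net", f"-s AFRINIC-GRS {prefix}"))   # RIPE server, mirrored source
```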

WILFRIED WOEBER: I'm not trying to speak in favour of or against the proposal. It's more like a comment, because I do see the beauty of it and I do understand sort of where the motivations are coming from. That's out of the way.

My comment would rather be: if I am looking at the various activities, both at the RIPE NCC and in our community, as well as that one ‑‑ this is sort of one side of the game; the other side of the game is the ISPs trying to come up with the raw data to properly configure their networks. My feeling is that all of those things are good on one hand, but on the other hand we are more and more fragmenting the set and basis of raw data used to configure the global network, and the global network does not really care whether an AS is authoritatively registered in our region or whether an IP block is registered authoritatively in the AfriNIC region. And my fear, actually, is that by making it more and more complicated for the ISPs, in requiring them to query more and more raw subsets of the data, we are maybe being detrimental to the goal that we as the RIR community could come up with a globally unified set of raw data to configure our networks. That's just something which is spinning in my head. I'm not saying this is bad, but instead of moving towards a globally unified, uniquely accessible set of raw data, we are moving to more and more siloed type things. That's just for discussion.

JOB SNIJDERS: I appreciate your point of view and you are right in that we encourage fragmentation, but it's driven by the common desire to validate the data and by fragmenting or creating silos, we have guarantees that within the silo, there is a form of validation or authorisation that is not available for AfriNIC space in the RIPE database today.

RUDIGER VOLK: Actually, I disagree with both of you. The thing is, if I have some customer who has ‑‑ who is actually supporting someone in Africa, it is absolutely not clear where the objects live at the moment; I have to gather them from RADB ‑‑ and not ARIN usually ‑‑ and so on. So actually having AfriNIC establish a clear home for the resources of that continent is kind of the opposite movement to fragmentation. The fragmentation of the RPSL database kind of happened 20 years ago, when people said centralising this is a no‑go, while on the other hand, even at that time, the RIPE DB already did demonstrate that things could be better organised and unified.

And my take is that this fundamental flaw in RPSL is something that is really hard to cure. So my guess, and my idea of the direction, is that, yes, we already have the new homogeneous global solution in place ‑‑ it is not available to all countries and so on, but, well, okay, the RPKI actually is the unified thing, and it's there ‑‑ and the uptake, unfortunately, doesn't happen as quickly as some people think and as some people fear.

JOB SNIJDERS: Thank you. My co‑chair is signalling furiously about his watch. We will continue discussion on the mailing list. Think about the operational impact this might have on your network. Think about down sides or the benefits, and ‑‑ well, thank you for flying over.

(Applause)

I think I'm next up with a short presentation.

A problem statement about route object creation as it stands today: I have spoken with a lot of foreign networks ‑‑ and with foreign I mean networks that have their autonomous system number assigned by a non‑RIPE RIR; ARIN is one of the popular sources of these foreign networks. They expand their business to Europe, they come to Europe, they become an LIR, they get a /22 and they want to originate that /22 from their American‑based autonomous system number. Their European upstreams might ask them to register a route object somewhere. Some of them will recommend: hey, I want you to register this in the RIPE database because it's RIPE IP space. And the network operator ‑‑ the foreign, alien one ‑‑ discovers that it's quite complicated to get anything done in the RIPE database if you don't have a RIPE‑managed ASN.

Because this is basically an overview of the current authorisation structure in the database.

There has been discussion about this on the mailing list. There has been a sort of interim meeting that was piggybacking on NANOG, but so far I cannot conclude that a form of consensus has been reached.

We do not currently seem to converge on one of these methods. Namely, we could follow the APNIC model, where we say the origin AS no longer needs to approve route object creation. Or we could make a variant of that approach where we say that only for out‑of‑region AS numbers we do not require authorisation, but for anything that's within the region ‑‑ both the IP space and the ASN are RIPE managed ‑‑ we leave it as it is today. Or we can create an opt‑in/opt‑out mechanism where a RIPE‑managed AS number could signal somehow that route objects cannot be created without their express prior approval. Or we build a variant of Danis's proposal, in which she said we could do e‑mail based validation, and we, for instance, use that concept for out‑of‑region AS numbers. There are a lot of methods to move this forward. And we're not converging on a single method so far.

In this context, I want you to keep in mind that it is my experience that it's mostly foreign networks that seem to have an issue. So, if we create a mechanism that only applies to ASNs that are not home to RIPE, that might be, you know ‑‑ it might make the change smaller and it would accommodate the precise group of people that are complaining about the difficulties they have today.

So, I would keep in mind that we try and serve foreign networks in this context.
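
A minimal sketch of what the "out‑of‑region exemption" variant could look like as a decision rule, purely to illustrate the options above; the data model and names are hypothetical, not the RIPE database's actual business rules:

```python
# Sketch only (not current RIPE database behaviour): address-space authorisation is
# always required, but origin-AS approval is only required when the ASN is RIPE-managed.
from dataclasses import dataclass

@dataclass
class RouteObjectRequest:
    prefix: str
    origin_asn: int
    address_auth_ok: bool   # submitter passed the address-space mntner check
    origin_auth_ok: bool    # submitter passed the origin aut-num mntner check

def asn_is_ripe_managed(asn: int, ripe_managed_asns: set[int]) -> bool:
    # In practice this would be a registry lookup; here it is a simple membership test.
    return asn in ripe_managed_asns

def creation_allowed(req: RouteObjectRequest, ripe_managed_asns: set[int]) -> bool:
    if not req.address_auth_ok:
        return False                      # address-space authorisation is always required
    if asn_is_ripe_managed(req.origin_asn, ripe_managed_asns):
        return req.origin_auth_ok         # in-region ASNs keep the current dual check
    return True                           # out-of-region ASNs: no origin approval needed

# Example: a foreign (non-RIPE) ASN originating RIPE space would no longer be blocked.
print(creation_allowed(
    RouteObjectRequest("203.0.113.0/24", 64500, address_auth_ok=True, origin_auth_ok=False),
    ripe_managed_asns={3333},
))
```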

Should we want to make changes in this area, I would like, maybe, a volunteer that can help me, you know, try again to come up with a proposal where we have a meeting of the minds and converge on a single direction. So, if there are volunteers to help me in this problem space, please step forward. If not, I'll just track this along on my own. Marco? Excellent. That's a lot of people ganging up there.

RUDIGER VOLK: You may not like the proposals.

JOB SNIJDERS: Let's start ‑‑

AUDIENCE SPEAKER: This is not ‑‑ we are running out of time. So, just keep it short, we are right now in ‑‑ there is a break ‑‑ so ‑‑

AUDIENCE SPEAKER: Nick Hilliard. What we are trying to do is we're trying to stop abuse from people who are messing around with the RIPE database, and which ‑‑ and wherever that's causing problems for other people. The RIPE database actually has an acceptable usage policy, and I don't see any reason why that acceptable usage policy doesn't have a statement about fraudulent use of the database.

And separate to this, I also don't see a particular reason why the RIPE NCC shouldn't implement basic heuristic mechanisms to look at the sort of data that's coming in and to make sensible operational decisions about whether they are credible or not. So, for example, if an ASN from someplace in Europe suddenly starts registering IP address space in Bangalore or São Paulo or Chile or wherever it is, within a very short period of time, questions need to be raised about that. And that sort of thing needs to be flagged so that the maintainers can either be locked out or an acceptable usage policy mechanism can be applied in some other way so that that abuse can be stopped.
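
A rough sketch of the kind of heuristic Nick suggests ‑‑ flagging a maintainer that suddenly creates several out‑of‑region route objects in a short window. The thresholds, event format, and region lookup below are all illustrative assumptions:

```python
# Sketch only: flag maintainers that create `threshold` or more route objects for
# prefixes outside their usual region within `window`. All inputs are hypothetical.
from collections import defaultdict
from datetime import timedelta

def flag_suspicious(events, home_region, window=timedelta(hours=24), threshold=5):
    """events: list of (timestamp, mntner, prefix_region); home_region: mntner -> region."""
    out_of_region = defaultdict(list)
    for ts, mntner, region in sorted(events):
        if region != home_region.get(mntner):
            out_of_region[mntner].append(ts)
    flagged = set()
    for mntner, stamps in out_of_region.items():
        # sliding window over the sorted timestamps of out-of-region creations
        for i in range(len(stamps) - threshold + 1):
            if stamps[i + threshold - 1] - stamps[i] <= window:
                flagged.add(mntner)
                break
    return flagged
```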

So, I don't think we actually necessarily need to get into creating policies for handling this. We already have an operational mechanism for handling it and it's the acceptable usage policy.

JOB SNIJDERS: I would like to follow up with you after this meeting, I think we're talking about two slightly different things here.

NICK HILLIARD: We are talking about two different things but the thing that we're talking about at the moment we're only talking about because there is abuse of the database.

JOB SNIJDERS: No, this is not about abuse, this is about legitimate users trying to insert an object and they cannot do so without creating a copy of their aut‑num in the RIPE database. But we are running out of time. So, we have to follow up later.

RUDIGER VOLK: Can you go back one slide?

I would like to ask, as a constructive proposal: should there be another item that says "enable operational use" and integrate the operational use of RPKI ROAs in the RPSL context, which would allow any holder of address space essentially to issue the authorisation for any AS, regardless of where it is homed? That would not need creating additional functionality in the databases. It would open up the questions: is access to the good RPKI data easy enough, and what tools may help to make that access easier, and would the community actually consider that path of approach as something that they actually would like to take, or is the approach of the community to say, well, okay, RPKI we do not trust for some reason, and so it is not offering any solution.
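
A minimal sketch of how a ROA could stand in for origin‑AS approval along the lines Rudiger suggests: a route object for a (prefix, origin AS) pair would be accepted if a validated ROA covers that prefix, within its maxLength, for that AS. This mirrors RFC 6483‑style origin validation; the data structures are illustrative, not an existing database mechanism:

```python
# Sketch only: decide whether a set of validated ROAs authorises a (prefix, origin AS) pair.
import ipaddress
from dataclasses import dataclass

@dataclass
class Roa:
    prefix: str        # ROA prefix, e.g. "203.0.113.0/24"
    max_length: int    # longest prefix length the ROA authorises
    origin_asn: int

def roa_authorises(roa: Roa, prefix: str, origin_asn: int) -> bool:
    announced = ipaddress.ip_network(prefix)
    covered = ipaddress.ip_network(roa.prefix)
    return (
        roa.origin_asn == origin_asn
        and announced.version == covered.version
        and announced.subnet_of(covered)
        and announced.prefixlen <= roa.max_length
    )

def route_object_allowed(prefix: str, origin_asn: int, roas: list[Roa]) -> bool:
    return any(roa_authorises(r, prefix, origin_asn) for r in roas)

# Example: one validated ROA authorising AS64500 for this /24.
roas = [Roa("203.0.113.0/24", 24, 64500)]
print(route_object_allowed("203.0.113.0/24", 64500, roas))  # True
print(route_object_allowed("203.0.113.0/24", 64501, roas))  # False
```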

JOB SNIJDERS: I think our community is willing to explore any and all options. I just identified from my work that there is some issue at this moment. We had four solutions, now a fifth, so we're even diverging further. But something has to be done, in my opinion, because there is an actual issue and if RPKI would be beneficial in that regard, I'm interested to further think about that. Tim.

TIM BRUIJNZEELS: I definitely want to work with you and anybody else on this. I also want to talk ‑‑

JOB SNIJDERS: You are paid to do so.

TIM BRUIJNZEELS: But I also want to talk to the other RIRs about what is technically feasible; they are also a stakeholder in this. One other remark regarding the RPKI ‑‑ not that it is the silver bullet here ‑‑ but when you look at LACNIC, they do have very good uptake for RPKI actually, and it was, I'm not sure if ‑‑ he actually suggested that if we had route objects authorised by the AS in our database, we could have them pending until a ROA is created on the LACNIC side that confirms it. So, just putting it out there, that can also be an approach to deal with these things. I think we have a lot of options.

JOB SNIJDERS: I have closed the line. We will item ‑‑

AUDIENCE SPEAKER: I want to say I agree it's a big problem, it's been a problem for a long time and I'm happy to help for a long time.

JOB SNIJDERS: On the topic of problems, I think you still have a two minute presentation.

WILLIAM SYLVESTER: We'll keep this short. We have been talking on the mailing list a bit about some of the problems regarding legacy space with regard to updating objects. There is currently a problem where, if I have an INET NUM and there are different maintainers between the INET NUM and the routing objects and/or DNS objects, I have to rely on the maintainers of the routing objects. In this case it's typically a broken process, because as the INET NUM holder I should be able to maintain my own resources. So the proposal is to enable an INET NUM holder to directly modify their route and DNS objects.

And so from that perspective, I know we have talked about this back in Amsterdam a little bit, but it is a real problem that continues to exist. And there are 35,000 legacy networks, INET NUMs, out there, and we'd like to come up with some sort of solution. The challenge is that if you're not legacy space, you already have the ability to do this, and we're really just saying: let's be fair and equitable to the legacy space and enable something like web updates to provide an ability to manage the directly related resources.

Any questions?
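
A minimal sketch of the relaxation William is proposing, purely illustrative: an update to a route or DNS object would be authorised either by that object's own maintainer, as today, or by the maintainer of the covering INET NUM. The data model below is hypothetical and ignores details such as inetnum range notation:

```python
# Sketch only: accept an update if the presented mntner is on the object itself
# (current behaviour) or on a covering inetnum (the proposed addition).
import ipaddress
from dataclasses import dataclass

@dataclass
class RpslObject:
    prefix: str            # prefix the route/domain object relates to
    mnt_by: set

@dataclass
class Inetnum:
    prefix: str
    mnt_by: set

def update_authorised(obj: RpslObject, inetnums: list, presented_mntner: str) -> bool:
    if presented_mntner in obj.mnt_by:
        return True                                    # current rule: the object's own mntner
    target = ipaddress.ip_network(obj.prefix)
    for inum in inetnums:                              # proposed rule: a covering inetnum's mntner
        net = ipaddress.ip_network(inum.prefix)
        if target.version == net.version and target.subnet_of(net) and presented_mntner in inum.mnt_by:
            return True
    return False

# Example: the legacy holder's mntner can now touch a route object it doesn't maintain directly.
route = RpslObject("203.0.113.0/24", mnt_by={"OLD-ISP-MNT"})
legacy = Inetnum("203.0.113.0/24", mnt_by={"LEGACY-HOLDER-MNT"})
print(update_authorised(route, [legacy], "LEGACY-HOLDER-MNT"))  # True
```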

AUDIENCE SPEAKER: Hi, Ingrid from RIPE NCC, Registration Services. I just wanted to comment on an e‑mail that we sent a few weeks ago, in response to your e‑mail on the mailing list, and I think some things there were misunderstood regarding that e‑mail.

The point that we wanted to make is not that our workload would not allow us to help with updating objects, but, rather, that the question of who is the holder of all these objects is not in our mandate as described in the Impact Analysis, and Athina will say a bit more about that, but...

We also think that by changing the process, the workload might actually increase. For the RIPE NCC to become involved once there's a conflict might actually create more workload than when the RIPE NCC tells the holders of the objects: sort it out, agree on what it is. If there is then still an issue with regard to passwords, maintainers, whatever, we can help. But ‑‑

WILLIAM SYLVESTER: So, in his e‑mail he was referencing the reclaim functionality, and I think the reclaim functionality is actually more like a nuclear option in this regard, because we are really only talking specifically about the operational aspects that are necessary for the use of the number block. In the case where there would be future disputes, I think that would be something handled by a dispute resolution policy that probably already exists. While I think there is a benefit to decreasing workload, that wasn't the intent of the proposal. It was mostly that, currently, I don't have the ability to have control over all the maintainers. Let's say, for example, 20 years ago I had a block that was routed by an ISP; that ISP is now out of business, or I haven't had a business relationship with them in 15 years, but they may still hold the maintainer on that object; and I am saying, hey, I want to update that because I have a new service provider, and I'm currently being blocked by the software from doing that. As a legacy holder I should have control over my space, and I think the RIPE NCC doesn't necessarily need to be involved. If I have control over my INET NUM, then that would indicate I have control over my legacy resource.

AUDIENCE SPEAKER: I think the point here is more about at which point it's decided who is the legitimate holder of the objects and the related objects. The moment that matter has been resolved ‑‑ and we have been handling these cases ‑‑ the RIPE NCC can help with sorting out maintainer issues, but the question of holdership is the first thing.

LUISA VILLA: Holdership should already have been established by having the maintainer in the INET NUM.

AUDIENCE SPEAKER: It's legacy. A lot happened there in the past, and I will actually hand over to Athina now to give a bit more info on the effects of the proposed changes.

ATHINA FRAGKOULI: So, when it comes to resources distributed by the RIPE NCC, we are very certain of the hierarchy of holdership. It's based on the contractual relationships; we're aware of no problem there. When it comes to legacy resources, we're not certain of the holders or of contractual relationships that may or may not be there.

WILLIAM SYLVESTER: Why would RIPE NCC need to be involved in the relationships ‑‑

ATHINA FRAGKOULI: We don't know what these relationships are, and therefore we do not know the hierarchy based on these relationships.

Now, I am wondering on which basis exactly we are supposed to assume this general, by‑default hierarchy for these legacy resources without conducting our due diligence. When it comes to legacy resources, we always check this hierarchy on a case‑by‑case basis, so we really need to conduct some due diligence checks there.

So if we assume this hierarchy, we may affect the rights of people that have resources that are not covered by RIPE policy. So, this gets a little bit trickier, we'll have to be ‑‑

WILLIAM SYLVESTER: If I have the maintainer in the route object, how would it be any different?

ATHINA FRAGKOULI: Let me finish my thoughts. So we affect the rights of these people. We may cause irreversible effects on them, and these effects might be disproportionate to what we are trying to solve here.

And later on, legally speaking, we will be held liable for any damages these people will claim that they have suffered from these changes we made.

So, from legal ‑‑

WILLIAM SYLVESTER: I was going to say, if we establish some sort of interpleader‑type relationship where the RIPE NCC just defers to a third party ‑‑ if there are any disputes you have a universal dispute resolution, and the RIPE NCC then just does what the outcome of that dispute would be ‑‑ that takes the liability out of the case for the RIPE NCC.

ATHINA FRAGKOULI: It doesn't; it doesn't. Because we have handled ‑‑

WILLIAM SYLVESTER: That's the format that's been established through this whole industry for 20 years.

ATHINA FRAGKOULI: We have ‑‑ we are going to take an action that will cause irreversible effects on people that are not part of our usual hierarchy, and we do not know what the hierarchy at hand is there.

And I'm also wondering ‑‑ I mean, yes, I can see that we can be held liable on that, and I am also wondering how responsible it is for a responsible registry to take such actions without making proper due diligence checks and without ‑‑ I'm wondering what the basis is ‑‑ on what are we going to base this activity? That's my point.

WILLIAM SYLVESTER: Well, in the case where I'm a legacy holder, I would think that I should have full control over my space. The RIPE NCC, from what I'm hearing you say, is saying that you want to be in the business of authenticating legacy holders. And, you know, from my understanding, the policy said that that wasn't the case. So I'm a little confused as to the position that you're taking, and I don't think that it's reckless to enable legacy holders to have control over their own blocks.

ATHINA FRAGKOULI: Yes, that's not the intention. However, we are requested to make changes in the registry, and yes, we will be held liable for anything ‑‑

WILLIAM SYLVESTER: This is actually intended to take RIPE NCC out of those changes so therefore the party making the changes would assume the liability ‑‑

JOB SNIJDERS: Gentlemen... I am terribly sorry ‑‑

AUDIENCE SPEAKER: They are called women or ladies or...

WILLIAM SYLVESTER: We can take this off line. I'll be happy to ‑‑

JOB SNIJDERS: I will force you to take it off line. I really appreciate your thoughtful insight from NCC's perspective and I very much appreciate that you are helping us in this regard, so please continue your conversation.

We are 20 minutes into our coffee break, so, I'm sorry, but I have to kill the session.

However, for next RIPE meeting, I think I will discuss with the programme organisers maybe whether we should have two slots instead of one, and stop running into the breaks all the time.

AUDIENCE SPEAKER: I'm sorry to interrupt, but I also have a request to the chairs, and it's one that I made to Gert as well: for items like this, instead of having "feedback from the RIPE NCC registry services" or "update from the mailing list" on the agenda, to actually put down the topics that will be discussed, so you know whether or not a topic is relevant to you. It would be very helpful if we could do that. Thank you.

JOB SNIJDERS: Thank you for your advice.

Ladies and gentlemen, I declare an abrupt end to this session. Thank you for coming. Thank you for your feedback. See you in Copenhagen.
(Coffee break)