
PETER KOCH: We are running 22 seconds late already. I believe everybody in the mood to continue the session is already in, and since it's about DNS we are doing that with closed doors. So, welcome back ‑‑ most of you. Well, welcome back everybody, but...

So we are going to have three presentations ‑‑ four presentations, actually; which letter is missing? We have H, I, J and K. Jaap is going to talk about the KSK roll‑over, and we have some administrative matters to settle. Before I give the stage to ‑‑ I sense there is a formal intervention by Mr. ‑‑

BENNO OVEREINDER: After discussions during the break, I want to mention that our resolver indeed is strict, but with the 1.5.5 release from the 6th of October we have changed the default configuration to downgrade off, so it is lenient to algorithm roll‑over from 1.5.5 onwards by default.

PETER KOCH: So that was in response to the algorithm roll‑over presentation.

BENNO OVEREINDER: To the announced presentation.

PETER KOCH: Thank you.

XAVIER GORJON: Thank you. Welcome everyone to this last session, on DNSSEC. This might take somebody by surprise, because I have two names and that is apparently not really common. Introducing myself: I am a former student at the University of Amsterdam, and this is part of the final thesis that I did at NLnet Labs.

The first point that I want to make is the motivation for this project. As you may know, there have been a couple of events in recent history where service providers get blamed for something that is not their fault while they are trying to implement DNSSEC. This is quite a problem, because you get to the point where a service provider is trying to implement DNSSEC and finds that the clients blame them when the problem is with the service at the other end; well, that is going really badly.

So the primary objective is to measure the current state of DNSSEC deployment from many points of view, and most of the focus of this project is on trying to solve these issues without actually changing the current infrastructure.

The tools used for this were Python scripts to gather information and classes that were provided by NLnet Labs ‑‑ it was quite a short project, only lasting one month ‑‑ and the most important part was using the RIPE Atlas probes; I think everyone here is familiar with them.

With the results I tried to gather from basic DNS queries using these probes, it was not a surprise: most probes ‑‑ really all of them, over 95%, I could even say over 99% ‑‑ were able to get plain DNS queries resolved. However, if you start querying the probes for DNSSEC information, even the basic DNSSEC query was already showing only a 63% success rate. Now, around 64% seems quite a reasonable percentage for this kind of thing, but if we then check for non‑existent domains, this went down to about 56% of queries properly resolved. I must say that these two values seem to be related, but I was querying different servers for this, so I am not sure there is a relation between those two values: for the NSEC testing I was querying the root domain, and for the NSEC3 testing I was querying the .nl domain. And things got even worse when querying for wildcard domains: the proofs were successful only 40% of the time, and I didn't even bother including the table here for the different resource records that were returned, because it was quite a mess; the interesting number is that only 40% of them were properly done.

The conclusion from this experiment seems to be that the harder the query, the worse the results we get. My problem is that it is quite difficult, from these experiments with the probes alone, to really guess where the problem is. So, with this idea, we tried to run those probes again, querying the ISP resolvers directly instead of the defaults set on the probes. And here is how I did this:

Basically, the RIPE Atlas probes in users' homes were querying their home routers or similar, and those DNS queries were being forwarded, through potentially many DNS forwarders, until they got to the resolver of the ISP. Our guess was that ‑‑ especially since home router hardware tends to be cheap hardware ‑‑ maybe information was being lost on this part of the transaction, and not on the part of the ISP.

To do this, we used a DNS server that we had at NLnet Labs: we queried that server for a certain domain name, and that server, instead of returning an IP address for that domain, returned the IP address it received the query from ‑‑ the address of the ISP resolver.
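
(To illustrate the trick ‑‑ this is a reconstruction of the idea, not the actual NLnet Labs server ‑‑ a minimal "whoami" responder in C could look like the sketch below. It answers every query with an A record carrying the source address it saw; there is no EDNS or name‑compression handling, and it is IPv4 only:)

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    int main(void)
    {
        unsigned char buf[512];
        struct sockaddr_in local = { 0 }, peer;
        socklen_t peerlen;
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        local.sin_family = AF_INET;
        local.sin_port = htons(53);   /* port 53 needs privileges */
        bind(s, (struct sockaddr *)&local, sizeof(local));

        for (;;) {
            peerlen = sizeof(peer);
            ssize_t n = recvfrom(s, buf, sizeof(buf), 0,
                                 (struct sockaddr *)&peer, &peerlen);
            if (n < 12)
                continue;             /* too short to be a DNS message */

            /* Skip the question: QNAME labels, then QTYPE and QCLASS. */
            ssize_t q = 12;
            while (q < n && buf[q] != 0)
                q += buf[q] + 1;
            q += 5;
            if (q > n || q + 16 > (ssize_t)sizeof(buf))
                continue;

            buf[2] |= 0x84;           /* QR = 1 (response), AA = 1 */
            buf[3] = 0;               /* clear RA and RCODE */
            buf[6] = 0; buf[7] = 1;   /* ANCOUNT = 1 */
            buf[8] = buf[9] = buf[10] = buf[11] = 0;

            /* Answer: compression pointer to the QNAME, type A, class IN,
               TTL 0, RDLENGTH 4, RDATA = the source address of the query. */
            static const unsigned char rr[] =
                { 0xc0, 0x0c, 0, 1, 0, 1, 0, 0, 0, 0, 0, 4 };
            memcpy(buf + q, rr, sizeof(rr));
            memcpy(buf + q + sizeof(rr), &peer.sin_addr, 4);

            sendto(s, buf, q + sizeof(rr) + 4, 0,
                   (struct sockaddr *)&peer, peerlen);
        }
    }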

Most of the probes were able to find their ISP's resolvers this way; a small percentage didn't manage to do so, we believe because the ISP was blocking this. And the question was: well, is this actually affecting the results?

And as you can see from the results, the success rates for the different queries actually increased by about 20 points in all cases, so we can confirm that somewhere between the ISP and the home routers some information was being lost. This is kind of the conclusion: querying the ISP resolver directly was apparently a way to improve the success rate, and again, the end routers' hardware was pretty much the issue here.

As a part of the research, we also tried to query with the checking disabled (CD) bit set, to find out how many resolvers that were resolving DNS queries were also validating the results. This was completely outside the scope of the initial project, but since we had time for it we managed to do this as well, and what I got from this was that only 26% of the servers were actually validating the data they were receiving. So, moving on to what was the initial purpose of the research ‑‑ defining a discovery method for the resolvers ‑‑ the first scenario is to just use the default DNS resolver that is provided, which for the probes, or in the real world for a home PC, is the local one. In that case, in the best‑case scenario as we said before, 65% of the queries could be made properly, but if you need NSEC records or wildcard domains, that drops to 55% or 40% successful queries. The first fallback we can use is to try the ISP resolver, which gives a considerable increase in the success rate. The next fallback from this method would be using public DNS servers, which is not pretty in a way, because of course some people are concerned about the information you give to your DNS resolvers. And the last possibility is to run a full resolver on your own machine, which is kind of bad because then you lose all the caching benefits of the DNS.

As conclusions from this project: the first one, I think, is pretty noticeable ‑‑ it's kind of difficult to get DNSSEC deployed, because the end users don't see any kind of better service from it; they don't get better Internet speed or more features, so it's something people tend not to care about until things go wrong. Another conclusion is that even though we managed to narrow down where the problems could be, it was kind of difficult to actually determine the exact cause of those errors. And with this I will move on to questions. Before questions, I really want to thank NLnet Labs for letting me be here today, and the RIPE community, because at some point we were talking with Willem about needing this experiment, and without Atlas it would be something that would take two or three months to do; the RIPE community managed to do it in only two weeks for us, so I want to thank you for that. Thank you.

PETER KOCH: Any questions?

AUDIENCE SPEAKER: Can you explain the method you used for finding the ISP resolvers? Because it might actually be wrong. So, explaining ‑‑ you had an authoritative server, right?

XAVIER GORJON: Yes.

AUDIENCE SPEAKER: And you issued a query over this normal forwarding chain, yes?

XAVIER GORJON: Yes.

AUDIENCE SPEAKER: And the authoritative server got an IP address, and that is what you used as the IP address of the ISP?

XAVIER GORJON: Yes.

AUDIENCE SPEAKER: Well, the problem is that this only works if the receiving IP address is exactly the same one the ISP uses as the outgoing address. Now, a lot of ISPs have their resolvers anycast behind load balancers, and then you will get the actual unicast address of the resolver and not the real address the ISP gives out. That might also explain why you see some errors, because the address used to go out to the Internet is different from the address that serves the clients.

XAVIER GORJON: Yes, well, the thing is, apparently this was working, and we were getting different results by doing this. At this point we didn't have any contacts with the different ISPs, and nothing was really in place ‑‑

AUDIENCE SPEAKER: They probably just look at the client address and not the actual source interface it comes in on, but the address you get from the ISP might be different from the one you see there.

XAVIER GORJON: OK, that will be something to look at.

PETER KOCH: More questions? We have some time. OK. Then, thanks again
(Applause)

By the way, a reminder we gave in the previous session: you can rate the talks, you can, I guess, still vote for the Programme Committee, and you can find other things on the RIPE meeting website; do that in the break. And let's listen to Willem, who is going to talk about DNSSEC for legacy applications.

WILLEM TOOROP: Yes. So this is a presentation about a project in which we developed an NSS (name service switch) module that employs getdns to replace the system stub resolver and do DNSSEC validation at the application level. There are all sorts of reasons to do DNSSEC validation at the application level; maybe the most important one is the signalling of DNSSEC failure.

So, how did this project come into existence? Well, I have been programming on the getdns library, which is a DNS API ‑‑ a specification for resolving designed by and for application developers; it originated at the IETF. And we did an implementation of the API together with, in collaboration with, a few other companies and labs and a few more people.

So, getdns: the motivation for wanting a resolving library is that applications need a better handle on DNS. Developers wanted to do faster DNS processing ‑‑ asynchronous, multiple queries at once, and then get the results. They also wanted to look up records other than A and AAAA records, not just addresses; for example, secure shell fingerprints or DANE records. In the case of those DANE records, the application also needs to know the DNSSEC status for them, because if it's not secure you cannot use it. But it is also very nice for an application to know if there was a DNSSEC failure, because currently, if DNSSEC goes bogus, this appears as a network failure; and you blame the ISP or the owner of the website, and not the domain that is bogus.

So, that is all good. But getdns also has many features that the application does not need to interface with at all; for example, the new transport features: privacy for DNS, DNS over TLS. getdns can do stub DNSSEC validation, so it will employ the caching properties that the DNS recursive resolvers in a network provide, and the whole DNS ecosystem will benefit from this: the application gets answers faster, and the authoritatives get less load. Though, as we learned from Xavier Gorjon's presentation, for that you do need a DNS recursive resolver that provides you with enough information to be able to do DNSSEC validation at stub level.

So, another feature of getdns since version 0.5.1 is roadblock avoidance. This is a picture that Xavier Gorjon took at the palace of parliament right here next door. From Xavier Gorjon's presentation: of the recursive resolvers available to the Atlas probes, at least 64% gave enough information to let a stub resolver validate positive DNSSEC answers ‑‑ answers for names that do exist ‑‑ but only 56% could prove non‑existing names, and even fewer wildcards. So, roadblock avoidance is a draft developed at the IETF, and it has an extensive list of everything that can go wrong when trying to get DNSSEC information from the available upstreams. It's very detailed. For example, it also suggests that if you know that a parent zone is insecure ‑‑ that there is proven nonexistence for a certain DS record ‑‑ then you don't need to do the DNSSEC validation again for a query which is underneath that zone, because you already know it's insecure, etc. The current version of roadblock avoidance in getdns does not actually implement the complete draft; it is a very minimal, passive implementation of roadblock avoidance: we just try to do the DNSSEC validation as a stub and, if the result is a bogus answer, we retry in full recursion ‑‑ to see whether what is received via the available upstream is the same as what we would get by visiting the authoritatives directly.
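
(A minimal sketch of that passive fallback, written as application code against the public getdns API ‑‑ this illustrates the idea, it is not the library's internal roadblock‑avoidance code; the function name is made up, and error handling and response cleanup are elided:)

    #include <getdns/getdns.h>

    uint32_t lookup_dnssec_status(const char *name)
    {
        getdns_context *ctx;
        getdns_dict *ext, *response, *reply;
        getdns_list *replies;
        uint32_t status = GETDNS_DNSSEC_INDETERMINATE;

        getdns_context_create(&ctx, 1);
        ext = getdns_dict_create();
        getdns_dict_set_int(ext, "dnssec_return_status", GETDNS_EXTENSION_TRUE);

        /* First attempt: ordinary stub resolution via the configured upstream. */
        getdns_context_set_resolution_type(ctx, GETDNS_RESOLUTION_STUB);
        getdns_general_sync(ctx, name, GETDNS_RRTYPE_A, ext, &response);
        getdns_dict_get_list(response, "replies_tree", &replies);
        getdns_list_get_dict(replies, 0, &reply);
        getdns_dict_get_int(reply, "dnssec_status", &status);

        if (status == GETDNS_DNSSEC_BOGUS) {
            /* The upstream mangled the DNSSEC data: iterate ourselves. */
            getdns_context_set_resolution_type(ctx, GETDNS_RESOLUTION_RECURSING);
            getdns_general_sync(ctx, name, GETDNS_RRTYPE_A, ext, &response);
            getdns_dict_get_list(response, "replies_tree", &replies);
            getdns_list_get_dict(replies, 0, &reply);
            getdns_dict_get_int(reply, "dnssec_status", &status);
        }

        getdns_dict_destroy(ext);
        getdns_context_destroy(ctx);
        return status;
    }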

Yes. So, one thing I also need to mention ‑‑ this is what makes this different from dnssec‑trigger ‑‑ is that this is done on a per‑query basis. So if your upstream handles positive DNSSEC answers perfectly, then you are good, right? And if you already know that the answer is going to be insecure, because you know that a parent is insecure, you are also fine: you can just do the query with the available upstream and don't need to fall back to full recursion.

So, two and a half months ago, I noticed this advisory from the US Computer Emergency Readiness Team advising that premises firewalls should block outbound port 53 ‑‑ so the US Government is suggesting another roadblock for this story. At the bottom of the page there is a questionnaire asking whether this document was helpful for you. I would appreciate it if you would say "no" there.

So, many features don't actually need an application interface with the getdns resolver. Linux and UNIX systems provide a DNS library which is accessed via the getaddrinfo and getnameinfo functions; those libraries typically don't offer DNSSEC or any other modern DNS capabilities. So we thought, you know, maybe it's a nice idea to try to replace the available system stub resolver with getdns, and when we have an idea like that we usually hire a student. So a student did this project; the assignment was to explore ways to provide an alternative for the system stub resolver with modern DNS capabilities, such as security and privacy, and to evaluate the different options. The result of the project is this NSS module, offered as open source on GitHub; all the other experiments are also in that repository. It works on Linux systems, or other systems using NSS ‑‑ NSS is the name service switch on those systems, a mechanism to have different ways to resolve names. It does not work with Google Chrome or Chromium, because they have their own resolver built in.

So this is how you turn it on. There is the /etc/nsswitch.conf file, which normally has "dns" listed somewhere; once you have compiled and installed the NSS module, you replace the "dns" part with "getdns". But there was an issue which emerged quite early in the project: many of those new stub resolver features actually have state ‑‑ the stateful transports, the cache for full recursion, the upstream capabilities (can this upstream do positive DNSSEC answers, etc.) ‑‑ and all that state is embedded in the getdns context. So we created a separate process that holds this getdns context, and all processes using the NSS module channel their requests via this context proxy. But you need to run a daemon for that, so you have to run the getdns daemon as well. The repository contains several other possibilities that do not use a daemon ‑‑ there is a configure option for a mode in which the first process that tries to do a query starts the daemon process and resolves on behalf of the other processes, you can create a context per process, etc. ‑‑ but those are all not recommended; using the getdns daemon works best.
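
(The edit itself is a one‑word change in /etc/nsswitch.conf; the exact surrounding entries vary per system, so this is only an illustration:)

    # /etc/nsswitch.conf, before:
    hosts: files dns
    # after compiling and installing the module:
    hosts: files getdns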

So, the context proxy also runs a little web server, which allows you to configure how the NSS module resolves. You can choose to do DNSSEC validation, or to validate DNSSEC and only accept secure answers ‑‑ I don't know why you would do that ‑‑ and also roadblock avoidance, the secure transport options and the logging level. There is a global config file in /etc and a user‑level config file in the home directory.

So the module also has built‑in signalling of DNSSEC failure, and this is a bit of a dirty hack: if the answer is bogus, it rewrites the address in the packets to localhost, and the HTTP server on localhost then sees, in the Host parameter of the HTTP header, which name you tried to connect to, and can say: sorry, you could not securely resolve this and that, because it was a bogus answer. I realise this is not a good way to deal with this in reality; a better approach might be desktop notifications. But it's just a proof of concept, to illustrate that it is possible to properly signal DNSSEC failure at this level. It would also be nice if the signalling offered the possibility to add a negative trust anchor for the domain ‑‑ if it's not your bank, maybe you want to look at it anyway.

So, that is it. The NSS module is a DNSSEC‑capable alternative to the system stub resolver: it provides secure name resolution without you having to do anything for it within the application, it avoids DNSSEC roadblocks, it can be customised at system and user level, and it signals DNSSEC failure. This was an exploration of this modus operandi for DNSSEC resolving, so we don't recommend you use this module in production, especially since it contains all the different ways to do this; but, you know, if you want to try it out, you can do that. That's it. Questions?

AUDIENCE SPEAKER: Hi, from SIDN. As it happens, we are currently running a pilot at SIDN with a hacked Unbound that does exactly what you just mentioned: it returns a fake answer and points you to a page where you can set an NTA. So we need to talk. And I also have a question: are you aware that this week there has been a rather large discussion on the glibc mailing list about DNSSEC resolution?

WILLEM TOOROP: Yes. We are talking with the ‑‑

AUDIENCE SPEAKER: So this work might help there. Thank you.

PETER KOCH: For the benefit of the rest of the room, could you give a two‑sentence summary of that discussion?

AUDIENCE SPEAKER: It's difficult.

PETER KOCH: That was only one sentence.

AUDIENCE SPEAKER: I discovered it because I read this news site about Linux and open source, and it had a summary right there, and that was already two pages. But I think most of the discussion was about how to get the configuration right, because you cannot trust resolv.conf for some reason; one of the points was that you might use NSS, or maybe nscd, but I don't know if that has been researched as well. "Difficult" was the conclusion so far.

AUDIENCE SPEAKER: DNS.BT. What happens when it fails for things other than HTTP?

WILLEM TOOROP: OK, so the NSS module works as a shared library that is loaded by the process that does the request, so you actually know which process is asking. So it checks the process name ‑‑ it's a dirty hack, right ‑‑ and if it says Firefox then it rewrites the address, and otherwise it does not.

AUDIENCE SPEAKER: And with HTTPS?

WILLEM TOOROP: No, no ‑‑

AUDIENCE SPEAKER: Hi, Tim Armstrong. Not a question, just a comment. Really happy to see that there is going to be more and more work on making end users ‑‑ typically not the people in this room ‑‑ aware of DNSSEC and its importance for, well, the future of Internet security. Thanks.

WILLEM TOOROP: Thank you.

AUDIENCE SPEAKER: Warren Kumari. Myself and somebody ‑‑ I think Evan Hunt ‑‑ have a draft which is going to allow recursive servers to signal additional error information back to stubs. I don't remember what the status of it is, because I can't remember who I wrote it with, but it will at least allow recursives to tell stubs that this failed because of a DNSSEC issue or something. And my actual question was: how dirty did you feel with some of the stuff you have been doing?

WILLEM TOOROP: I love it. I was actually thinking about a Jabber proxy that would interfere when you are trying to connect to a bogus site over Jabber: I am sorry, your Jabber operator is bogus, what do you want me to do?

PETER KOCH: Speaking about dirty feelings, I have one more question. But don't be afraid.

So you mentioned this nice guidance from US‑CERT. How many of you were aware of that before? How many of you have read it in the meantime? OK. Cool.

AUDIENCE SPEAKER: Can we go back to that?

WILLEM TOOROP: There I was.

PETER KOCH: Anybody feel entitled to send comments there? If you read through it, it's probably not that bad ‑‑ it's about enterprise set‑ups and so on ‑‑ but members of the group may want to have a look at it so that the guidance doesn't spread too far without qualification.

JAAP AKKERHUIS: I pointed Willem to this, and if you follow it, it refers to a document, and then that refers to another document which says you really do not need your own DNS, and that goes on to yet another document, and in the end somewhere it mentions that it would actually be good if the central DNS service would run DNSSEC. But by then you are four layers deep, so it really just scares the hell out of anybody who doesn't know about DNS ‑‑ or, for the auditors, it is a box to tick ‑‑

PETER KOCH: If your enterprise ‑‑ if your primary goal is to protect against data exfiltration, that advice might be more appropriate; I am just trying to avoid it popping up everywhere. No distraction from your presentation. Any more questions or comments? Then I guess we are right back on time, and thank you, Willem.
(Applause)

And I think Jan is coming to the stage to give us some insight into geolocation, split horizons and, of course, the DNS.

JAN VCELÁK: Thank you for the introduction. I work for CZ.NIC as a Knot DNS developer. Quite recently we had some requests from users who are interested in geolocation, and this is the reason we started looking into this topic. Because we are new to geolocation, we started by mapping the existing solutions and trying to find out what the existing approaches are, so that we could pick the best of everything and build our own solution.

So, there are definitely some challenges in implementing geographic split‑horizon DNS. One of them is the mapping of resources to locations: if you are, I don't know, some content provider, you have multiple data centres across the world, so the question is how to map the physical location of the server to the physical location of the client. It can be static, or it can reflect the real distance between the physical places; there are multiple approaches.

Split‑horizon is possible at multiple levels: you can do it at zone level or at resource record level, it depends. We are also interested in multiple‑server deployments: some solutions store the geographic configuration within the zone, some implementations keep it separate. At the moment we have a nice mechanism for transferring zone content, which is zone transfers; if you place some part of the configuration separately, you have to find a different channel to move or replicate that data to the other servers.

We are also interested in compatibility with global resolvers, because the physical location of the client doesn't have to be the same as the location of the resolver which is doing the resolution on behalf of the client. And another challenge is, of course, DNSSEC ‑‑ DNSSEC signing. So, I would like to show you what the existing implementations do, and before I do that, I will quickly explain one EDNS extension: EDNS Client Subnet. It's still a draft, and it is basically what makes the global resolver infrastructure work with geographic split‑horizon. The extension is not difficult at all. Its purpose is that the resolver can send to the authoritative server the address of the client, or the network the client belongs to; it supports IPv4 and IPv6. The client just sets the address field to a prefix of the address and sets the length of that prefix. The scope prefix length is initially set to 0 and is intended for the authoritative server: when it receives such a DNS query, it can use the scope to indicate for which prefix the response is valid. The easiest way is to copy the source prefix length to the scope, which means the response should be cached for all clients querying from the same network; but the scope can be shorter, which means the response is valid even for clients from a larger network.
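
(Schematically, the option data carried in the EDNS OPT record ‑‑ option code 8 in the draft ‑‑ looks like the sketch below; the struct is descriptive, not a header from any particular implementation:)

    #include <stdint.h>

    /* EDNS Client Subnet option data, after the option code/length fields. */
    struct ecs_option_data {
        uint16_t family;          /* address family: 1 = IPv4, 2 = IPv6       */
        uint8_t  source_prefix;   /* bits of the client address that follow   */
        uint8_t  scope_prefix;    /* 0 in queries; set by the authoritative
                                     in replies: how widely the answer may
                                     be cached                                */
        uint8_t  address[];       /* source_prefix bits, zero‑padded to octets */
    };

(So a resolver might send 192.0.2.0 with a source prefix length of 24 and a scope of 0; if the authoritative replies with a scope of 16, the resolver may cache that answer for every client in 192.0.0.0/16.)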

And this is supported by a lot of non‑anycasted DNS providers, and it's also supported by a lot of global resolvers, like the Google resolvers or OpenDNS; there are definitely many more.

What are the existing implementations? These are the open source ones; I am pretty sure that there are some commercial ones which do the same. They are geoipdns, GeoDNS, PowerDNS, gdnsd and Knot DNS, where we have experimental support ‑‑ it's not part of the mainline, but I will get to it later.

So, how it works. geoipdns builds on djbdns, which supported a kind of geographic location from the beginning; geoipdns just improves the algorithms. In a zone file of djbdns you can specify address prefixes and map them to names, which can be locations, and then set for which source locations the responses are valid. So you can do simple geolocation even with plain djbdns, and this implementation just extends the algorithms, because the geoIP databases are usually very large.

I haven't mentioned that this obviously supports neither DNSSEC nor EDNS Client Subnet.

There is also a patch for BIND, GeoDNS. Unfortunately, it supports only IPv4, and it adds support for views ‑‑ doing geographic split‑horizon at the level of zones. You can basically set the client countries for which a view is valid, and there are matching keywords so that you have some fallback. GeoDNS doesn't support EDNS Client Subnet, so it really depends on the client's source IP address, and there is only limited support for DNSSEC, because you obviously can sign these zones ‑‑ they are separate files ‑‑ but you have to be extremely careful to have the same key set in all those files and so on. It's definitely possible, though.

PowerDNS recently added a geoip backend, which works pretty well. There is no support for EDNS Client Subnet, but there is for DNSSEC. They don't use the usual zone format; they have their own specification of the zone content. In this format there is a separation between records and something called services: the records are the usual DNS records as you see them in a regular zone, and the services can be used to do the mapping to a geographic location. So, for example, if you have one location in the Czech Republic and one in the European Union, you define these locations as usual records and then define a service pattern which, based on the client's location, replaces parts of the domain name; PowerDNS synthesizes a DNS record which would point, in this case, from www.example.com to the cz or eu one, and so on. Which is fine. And for DNSSEC it uses front signing, which is basically signing on the fly; these records are signed separately from the records in the zone.

Yeah, and probably the most advanced implementation is gdnsd. It's a project developed by a foundation that uses these servers themselves. It supports geoIP plug‑ins and also quite advanced monitoring of data centres: for example, you can set up HTTP monitoring of some location, and all the plug‑ins can depend on that, so some data centres might not be visible at a moment when they are under maintenance; the geoIP plug‑in can work with this and, for example, temporarily hide some locations, or use other locations for failover, or whatever. Anyway, they use this concept of maps: the maps define data centre names and map them to physical locations. For example, here we have two data centres, one for Europe and a second one for the US, and then some failover location. We can use the geographic database to map the addresses from the European region to two of the data centres and the North American ones to the other two, and then there is a map of resources which maps the individual addresses to the data centres. They use the regular zone file format, and in it they have a special type of record which just starts the geoIP translation, so it synthesizes the record based on this mapping. This can also be based automatically on the physical location of the servers, from the address of the server and the address of the client, without...

So, this is how we did it in Knot DNS. We have a prototype; it's in the geoip module branch in our repository. We support EDNS Client Subnet and DNSSEC the same way PowerDNS does, though we call it on‑line signing, not front signing. There is a difference in that we store the configuration directly in the zone file, which makes it easy to transfer to other servers. At the moment we don't have the implementation in our zone parser, so the configuration is encoded in binary. Basically, this means that if you ask for some name which doesn't exist and there is this special type of record, we start processing the geoIP part: there is a string which encodes the pattern, and in this case, if I ask for www under the origin, I get a translation to www under the country name; if the country name is not found in the zone file, then there is a failover to "en". So this is how it works in Knot DNS. I am probably running out of time, but I just want to mention a few mechanisms for actually doing the IP address lookup ‑‑ getting the geographic location from an IP address. There are a bunch of libraries for that. Probably the most popular one is LibGeoIP, which is GPL‑licensed, but it's really old; it's terrible. The new one is libmaxminddb, which has an even less restrictive licence, which is great, and it has a really nice API and is easy to use. If you want to use this library, for the database there are again a lot of them; some of them are free, and for some you have to pay if you are using them for a commercial purpose. Probably the easiest or the most popular one is the GeoLite database, which is licensed under Creative Commons with attribution.

And yeah, this is the last slide, where I want to show how easy it is to look up a location based on an address using this MaxMind DB. You open the database, which is just one line of code; then, if you have the address in some address structure ‑‑ which you probably have if you work with sockets ‑‑ you can directly look up the address, and you get an entry into the database which contains some structured data. For one address you get information about the country, the region, the continent or the city, and you get the information in a lot of locales. This example shows how to retrieve the country code from the looked‑up entry and print it out.
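
(The lookup described on the slide looks roughly like this with libmaxminddb; the database path and example address are placeholders, and the actual slide may differ in detail. MMDB_lookup_sockaddr() does the same thing directly from a struct sockaddr you already have from the socket layer:)

    #include <maxminddb.h>
    #include <stdio.h>

    int main(void)
    {
        MMDB_s mmdb;
        if (MMDB_open("GeoLite2-Country.mmdb", MMDB_MODE_MMAP, &mmdb)
                != MMDB_SUCCESS)
            return 1;

        int gai_error, mmdb_error;
        MMDB_lookup_result_s res =
            MMDB_lookup_string(&mmdb, "192.0.2.1", &gai_error, &mmdb_error);

        if (!gai_error && mmdb_error == MMDB_SUCCESS && res.found_entry) {
            MMDB_entry_data_s data;
            /* Walk the structured entry: country -> iso_code. */
            if (MMDB_get_value(&res.entry, &data,
                               "country", "iso_code", NULL) == MMDB_SUCCESS
                    && data.has_data
                    && data.type == MMDB_DATA_TYPE_UTF8_STRING)
                printf("%.*s\n", (int)data.data_size, data.utf8_string);
        }
        MMDB_close(&mmdb);
        return 0;
    }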

So, thank you for your attention. And hopefully you learned something new.

PETER KOCH: Thank you.

ANAND BUDDHDEV: Hi, this is Anand from the RIPE NCC. I have a comment via IRC from Peter van Dijk of PowerDNS, and he says that the PowerDNS geoip backend most definitely supports EDNS Client Subnet. However, he also notes that the documentation could do with some improvement.

PETER KOCH: We thank him for that commitment, I guess. Any more questions?

JAN VCELÁK: The implementation is quite new. There was another, older backend, which I think was also called geoip and was not working as well as this one; in the latest version I found only the geoip backend ‑‑ I think the old plug‑in was deprecated ‑‑ but this one works really well.

AUDIENCE SPEAKER: Vicky from ISC. I have another comment. As of the 9.10 version of BIND we have geoIP support; it originally came from a patch from Ken Brownfill, but it's been integrated, and it uses the ACL feature in BIND. We also have an implementation of EDNS Client Subnet on the authoritative side that we did last fall; it originated with a patch, but it isn't released yet ‑‑ that will be in the 9.11 version ‑‑ but for somebody who is interested in using it, the code is already out in our open git repository.

JAN VCELÁK: Thank you, I didn't know that. I missed that.

PETER KOCH: More questions? I have one tiny question, then: you mentioned for one implementation that it was v4 only. What about the rest, and what about the databases you mentioned?

JAN VCELÁK: Yeah, wait a second. OK, the MaxMind company, which now leads the development of libmaxminddb and LibGeoIP, is also doing this GeoLite database, and they released its content in the old GeoIP database format, which is two separate files, IPv4 and IPv6. The new format supported by libmaxminddb contains the whole map in one space; I think they are using the IPv4‑to‑IPv6 space mapping, so it's in one tree. They also release the data in CSV. As for the other libraries, I am fairly sure they are just IPv4.

PETER KOCH: Thanks for that clarification. So nobody ‑‑ I see nobody rushing to the microphone.

PETER KOCH: This brings us to the final presentation today, before we go to the administrative issues again. Jaap will talk about the KSK roll‑over design team, and I guess we have at least one other member of the team in the room; is that correct? That is Geoff in the back. OK.

JAAP AKKERHUIS: But to set the record straight, this is not really a presentation from Akkerhuis exactly ‑‑ I am actually here channelling Roy Arends, who was supposed to present this; he is probably sitting at the back, seeing how I handle his slides. This is a collective thing. "Next slide", it says.

As people know, we are preparing to roll the root zone KSK, so this is how things are arranged at ICANN and IANA: ICANN does the management of the root zone KSK as part of fulfilling the IANA contract from the NTIA, together with the two other root zone management partners, which are Verisign, the root zone maintainer, and the NTIA itself. As everybody knows and has heard yesterday, this is all subject to change with the IANA transition, but we won't go into that here; this is just how things are at the moment. And the root zone KSK is the DNSSEC trust anchor: if you want to trust DNSSEC starting from the IANA root, and don't want to configure your own trust, this is the big daddy.

It was explained earlier how the root zone KSK roll‑over work is organised with the design team: a team of seven volunteers who, along with ICANN, NTIA and Verisign, are looking at the issues to be dealt with for the roll‑over. One of the things really central to that discussion is RFC 5011, the automated rolling of trust anchors, so this talk is more or less only about 5011. The external volunteers from the community are Joe, John Dickinson, Geoff, Ondrej, Paul, Yoshida and me; I was the first on the list, so I was picked to channel.

Well, what are the stages of the plans? Not a lot different from what you saw at RIPE 70: the plans are not finalised. A set of actions is being analysed, and, as people know, there was a public comment period on the IANA/ICANN website where people could send in comments; some comments have come in but have not really been worked through yet. It also means that consensus hasn't been reached at all. Furthermore, what this is going to deliver is a real plan. But one thing has become very clear: RFC 5011 will play a big role, and that is why we zoom in on it for the time being. So: what actually is it, how do we deal with it, what is the philosophy ‑‑ or, as some call it, the spirit of the protocol ‑‑ behind it, and what do you do when you don't follow 5011? And also a bit about what ICANN might be planning to do.

RFC 5011 is the widely quoted "Automated Updates of DNS Security (DNSSEC) Trust Anchors". It's an old RFC, promoted to full standard in January 2011, and for those people who don't know how to look up RFCs, here is the URL, according to all the IETF conventions.

And what is it? This is what the abstract says: the document describes a means for the automated, authenticated and authorised updating of DNSSEC trust anchors. In other words, starting from the anchors you really trust, other anchors may be added and placed in the hierarchy, and you may be able to replace the existing anchors.

So that is basically what the document does ‑‑ and there are actually quite some things which are only loosely defined in here. It tells you how to add a trust anchor: basically, add a new key, signed with all available KSKs if you have more than one, and only after you have seen the new key for at least 30 days do you assume it's trusted. This is known as the hold‑down time: for at least 30 days after you first see it, you are not supposed to trust the new key. The reason for it is documented in the RFC; it is to prevent certain kinds of attacks, although it's not 100% proof. And if the key disappears within this hold‑down time, you forget that it was ever there ‑‑ it's not there, it never happened. That is the problem some people had when they were playing with roll‑overs: they wanted to roll every 14 days and noticed it didn't work. Well, that is because of the 30 days in the implementations.

Anyway, back to the main subject. Once a key is trusted, it stays trusted until it is revoked. Note the small detail: if it disappears for one reason or another, it is still trusted; you cannot use it, because you cannot verify the chain with it, but it's still there. So the trust is there, but the lights are on and nobody is home.

The basic idea behind 5011 is that if you have a trusted situation, you can change the trust anchor using the old anchor and go forward with the new one. And during the hold‑down time, people are supposed to notice when someone is trying to fake things, so the operator can be warned that something fishy is going on ‑‑ which is why you shouldn't trust the new key during those first 30 days.

And ‑‑ this is an interesting twist if you read it carefully ‑‑ 5011 states what the various states of a key are: whether it's trusted or in the hold‑down period, whether it's revoked. But it doesn't really describe a complete roll‑over protocol; the states are defined in detail, but exactly how you do the roll‑over it doesn't really say. You can, however, use the states to do the roll‑overs, and there are quite some examples of the states a key can be in and how they can be used.
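
(The states the RFC defines can be summarised in a few lines; this enum is a paraphrase of section 4 of RFC 5011, not code from any resolver:)

    /* Trust anchor states from RFC 5011, with the transitions the talk describes. */
    enum rfc5011_state {
        TA_START,    /* key not (yet) seen in the DNSKEY RRset                 */
        TA_ADDPEND,  /* seen, validly signed; the 30-day add hold-down runs    */
        TA_VALID,    /* hold-down expired: the key is now a trust anchor       */
        TA_MISSING,  /* gone from the RRset, but still trusted; may come back  */
        TA_REVOKED,  /* seen self-signed with the REVOKE bit: never trust again */
        TA_REMOVED   /* forgotten after the remove hold-down                   */
    };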

Well, what does support RFC 5011? There are quite some caching resolvers which have implemented it, and this has been tested by the root roll‑over committee ‑‑ that is on the resolver side. So, for people using DNSSEC: BIND does it, Unbound does it, the Microsoft name server does it, Nominum, and so on. It is interesting how you test these things, because the 30‑day hold‑down is actually a bottleneck: if you want to test a roll‑over, you need a buffer of at least a couple of months. There are knobs in BIND and Unbound to forget about the roll‑over hold‑down ‑‑ at least to be able to specify it ‑‑ but this is really just a debugging aid. What we realised is that the test setup is also able to speed up time, and that is actually a better way of testing these things, because you don't have to touch code and can use the server under test as a black box. On the producing side, well, that has been done ‑‑ interestingly, as Anand already talked about, on the RIPE side ‑‑ and there are no reports, at least not a lot of reports, of disaster. Yes, some dropped off when they did something wrong, but those were mostly human errors, and most of the time that is what you see: people don't realise what happens if they skip some states of the roll‑over, or make a mistake, or mistype the trust anchor, or forget some part of the chain when the DS record changes. Mostly human errors; not really that mature, I am afraid to say. That is on the producing side, and if ICANN/IANA is going to do it, they might be careful enough not to make a disaster out of it.

And well, one concern is how you actually manage the whole 5011 thing. 5011 is supposed to be fully automatic, which is, if you think about it, quite a change from how things work now: the configuration of a recursive resolver is completely done by the operator. When a change is made now ‑‑ say a root server changes its address ‑‑ half of the world has to change ‑‑ well, not has to, but a few change the hints file to reflect it, or wait for the next software update in which the provider has put this in by default. So this all depends on how the operator chooses to follow the RFC. If, for instance, the configuration is on a read‑only file system, you cannot really write down the new trust anchor ‑‑ and think about the Internet of things. Interesting problem. And all the other CPEs ‑‑ devices which do something with DNS and cannot be updated.

Now, that is basically the configuration of the resolver ‑‑ that is what is meant by the current model of operating a resolver. People do install‑and‑forget, and you really, really do want to monitor the operations. Another thing is: if you are the producer of the new KSK, there is no way to tell whether or not it was a success. Did everybody follow the new trust anchor, or will the Internet suddenly stop working? That is an omission in 5011: it's not really designed to be monitored remotely, so as a producer you are completely in the dark about what you are doing. So there are some ‑‑ late, some might say ‑‑ attempts to come up with ideas for remote verification, so you can see whether the roll has been a success: there is a draft out which defines an EDNS key tag option, and something about trust anchor management. It's always good to review these things to see whether they make any sense; that is why the links are there ‑‑ please send in your comments.

But the problem is, some things don't go fast in the IETF, and ‑‑ the KSK roll is already somewhat late ‑‑ it's not really sure whether people are willing to wait until one of these things has been implemented and has really been worked on and tested and so on. That might be a couple of years down the road, so the first KSK roll‑over will probably happen without any of these.

So, if you don't do 5011 ‑‑ if you don't follow the procedure which is there ‑‑ what do you do? Of course, then you really have to do everything by hand. You have to figure out what the new trust anchor is without relying on the automatic tools, which means you have to pull it from somewhere, and you have to do it in a safe way so you are not pulling the wrong one, or a fake one, or whatever. And ‑‑ that is the producer part again ‑‑ someone has to see whether or not people have actually picked up the right one. There are some tools which implement pieces of this outside the 5011 protocol suite but still rely on the intended automation. Another thing: since it's all based on the 5011 states, the operator is still supposed to follow this timeline by hand, using various tools, following the states by hand; and that really depends on the operator doing all the configuration and all these things in‑house. In other words, if you are not fully 5011, you are screwed. There are many, many things you need to do by hand. And, as I already mentioned, some of the devices, like in the Internet of things, might not be able to self‑configure and follow 5011 at all; if those things are not updated, they are guaranteed to fall off the net. Will all this work? Yes, for a certain definition of "work". The operator really needs to follow the states, either by hand or through the automatic protocol as defined in 5011, because when a state changes there is an action you should take, and 5011 lists what to do in which state, going back and forth.

So, again, other checks are really necessary. 5011 specifies how often you should poll whether the state of the KSK at the parent has changed ‑‑ it gives a minimum and a maximum polling configuration ‑‑ but if you rely on an implementation, this is done for you. The other thing is that you have to really look at the hold‑down times, because you should not rely on a key too early, and you should also look at the removal times, because one of the states is that a key can be missing. What you also could do is bring a missing key back again and then sign the whole thing only with the new one, making sure the old key is never touched again; there is a period for how to do that as well. And ‑‑ I am kind of repeating myself here ‑‑ when a trust anchor is missing it is not necessarily gone; it can come back, and that is actually one of the ideas the design team is playing with: to use this feature of 5011. That is what this picture basically says: introduce the new key and wait out the 5011 hold‑down ‑‑ the band is the 30‑day hold‑down time ‑‑ then the new key is valid, and then we might actually remove the old one and see whether things still work; you can always bring it back when people don't pick up the new one, things like that. And in the end you revoke the old trusted key and completely remove it, ban it from this planet. That is the special part: trust anchors can go missing for a while. There is also a ZSK roll‑over action in there, trying to minimise the sizes of the packets ‑‑ to see whether or not we can keep the packet size small. And what can operators do to help, more than just reread 5011 and figure out what they have to do? There is a document in the works that describes this sort of thing ‑‑ a draft; we are already at number 12 there ‑‑ on how to get trust anchors into your system outside of 5011. And there can also be a snapshot of the trust anchors, including the ones which are missing from the DNS, so the operator has a second source outside the 5011 protocol.

And well, if you don't trust 5011: go to other sources for trust anchors, that is basically what it says. In the end, trust is always what you believe, not what other people tell you to believe. So for the people who are worried that it is all owned by the evil ICANN and the US Government: you decide whether or not you trust anything. Well, a quick look into the future, as far as it exists:

As already mentioned, the plans are not final, but yes, the basis of the roll‑over will be 5011, so have another look at it. There will also, hopefully, be a big campaign to publicise that this is going to happen, that you have to watch it and the new keys, and that if you don't trust 5011 there are other ways you might be able to establish trust. And also, for the big operators, when things are going slowly, hopefully there will be an outreach programme, to minimise all the trouble tickets and phone calls ISPs get when things are not going as smoothly and automatically as they are supposed to.

And what ICANN really would like, for people who want to help, is to build a contact list of people who should be warned that this operation is happening. Another thing is trying to find out what operators actually trust ‑‑ where ICANN can publish these outside‑of‑5011 keys, and what sources they can use in the outreach programme. And yes, the other question is: when are people ready to roll the key? That is an interesting question. Well, there is some more information: this is the main list ‑‑ not really a mailing list, it is where announcements will be made about everything happening around DNSSEC at ICANN ‑‑ and there is also the hashtag for the key roll‑over, and at ICANN ‑‑ I guess that is Facebook ‑‑ things will be announced about the roll‑over as well. And yes, I am here at the end of the ‑‑

WARREN: Shocked that I am here. So, can you go back two slides? So I think a number of people are kind of grumpy about this whole process ‑‑ and I should mention I am one of them, and I am not grumpy at the design team, so no need for you to be defensive yet. It says what will happen. I should point out there has already been a public consultation, in 2013, and a bunch of documents were published ‑‑ SAC 63 was one of them; I think you were an author on that ‑‑ and from what we can see nothing really happened after that; it doesn't seem as if any of the recommendations have been followed. We have seen a number of presentations very similar to this one at operator groups ‑‑ this one I think was somewhat longer ‑‑ but it doesn't really address all of the other issues that were raised, like the communications plan: there are a lot more people who have deployed DNSSEC now than originally, and the communication plan says we will talk to technical people, but it doesn't really talk about all the other people. There is very little discussion of things like how actual breakage will be measured, what the metrics will be; there is something on the slides about "we should figure that out", but nothing has been properly discussed yet. Hopefully the next version will have it. There is a bunch of other concerns, like emergency key rolls, which are just being ignored from what we can see ‑‑ and I realise that isn't the design team's stuff, but every time I get a chance to come up to the mic I am going to point out all of those issues.

JAAP AKKERHUIS: I cannot channel the whole chain; this is more a question to ICANN. But I do know, if you have looked at the comments, that the whole outreach is one of the things that really is asked for. Not all the public comments have been addressed yet, but the idea ‑‑ and that is from the design team ‑‑ is that yes, there will be more of that. Some of the things that came in in the public comments have been discussed inside the design team, but never ‑‑ we probably should do that as well, to prevent other questions.

WARREN: I think what a lot of people are feeling is that this "5011, we have got a solution for this" keeps being discussed, but ICANN is being strangely silent on the rest.

ANAND BUDDHDEV: From the RIPE NCC. I have a comment from Matthijs via IRC. He says: if something goes wrong and you want to restore the outgoing root key, you had better do it before implementations have removed the missing key. And he has an additional note: Unbound's default for this is 366 days, which seems safe; it might be worth checking what other implementations do.
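
(In Unbound, that period is the keep‑missing option in unbound.conf; assuming the shipped defaults, the knob looks like this:)

    server:
        # How long a trust anchor that has gone missing from the DNSKEY
        # RRset is kept before being deleted.
        # 31622400 seconds = 366 days (the shipped default).
        keep-missing: 31622400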

JAAP AKKERHUIS: May I speak first to the fact that, if it's not in the zone itself, the implementation might still have it lying on disc and still use it ‑‑ the default is a year or so. So yes, if it goes missing, you can still use it and validate things. I actually don't know what 5011 says about this period.

SHANE KERR: The future is hard to predict. I may have missed it so I apologise. Is there an estimate for how long the roll itself is going to take?

JAAP AKKERHUIS: Well, there are a couple of things making it kind of difficult, because you don't want to change the current key signing ceremonies, so you have to work within those periods, and that makes it kind of long. I think it's about six to nine months in total, including the removal. There is a schedule, a timeline, somewhere, but actually that is part of the plan which needs to be made after this report has been published.

SHANE KERR: Sure, sure. I am thinking in terms of the many software packages that have the key included in them. I guess it's going to have to be done as a security patch for a lot of these, which is weird because nothing has been broken, but it's the only way to get changes into enterprise Linuxes and things like that. OK.

JAAP AKKERHUIS: Provided they cannot be configured using 5011.

SHANE KERR: You don't install an enterprise Linux because you want to reconfigure.

AUDIENCE SPEAKER: This is a comment, and as a member of the design team I must admit I am appalled by the DNSSEC standards that came out of the IETF. The description of the way in which you maintain keys, the way you have a relationship with relying parties, the way you can change the key material at the root ‑‑ it is nonsensical, if it's there at all. It simply said: oh, it's just the DS record at the parent, and that is all the standards ever said when the IANA was pushed into putting up a key at the root. Nothing else. What about an emergency roll? If you roll that key in an emergency, every single relying party has the wrong key and there is no way of fixing that. Even this planned roll is a roll into disaster: there is no emergency procedure that will work. Right now, we have no idea how many resolvers that ask authoritative name servers are using the automated old‑signs‑new mechanism of 5011. We have no idea. There is no emergency key in preparation; all of this stuff is sitting there in a vacuum, and you are watching your mail and this is great. The only way we do security right now on the DNS is with this CA stuff, and every single time a CA gets busted and prints a new fake certificate for Google, you should look at your online banking and quake in fear; the current security framework for the Internet is bullshit. The only known way we can fix this is to tie in a better trust system, and this really impinges upon the use of DANE, and DANE relies on DNSSEC ‑‑ so we need this shit like yesterday. And the problem is that most of the standards are just missing, and most of the attention from the industry is missing; you are reading your e‑mail. That is fine, but when we go and break stuff next year ‑‑ at least we will break it, I won't ‑‑ you are going to be in a lot of deep water, and folk are going to be shouting at other people: look at yourselves, you are the problem. If you don't take an interest in it now and figure out that the signalling is wrong, the standards are actually not right, and there is a whole bunch of missing stuff out there, while we are supposed to design a process that minimises the damage ‑‑ this is mission impossible. Thanks.

PETER KOCH: We have some issues. I want to clarify one thing, or have it clarified by you, I guess. Somebody asked how long this is going to take. There is an interesting thing happening, maybe in September next year, regarding the oversight on layers 9 and 10. Does that time frame in any way, shape or form influence the work of the design team and its timeline?

JAAP AKKERHUIS: I am not aware of anything about that. One of the things is that it should not hamper the current operations in any way; this is what SSAC has said, and what other people have said as well. So even if the transition happens in the middle of this, things should be solved in some way.

PETER KOCH: Thank you. Which brings us ‑‑ thanks, Jaap.
(Applause)

JAAP AKKERHUIS: This is also part of the path.

PETER KOCH: And so with that ‑‑ you don't need to sit down, stay here, because we are doing something new. The final thing for me to do, almost: you remember that you accepted a procedure for Working Group Chair election ‑‑ appointment, I should say ‑‑ and that was started by Jim on the mailing list, with some candidates coming up. I am stepping down, or making space, as the first of our three, which means that we need to find my replacement; and because the rules don't allow me to be involved any more, I'll hand over to Jim.

JIM REID: Thank you very much. Well, first ‑‑
(Applause)

First things first, I would like to say thanks to Peter for all his work for the DNS Working Group over many, many years. It is greatly appreciated; I will be more than happy to buy him a beer afterwards, and I hope you all do the same thing, too.
(Applause)

Now, we don't have votes; we try to work with bottom‑up processes as far as we can. We had two candidates nominated ‑‑ if you can call it a nomination process ‑‑ to take on the co‑chair responsibilities: Andre, from CESNET, and Dave Knight, from Dyn. And although the number of expressions of support was not as great as we would have liked to see, by far the overwhelming consensus view, I think, was that Dave Knight would be the Working Group's choice as the replacement co‑chair. I would like Dave to come forward ‑‑ I think most people already know him ‑‑ and invite him to join the cabal of three.

DAVE KNIGHT: Hello.

JIM REID: With that, we are now done. I would like to thank you all for your attendance today, and all the folk from the NCC at the back handling the video and everything, the scribe taking the minutes, and the stenographer, who has done a fantastic job as always dealing with my mangled English. And I hope to see you all in Copenhagen next year.
(Applause)

Sorry, any other business?

AUDIENCE SPEAKER: DNS.BT. So, we are in the process of joining DNSMON, and we were told by the RIPE NCC that we could only get in after the new policy was approved. We would like to get a feeling for when that would be.

JIM REID: Well, we have got two documents: one to do with issues around secondary ccTLD services and another for DNSMON. So there is a document ‑‑ I am not sure if it's been posted to the list yet; it has ‑‑ and we are waiting to see if there are any comments on it. I think we are now in a, somewhat arbitrary, two‑week comment period; this has gone on for long enough and has to reach some definite conclusion. We are not treating these things as policies as such; we are just going to publish them as lightweight documents. They are not policies that need to go through the PDP, because we don't think this is a policy matter in the same way Address Policy is. We will leave this open for a couple of weeks for comments, but if no one says anything by pretty much early December, we will push this through, and it will be in place by the turn of the year at the very latest. OK? Are there any other items of open business?

PETER KOCH: Of course I need the final word. After all this kind encouragement and applause and so on and so forth, I would like to thank you all, and all your predecessors, for the opportunity to serve the Working Group. I will threaten you: I will stay around somewhere. And I would also kindly ask everybody: if you follow Jim's recommendation to buy me a beer, please don't do it all at this meeting. Anyway, see you next time.
(Applause)