MAT Working Group session
18 November 2015, 2 p.m.

CHAIR: Welcome to the MAT Working Group. Well, I guess I am Christian Kaufmann, the same guy as always. We actually hired a second co‑chair ‑‑ well, you elected one, Nina ‑‑ but Nina couldn't make it this time so it's me again. I was wise enough to shanghai Gert in case I need support, so thanks, Gert.

We as usual have a scribe, a Jabber monitor and a stenographer, who I believe are all in the same row. One short point on microphone etiquette: we have the usual couple of presentations, then time for questions. So, when you go to the microphone, because we are recording this, please state your name and affiliation so that we have an idea who you are.

Next point, the minutes from RIPE 70. They were published quite a couple of months ago. Are there any objections to approving them? Are we all fine with the minutes? No objections, okay, then the minutes from RIPE 70 are approved. Thanks a lot for that.

That brings us to the agenda.

This time we did a little bit of an experiment, something which we haven't done before. We will talk about topic D, the anycast work, which was actually already presented in the Plenary on Tuesday morning, so, yesterday.

So, the difference today is ‑‑ and this is the first time we have tried it, at least in the MAT Working Group ‑‑ that the generic concept, what it is all about, was in the Plenary, while the more technical part, how it is done, and more details are presented today in the Working Group. So, it would be nice if, when you fill in the survey or rate the presentations, you could give us feedback on whether that is a good idea. There should not be much overlap, so it is not repeating the session you have probably already heard, but explains it in more detail. It would be nice to see whether you like that, or whether you think: we already had that topic, so don't explain it in detail, don't repeat anything which was already in the Plenary; or you could say: this is brilliant, because I really liked the topic and now I can go more in depth and see what it is all about.

So please give us feedback on that part.

What was not an experiment and not originally planned is that Dario has to talk twice. That just happened because the impact of Carrier‑Grade NAT was originally going to be presented by Marco. Marco couldn't do it, so thanks to Dario for jumping in.

To give him a break we have basically split the two slots so he doesn't have to talk for 40 minutes straight. After that we have RIPE Atlas news from Vesna, and in the last five minutes we have the RIPE Atlas hackathon: Sacha, who is hopefully in the room, is the winner of the RIPE Atlas hackathon and he will present what it was all about.

Without further ado, I give the microphone to Dario for his first presentation.

DARIO ROSSI: Hello, thanks for the introduction. So that I can keep to the technical content, the background is not so important to cover here, because you are well aware of it. This morning there was a session about policy allocation, so we all know that there is an Internet address shortage, and the shortage will be increasing. The impact of the shortage is that there is now a cost associated with addresses, and ISPs need to bear this cost. The cost is something ISPs want to reduce, so what are the possibilities to reduce it?

On the other hand there is IPv6, and again there were a lot of talks yesterday in the Plenary about its current adoption, and there was one talk this morning about IPv6 in Romania, which, according to the slides presented this morning, is roughly the same as the average reported here: about 6%. 6% is not going to solve the problem. So, what is going to happen is that at some point the shortage of resources will make IPv6 more economically convenient, and there will probably be a transition to that technology. This is what should happen in a market, in economic terms, aside from the technical side, which is what you discuss more here.

So, what we can do in the meanwhile, while we wait for v6, is to deploy technologies such as Carrier‑Grade NAT, which provide a temporary patch. What is a NAT? Everybody knows. At home you have a number of devices, and these devices are connected to a home NAT, which is basically a device masquerading the private addresses of these devices while presenting a single public address to the Internet. This gives connectivity without having one address per device. Now, if you scale this up to a number of households, you basically get the idea of a Carrier‑Grade NAT, deployed as NAT444, and this is the topic we're discussing.

For sure, Carrier‑Grade NAT has some implications. It breaks end‑to‑end connectivity. It may introduce reachability problems for NATted devices, which cannot be reached from the outside until they open a connection. If you want that, you need to enable port forwarding; this is possible on your home NAT, but if you don't manage the Carrier‑Grade NAT yourself, then this is something you would need to arrange in some other way.

You also need to keep stateful information in the NAT, which can be exhausted depending on the connection management policies. Then there are also impacts on lawful interception. But our focus in this work is on the performance implications that may arise in practice.

The kind of questions we want to answer are: with measurement, do we see that Carrier‑Grade NAT has an impact on user experience? The other side of the question: is there any benefit in having a public IP for users, or are there any disadvantages? And a side question, for an ISP that is interested in saving some money: how many addresses can we expect to save by using Carrier‑Grade NAT?

On to the methodology. We are doing large‑scale passive measurement in a real ISP. The results will pertain to a specific ISP with a specific Carrier‑Grade NAT deployment on some DSL lines, so they do not carry over to the whole community represented here; there are many more ISPs than the one we monitored. So what we are going to present is more a methodology than the actual results; the actual results are confined to what we were looking at.

In this case, we are leveraging a number of statistical tools. The point is that we want to see broadly how these things perform. So we are going to compare two populations, one that has public IPs and one that has private IPs, and we will do it in a statistically relevant way, as simply as possible, to distil whether we see any differences with Carrier‑Grade NAT or not.

And given that HTTP is the thin waist of the Internet, because most of the protocols that users run go over an HTTP connection, we will be focusing on web performance.

Then in the paper we also have other kinds of protocols but here in this talk I'm going to mostly focus on that.

So our scenario is basically the following: we have an ISP POP. The POP hosts both a population of private households and a population of public ones. These connect to the Internet: in the case of a public address they traverse a router, in the case of a Carrier‑Grade NAT they traverse the NAT instead. What we do is place a passive probe that is measuring traffic and analysing it. We carried out the study over one month of real data, with 17,000 households almost equally split between the two populations. Just to give you an idea of the numbers: we have 1.7 billion TCP flows, around one third of which are HTTP connections.

So what are we looking at in this context? We define a number of statistics, some of which are very simple. For a connection opened past our passive probe, we can measure, for instance, things like the round‑trip time and the time to establish the connection, i.e. the three‑way handshake time. We can also define some metrics which are closer to the user: whenever the user opens a new connection, he does so because he clicks on links and wants to receive some content. So we can measure the time to first byte, which is the lapse between the request, the HTTP GET, and the first byte of the response, i.e. the first data packet that comes back. And we can also define other metrics, such as the throughput: in case the object is large, you are interested in getting that object fast, not just getting the first byte of it in an interactive fashion.
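To make these definitions concrete, here is a minimal sketch of how the per‑connection metrics just described could be computed from packet timestamps seen at a passive probe. The function and field names are hypothetical illustrations, not those of the actual probe used in the study.

```python
def connection_metrics(t_syn, t_synack, t_get, t_first_data,
                       t_last_data, payload_bytes):
    """Per-connection metrics from packet timestamps (seconds) at a probe.

    t_syn / t_synack: client SYN and server SYN/ACK (three-way handshake).
    t_get: client HTTP GET; t_first_data / t_last_data: response packets.
    """
    handshake = t_synack - t_syn   # handshake delay seen at the probe
    ttfb = t_first_data - t_get    # time to first byte of the response
    transfer = t_last_data - t_first_data
    # Throughput is only meaningful for objects transferred over time.
    throughput = payload_bytes / transfer if transfer > 0 else None
    return {"handshake_s": handshake, "ttfb_s": ttfb,
            "throughput_Bps": throughput}

# Example: handshake 30 ms, TTFB 48 ms, 500 kB over 1 s.
m = connection_metrics(0.000, 0.030, 0.032, 0.080, 1.080, 500_000)
```

Distributions of these per‑connection values, computed separately for the private and public populations, are what the statistical comparison in the talk operates on.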

So we define a number of metrics, and instead of looking at the very fine grain, per user, at what the statistics are, we want to construct probability distributions for each of these metrics, conditioned on whether the user had a private or a public address.

So, now we have a couple of distributions and we want to compare them somehow. This is one statistical index that we can use to assess the differences; it is inspired by the Kullback‑Leibler divergence: given a distribution P, what is the amount of information needed in order to pass from P to a representation Q, where P and Q are the probability distributions for private and public addresses? So here we are weighing how much difference there is between the performance we are seeing in the two populations, irrespective of the actual value of, for instance, the throughput.

This is one of several possible metrics; actually there are a bunch of them. For instance, this is a generalisation of the divergence. There are a number of statistical metrics we could leverage: there is total variation, there is Kullback‑Leibler, there are a number of them. There are interdependencies between those metrics, so anything I say later in this talk I am going to particularise for this metric, which we selected for a number of reasons I won't go into the details of, but I'm happy to answer if you have a question. Just to say that this is one metric we can use in order to make the difference appear, and the way in which we make this difference appear I'm going to explain with an example.

And the example is going to be pretty simple. Imagine you have these distributions; they are drawn from a synthetic, controlled probability distribution function, a negative exponential distribution. We have the red one with rate parameter 1, then we change the parameter from 1 to 8, for instance. Now, if we want to compare the red curve and one of the other curves, one thing we can do is compute the statistical index, and we will have a single number. This number is going to be the Jensen‑Shannon divergence, which varies between 0 and log 2, so we will have a single exact number that tells us the difference.

Now, how different does this number need to be in order to tell us that the difference between the two distributions is statistically significant, so it's worth looking at? Here you can see there are a couple of distributions which are very close, and these are those that are in this region. So basically we define simple thresholds: below the first there is no important difference with respect to the reference; then there is some difference that would make the metric worth investigating further; and then there is a largely different region in which the probability distribution functions are very different, very far apart.
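As an illustration of the index just described, here is a small self‑contained sketch of the Jensen‑Shannon divergence (the symmetrised, bounded relative of the Kullback‑Leibler divergence), applied to binned negative‑exponential distributions as in the example above. The bin edges and rate parameters are made up for illustration.

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (natural log, so bounded by log 2)
    between two discrete distributions given as probability vectors."""
    def kl(a, b):
        return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def binned_exponential(lam, edges):
    """Probability mass of an Exp(lam) distribution per bin, plus tail."""
    cdf = lambda x: 1.0 - math.exp(-lam * x)
    masses = [cdf(b) - cdf(a) for a, b in zip(edges, edges[1:])]
    masses.append(1.0 - cdf(edges[-1]))  # everything beyond the last edge
    return masses

edges = [i * 0.1 for i in range(100)]
p = binned_exponential(1.0, edges)
identical = js_divergence(p, binned_exponential(1.0, edges))  # no difference
distinct = js_divergence(p, binned_exponential(8.0, edges))   # clearly apart
```

A fixed threshold on this single number then plays the role of the grey‑zone thresholds described above: negligible below it, worth investigating above it.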

Of course, we can set the thresholds in some specific way. If we happen to know, for instance, that going from here to here we have a change in the mean opinion score, then we can set the thresholds specifically to capture that kind of thing. This mechanism is quite robust, and it is used in order to avoid having just two sets: the world is not black and white, you also have an intermediate grey, and that is precisely this kind of region.

So, we can start, for instance, by looking at what happens with the three‑way handshake. We have private addresses, public addresses, we take all the remote servers, and we can also condition on specific servers: for instance, Apple users downloading a new application, so we can easily check the difference between using a private address and a public address when downloading. And we can do the same for Google searches. The former is interesting for the volume, the latter for the interactivity. These are the distributions, and here are the indexes we were talking about. These are fairly close, so there are no statistical differences; so, instead of looking at this kind of plot in detail, we could rely on a very coarse indication of whether everything is going correctly or not.

And we can do that for different things like download throughput, again getting some results, where we start seeing some discrepancies arising, but of minor importance. And then, okay, where does the only expected difference arise? When you count the number of hops, for instance, which is a metric you can easily look at: you count from the TTL field. If you traverse a Carrier‑Grade NAT and take a different path, you might expect a different number of intermediate devices, which is what you see here.

So when you compare a public and a private population, what you see is that in the private case you go through a number of additional hops, and this holds whatever servers you are looking at in this case. Basically, these are expected, noteworthy differences, so this is an indication that our mechanism is working; and then, in the other cases, the differences we were seeing were not so large.
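The hop‑count metric mentioned above is typically derived from the IP TTL: most operating systems start from a small set of initial TTL values, so the distance from the observed TTL to the nearest initial value above it estimates the hops traversed. A minimal sketch; the initial‑TTL set is the usual convention, an assumption here rather than something taken from the talk.

```python
# Common initial TTL values used by most operating systems.
COMMON_INITIAL_TTLS = (64, 128, 255)

def hops_from_ttl(observed_ttl):
    """Estimate the number of hops a packet traversed before the probe."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

# A CGN path with extra internal devices shows a higher hop count:
public_hops = hops_from_ttl(61)   # 64 - 61 = 3
private_hops = hops_from_ttl(59)  # 64 - 59 = 5
```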

So, to give all the results at a glance: with the thresholds that we defined, these are the metrics across all the servers, and for a couple of specific important services used by a large population. What we basically see is that, with the exception of the number of hops, where we do see some statistically significant differences, for all the other services the statistical differences are very limited. So, as an answer to the first question, does Carrier‑Grade NAT affect the user experience: well, conditioned on the fact that we are observing a single operator, a single point of presence, and the metrics we were observing, we are not seeing any significant impact, so the users should be happy irrespective of whether they have a public or a private address. So then: is there any benefit associated with having a private IP, or is there any benefit in having a public address?

For sure, with a public address there is an obvious benefit, which is that if you want to run a service, you can, and it is going to be much easier. So, what we did, to find out how many customers actually run a service: we are not looking for peer‑to‑peer traffic, we are looking for protocols such as HTTP, IMAP, POP and SMTP, possibly over TLS. And here what you see, for the month we are considering, is the cumulative number of servers we discover, hour by hour. You see that the number of servers in that POP is fairly low, and while it is still slowly increasing after one month, it remains low. So in that population about 8,000 users had a public address, but basically only 60 of them were using it to run a service. That is sort of saying: well, okay, there might be a case for using a Carrier‑Grade NAT, because apart from these 60 guys running services, for the others we don't notice any performance implication.

Another difference is that if you have a public IP address, then there may also be not‑so‑nice guys who try to contact you with unsolicited traffic. Of course, this is much easier if you have a public address than if you have a private one, because in the latter case the Carrier‑Grade NAT will filter this traffic out for you. If you have a private address, there may still be some bad guys contacting you, but they will be hosts within the same ISP, which can reach you through the private address. So, what we do in this context, just to have a quick evaluation of private versus public addresses as far as unsolicited traffic is concerned, is to compile a list of remote attackers. We define an attacker simply as any host that makes more than 50 unsuccessful connection attempts to any of the addresses in the POP we are observing; that is our definition of an attacker, and then we count how many of these hosts are contacting people inside the network.
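The attacker definition above can be expressed in a few lines. This is a hedged sketch, not the actual analysis code: the flow‑log format is invented, and only the more‑than‑50‑failed‑attempts rule comes from the talk.

```python
from collections import Counter

def find_attackers(flow_log, threshold=50):
    """flow_log: iterable of (src_ip, dst_ip, established) tuples.
    A remote source is flagged as an attacker when it makes more than
    `threshold` unsuccessful connection attempts into the POP."""
    failed = Counter(src for src, _dst, established in flow_log
                     if not established)
    return {src for src, n in failed.items() if n > threshold}

# One scanner probing 60 hosts unsuccessfully, one normal client:
log = ([("203.0.113.9", f"10.0.0.{i}", False) for i in range(60)]
       + [("198.51.100.2", "10.0.0.1", True)] * 3)
attackers = find_attackers(log)
```

Counting, per attacker, how many distinct victims are reached on each side of the NAT then yields the private‑versus‑public victim comparison shown on the slide.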

So, here are the top ten ports, I believe; we have a full collection, but this is just to show you something. For instance, these are private victims, these are public victims. You see that there is a non‑zero number of private victims, which means that there are some people inside the NAT doing scans, so they are also able to reach hosts behind the Carrier‑Grade NAT. But the most important takeaway is that there are orders of magnitude between the two: if you have a public address, you are exposed to connection attempts from everywhere in the world, while here you are only exposed to connection attempts that come from within the ISP network. So in this case we have seen: is there any benefit in having a private IP? Only a small percentage of users would need a public IP in order to run services, and for the rest of the people, with a public IP they will be orders of magnitude more likely to receive unsolicited traffic, which they may not want and do not need either, because they are not running a service.

So, just to quickly wrap up: okay, now we have shown that there is a case for Carrier‑Grade NAT. What is the expected benefit? What we can do is look at how many active connections there are, and here we assume an idle timer of 5 minutes, after which the NAT state expires. What you are looking at is the number of active users, those who generate at least one connection or send one packet during this five‑minute interval; we count the number of concurrent connections we see versus the customer base of 17,000 users. This is a couple of days out of the month we are looking at, and they all show the same behaviour, which is to say that the number of concurrently active users is far lower than the user base in the POP. If you just look at this number, a Carrier‑Grade NAT could reduce by roughly a factor of 2 the number of active IPs the ISP would need. But we can also do something like port address translation, in which case what we need to see is the number of concurrent connections on different ports, and I'm not sure if you can read the graph because I cannot see it very well from here. Here we have the 99th percentile, the 99.9th percentile and the maximum number of concurrent connections. The main takeaway is that there is some potential benefit. Of course, in the worst case there will be one guy using all 65K ports, but that is going to be very unlikely, and so in the typical case there is at least a factor of 10 waiting to be exploited.
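The savings estimate above boils down to counting distinct active subscribers per five‑minute window (the assumed NAT idle timeout) and comparing a high percentile of that count against the customer base. A sketch with invented data; the event format and helper names are assumptions.

```python
def active_per_window(events, window_s=300):
    """events: iterable of (timestamp_s, user_id) for observed activity.
    Returns {window_index: number of distinct active users}."""
    windows = {}
    for ts, user in events:
        windows.setdefault(int(ts // window_s), set()).add(user)
    return {w: len(users) for w, users in windows.items()}

def percentile(values, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(values)
    rank = min(len(ordered) - 1, round(pct / 100 * (len(ordered) - 1)))
    return ordered[rank]

# Users "a" and "b" active in the first window, "a" again, then "c":
events = [(0, "a"), (10, "b"), (299, "a"), (301, "a"), (650, "c")]
active = active_per_window(events)
peak = percentile(list(active.values()), 99)  # provision for this peak
```

The customer base divided by such a percentile gives the rough "factor of 2" address saving mentioned in the talk; counting (address, port) pairs instead of users gives the port‑address‑translation variant.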

And for TCP and UDP you can see a little bit of the same thing, but with more spikes, probably due to the way in which time‑outs are done in the NAT.

So, this brings us to the conclusion.

So, our initial goal was: Carrier‑Grade NAT is becoming a reality, and what we want is to assess the impact of CGN on the users' web browsing experience. If we deploy a CGN, it is to reduce cost, but without hurting performance at the same time; otherwise the users will go away and the ISP will lose income. We defined multiple performance metrics and compared them with some statistical indexes. And as I said, in the data set we had, we didn't see any noticeable harm; only negligible impacts were observed. We also saw that, implicitly, there is a positive side effect to not being reachable from the outside, which is cutting down the amount of intrusion attempts the user would see. And we saw that there can be some potential saving in terms of money. But, still, this is going to be something like a temporary patch until IPv6 takes off, and I am not able to forecast when that will happen. So, that may work for an ISP in the meanwhile.

With this I conclude, I am glad to take any questions, if there are any.

CHAIR: Thanks a lot, Dario. Any questions? Comments...

AUDIENCE SPEAKER: Alexander Lemmen. Just a comment: this year there was some research which hypothesises that IPv6 is faster than IPv4 because IPv4 gets bogged down by NAT performance. So, you just proved this paper wrong.

DARIO ROSSI: I didn't prove this paper wrong. We proved that, in our setting, with the Carrier‑Grade NAT equipment that this provider has deployed in its own facility, and under the metrics we considered, these users don't see any performance impact. I'm not sure about that other setting. If you pick a Carrier‑Grade NAT that drops packets and is a cheap appliance, my contribution would be the methodology to assess whether the crappy equipment is the major fault. Okay.

AUDIENCE SPEAKER: The crappy equipment will affect IPv6 and IPv4 alike; it doesn't matter which protocol you use.

DARIO ROSSI: In this setup, no, because ‑‑ let me go back, back, back. So, here, for instance, the two populations take two different paths, and only the NATted guys here, those with a private address, go through this box. So if this box is the culprit, then this methodology will reveal it, because you are conditioning on the performance of these guys versus the performance of those guys, the paths are independent, and if there is any statistically significant difference, then it must be rooted in this equipment.

AUDIENCE SPEAKER: Nice correction, thank you. But anyway, if NAT is done right ‑‑ of course it might add some delay, but it would be negligible if it's done right. That's what you proved. Yes, correct?



GERT DÖRING: On the Carrier‑Grade NAT. I think this is an interesting result and an interesting methodology; I am a bit surprised to see that it has no impact. On the other hand, that's not necessarily bad news. Carrier‑Grade NATs scale up to a certain amount of parallel flows and throughput, and if these guys are lucky and are only filling it to 90%, then I would expect it to not have that much impact. What I'm more interested in actually is ‑‑

CHAIR: Afterwards can we close the mikes then?

GERT DÖRING: It's not something you can answer offhand, but maybe something for further study: the non‑HTTP things. Because HTTP, I think, is a very easily handled protocol: it opens, transfers all the time, closes down. What I have heard from users in Germany sitting behind non‑voluntary Carrier‑Grade NATs is that they have massive issues with SIP, because the SIP REGISTER goes out to the SIP provider, and five seconds later the Carrier‑Grade NAT expires the session, so when the call comes in, the CGN has no idea who the SIP user is. So SIP works very erratically, because the sessions expire too fast. And this is actually a very difficult protocol for a NAT to handle, because it's UDP, you don't have the TCP FIN, and there might not be any activity for minutes to come. This is sort of the worst case, while HTTP is sort of the best case. Definitely something for further study and not to be answered right now. Thank you.

DARIO ROSSI: Thanks for the comment. Just a comment on this data set: the operator carries VoIP itself, so the users are not using much SIP, so we couldn't really use this data set for that, but it is definitely something worth studying. We didn't just look at HTTP, but also at peer‑to‑peer traffic including UDP, because BitTorrent is carried over UDP with LEDBAT. With QUIC something may change, so definitely this was more about the methodology and a study on some protocols, but I'm taking the point, which is very interesting, thanks.

AUDIENCE SPEAKER: Hi. Mang Judean from Sweden. Do you base your figures on this 0.6 percent that actually need a server at home?

DARIO ROSSI: What do you mean?

AUDIENCE SPEAKER: You scanned for HTTP, IMAP, SMTP? Most people can't use those at home to begin with.

DARIO ROSSI: I'm totally with you. This 0.6 here, this figure, is basically this number divided by the user base. The main point is the methodology; this was a sort of complementary study that really holds only for this user population. So, it's more of a comment; it's not the main point of what I was talking about.

AUDIENCE SPEAKER: Okay. I just figured that ‑‑ okay...

AUDIENCE SPEAKER: I would like to add a plus one to the request for more study on different protocols. I'm not just thinking of SIP or VoIP in general, but also, for instance, the gaming community uses local services just for setting up sessions and things like that. That's what I'm usually afraid of when I see people talk about just HTTP. And so you did look at peer‑to‑peer traffic, which is another big concern, also because a number of streaming services use peer‑to‑peer.

DARIO ROSSI: Beyond the throughputs, we were also trying to establish some correlation among flows, because the study we did on single flows was, well, we don't see any impact on single flows. Now we would like to have something closer to the user, like, okay, the establishment of a SIP session, so correlating between the SIP and the VoIP flows. In the case of SIP we could do it. In the case of gaming we are not equipped, because we don't have the dissectors for those protocols, so that would be a lot of work. But if there is something which is easier, because they are already ‑‑

AUDIENCE SPEAKER: I was kind of thinking that maybe you could just see if you can detect failed attempts of connecting at the CGN point, but...

DARIO ROSSI: We don't control the CGN, so we just have the passive probe. I agree that if we also had a view of the internal counters, to correlate with performance, that would be much easier, but it's not what we have. So, we must live with this non‑intrusive methodology, which is more limited in the kind of things you can see. Otherwise, thanks for the gaming point. If you have comments about which metrics or things might be relevant, we can take it offline and I'm happy to take notes.

AUDIENCE SPEAKER: Robert Kisteleki, speaking as an individual. Can you go to the last but one slide, please? Conclusions, I think. So, I would like to applaud you for picking the words "positive side effects", and that you stayed away from "features". This discussion has happened a number of times; I don't think we can consider that to be a feature. If I want to run a firewall, I will run a firewall; I don't like my ISP making that choice for me. But more to the point, have you considered that with CGNs, more and more people will be less geolocatable, to put it that way? So basically, with CGNs, content providers will have a harder time localising traffic to me, because they no longer know where I am.

DARIO ROSSI: I'm not a DNS expert, so don't kill me please.

AUDIENCE SPEAKER: Me neither. But the fact remains that all of a sudden I seem to be using the same IP as you, although we are really, really far away, potentially.

DARIO ROSSI: But there is EDNS, which is ‑‑

AUDIENCE SPEAKER: There is also an evil bit but people don't really use these things.

DARIO ROSSI: In the measurement community this seems to be something that is increasingly used, maybe because Google is using it. I'm not sure about the penetration, but for sure, if you add a level of indirection, then I agree with you ‑‑

AUDIENCE SPEAKER: Maybe it's a further study to actually look into this and how bad it can be for a client who is suddenly connecting to a different CDN, or a different instance of a CDN for example, and no longer has the nearby connection to Google.

DARIO ROSSI: To partly answer, I'm not sure if we have the three‑way handshake time plot. What you are saying, basically, is: whenever I make a request, the server sees me with the IP address of the Carrier‑Grade NAT, and then it may redirect me somewhere else. If that were true, then I should see a three‑way handshake time which includes a round‑trip time that is largely different ‑‑ with a large difference, right? And for the three‑way handshake, what we do see is this thing here.

AUDIENCE SPEAKER: I guess the smaller the ISP, the less of a problem this causes, but in the case of Comcast, which is actually, as we speak, aggregating all of their users into like 24 or so exit points, instead of the hundreds they have had so far, it's going to cause problems.

DARIO ROSSI: Yeah, for sure; this vantage point was in Milan, close to the MIX, where I guess there is a POP, so, yeah, maybe this reduces what we would see in larger ISPs.

CHAIR: Thanks a lot.

CHAIR: The next one is Brian, and we are already running quite behind on time.

BRIAN TRAMMELL: So, hi, my name is Brian Trammell, from ETH Zurich and the mPlane project. I'll be presenting work done together with all these people, talking about Internet path transparency research in the mPlane platform. First ‑‑ and I apologise for the wall of text ‑‑ I'll get through this quickly.

When we say mPlane is a platform, what are we talking about? Dario's work on the Carrier‑Grade NAT, and also the anycast work, were done using Tstat and other tools that are part of this platform. Underneath, the platform has a self‑descriptive, error‑tolerant RPC protocol that connects clients to components. A client is something that wants a measurement done; a component is something that provides a measurement. The protocol is based essentially on the exposure of capabilities: these are the measurements I can do. The client then sends specifications down and says: okay, please do this measurement for me.

Great, we reinvented RPC; why did we do this? There are a couple of things we wanted to experiment with in this project. One of them is weak imperativeness: the idea that, when controlling measurements on widely distributed, extremely different components, you don't want to have just a command interaction. You don't want to say "do this", because maybe I don't actually know that you can do it; maybe the measurement is running in a piece of JavaScript in a browser, or on a mobile phone which may lose connectivity at any instant. So everything is phrased in terms of polite requests as opposed to commands: it would be really nice if you would do this measurement for me, and at some point in the future, if you could get back to me with what you might have measured, that would be good.

The other one is the idea of using a schema to define the measurement to perform. So basically, instead of saying I want you to run the tool ping, I say: what I really want is for you to give me two‑way delay, and I want you to measure that two‑way delay using ICMP. Instead of imperatively naming the particular tool, or the particular instance of a tool, I want to run, I describe the properties of the data I'd like you to give me, and then it's essentially up to the component to figure out the way to do that.
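The schema‑based idea can be illustrated with a toy capability/specification match. The JSON‑like structure and the element name below are modelled on the mPlane registry style ("two‑way delay measured via ICMP, in microseconds"), but they are assumptions for illustration, not the exact protocol encoding.

```python
# A component advertises what it can measure...
capability = {
    "capability": "measure",
    "parameters": {"destination.ip4": "*"},   # the client may fill this in
    "results": ["delay.twoway.icmp.us"],      # two-way delay via ICMP
}

# ...and a client asks for data matching that schema, not for "ping".
specification = {
    "specification": "measure",
    "parameters": {"destination.ip4": "192.0.2.33"},
    "results": ["delay.twoway.icmp.us"],
}

def matches(spec, cap):
    """A specification is serviceable if every parameter it fills and
    every result column it requests was advertised in the capability."""
    return (set(spec["parameters"]) <= set(cap["parameters"])
            and set(spec["results"]) <= set(cap["results"]))

ok = matches(specification, capability)
```

How the component produces the advertised columns (which tool, which probe) is left entirely to the component, which is the point of the schema‑centric design.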

So that's the client‑component interaction, where you have one thing talking to another thing. This can be federated into larger and larger infrastructures by having a device called a supervisor, which is essentially a client in one direction and a component in the other direction, and which can aggregate and do access control on behalf of a client and coordinate the work of a bunch of components.

How did we apply this? In this talk we'll be talking about path transparency. Path transparency in one slide is the recognition that the Internet is not end‑to‑end, and has not been for a very long time. Some of this is policy; some of this is ISPs reacting to the conditions of an Internet that is no longer like it was before the eternal September. Some of this is accident ‑ a lot of this is accident. A lot of this is boxes designed to be sort of back‑to‑back user agents or proxies, which basically keep new protocols from being able to run over them.

In the protocol design community we kind of say, okay, now we have to deal with NAT. Well, the IETF did kind of institutionally stick its head in the sand about NAT for a very long time. Now I think we have come to the opposite side of that swing, where we're saying okay, there are NATs, there are these middleboxes, devices in the network, the Internet is no longer end‑to‑end, we can't have nice things any more, everything is going to run over HTTP forever and that's just how it's going to be.

Some of these paths are worse than other paths. So you might say, well, SCTP won't run on the open Internet: it won't run on NATted paths, it won't run on Carrier‑Grade NATted paths unless the NATs know something about SCTP. But some paths will work ‑ if you do have an address, if you are on IPv6, if you actually have end‑to‑end reachability, the ability to run new protocols is there. So the goal of this work is to figure out how bad the problem is, and where the problem is. From an operational standpoint, the same tools could be used as another troubleshooting thing: I'm trying to run this application, and I'd like to do some A/B testing to figure out whether the problem with this application is actually dependent on something on the path that's messing with the application itself.

So, the use case that we looked at is explicit congestion notification ‑ everyone here knows what ECN is. Anyone not know what ECN is? Basically this is an attempt to replace loss as a congestion signal for TCP. It was defined in 2000, a bunch of people turned it on, and horrible things happened: lots of routers, if they saw an ECN‑marked packet, would just reboot. A single packet of death is not a good thing. So that was back here in the late 1990s; there is a little bit of a take‑off and then, boom, the crash. This problem has largely been fixed, because there's a very strong incentive to replace routers that reboot when you send a single packet past them. There are some other problems: the two bits that ECN uses for its signalling were taken from the old Type of Service byte, so there is a lot of equipment out there that treats that whole byte as if it were still Type of Service and doesn't split it up. So there is a little bit of ECMP madness that happens here. However, a few years ago, about 2008 ‑ you can see it at this point right here before it goes up ‑ the Linux kernel defaulted to negotiating ECN if asked. So if an active opener says please speak ECN with me, the passive opener, the server, will say okay, yes, do that. You can see this is the proportion of ECN support in the top million web servers, and the last time I measured it we were about right here, so this sort of linear growth is happening. What this is is people who are running Linux as a web server simply keeping their kernel up to date. Nobody is turning on ECN here; this is just a side effect of good management practice. The question is: since we see that this worked really, really well for turning ECN on on the server side, is it safe to turn it on by default in client operating systems?
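The "negotiate ECN if asked" behaviour described above comes from the RFC 3168 handshake: the active opener requests ECN by setting ECE and CWR in its SYN, and a willing passive opener agrees by setting ECE in the SYN‑ACK. A minimal sketch of just that flag logic:

```python
# ECN negotiation per RFC 3168, reduced to the handshake flag logic.
# The active opener requests ECN with SYN+ECE+CWR; a passive opener that
# (like post-2008 Linux) negotiates ECN "if asked" answers SYN-ACK+ECE.

def synack_flags(syn_flags, server_ecn_enabled):
    """Return the TCP flags a passive opener sends in its SYN-ACK."""
    ecn_requested = "ECE" in syn_flags and "CWR" in syn_flags
    flags = {"SYN", "ACK"}
    if ecn_requested and server_ecn_enabled:
        flags.add("ECE")          # ECN agreed for this connection
    return flags

# Client requests ECN, server negotiates if asked:
print(sorted(synack_flags({"SYN", "ECE", "CWR"}, server_ecn_enabled=True)))
# ['ACK', 'ECE', 'SYN']
```

This is why the server‑side deployment curve could grow passively: the server never initiates, it only answers ECE when a client asks.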

So, we built a tool based on mPlane to figure this out. It's called Pathspider. What it does is take a web server ‑ we're only going to look at one in this diagram ‑ and attempt to connect both with and without ECN, and see what happens. The idea is that in most cases we expect that if you try to connect to a web server using ECN and it doesn't answer, that's going to be a server‑dependent as opposed to a path‑dependent problem. So it's not somewhere along the path that the packet is being dropped; it's that the server itself, or a firewall in front of it, has decided to disable ECN explicitly.

We have multiple vantage points to check whether this is a path‑dependent or an endpoint‑dependent thing. Then, in cases where we do find possible path dependency, it triggers another mPlane component which is wrapped around Tracebox ‑ anybody used Tracebox before? Tracebox is a utility that essentially allows you to modify bits of the packet header and then run a traceroute and look at the packet headers that come back from each hop along the path, to figure out which box along the path actually modified the header. It's a cool tool.
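The multi‑vantage‑point decision described above can be sketched as a small classifier. This is a hypothetical illustration of the logic, not Pathspider's actual code; the category names are made up for the example.

```python
# Hypothetical sketch of the vantage-point logic (not Pathspider's real code):
# each vantage point records whether a target connected without ECN and
# whether it connected with ECN negotiation requested.

def classify(observations):
    """observations: list of (connected_plain, connected_ecn) per vantage point."""
    if not any(plain for plain, _ in observations):
        return "offline"                      # never reachable at all
    ecn_results = [ecn for plain, ecn in observations if plain]
    if all(ecn_results):
        return "works"                        # ECN fine from everywhere
    if not any(ecn_results):
        return "server-dependent"             # fails from every vantage point
    return "path-dependent"                   # fails only on some paths

# Fails with ECN from all vantage points: the server (or a firewall in
# front of it) disabled ECN, so it is not a path problem.
print(classify([(True, False), (True, False), (True, False)]))  # server-dependent
# Mixed results point at something on one of the paths -> trigger Tracebox.
print(classify([(True, True), (True, False)]))                  # path-dependent
```

Only the "path-dependent" outcome is worth spending a Tracebox run on, which is what the mPlane trigger does.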

The main advantage of doing this in mPlane for this particular research study was that we already had this Tracebox component, so this integration took a master's student a couple of hours to do.

So, results. Basically, the question was: can we use client defaults to drive ECN deployment? The answer is yes, it's safe. We saw about half a percent of the Alexa top million refuse to connect with ECN, but if you do a happy‑eyeballs type thing, or a fallback ‑ okay, I'm going to attempt to negotiate ECN, and retry without it ‑ almost all of that goes away. There is a similarly small percentage of sites which might have more interesting impairments; we did see some evidence of path dependency. These numbers are both dwarfed, in terms of the Alexa top million, by other types of transient connectivity failures, or problems with, say, IPv6 configuration or IPv4 DNS configuration. So turning this on doesn't actually cause a noticeable connectivity problem in the terrestrial Internet. As a result of this research we were talking with a major operating system vendor about actually turning this on on the client side to drive deployment. Apple did this in developer builds of iOS and OS X. This was something they presented a couple of weeks ago, and they basically found that, as we did here, terrestrial Internet access is basically okay. The mobile Internet is less okay, but these impairments are primarily access‑network dependent. So it depends really on the mobile carrier ‑ it's devices that are part of a mobile carrier's access paths that are causing the problem ‑ so if you find that, well, this is broken, you can cache the brokenness and still use ECN on networks in which it's not broken.

So, we actually hear people say: yeah, you can measure things about the web, but the web is only one application of the Internet. What about client machines? What about other sorts of connectivity? Here we took the Pathspider component and hooked it up to a resolution component that, instead of taking things from the list of the Alexa top million web servers, just harvested client IPs from the BitTorrent distributed hash table. So basically we asked for fake hashes out of the distributed hash table and used that to walk all of the clients that each hop in the distributed hash table knows about. And we found that, yeah, basically the results are comparable.

So, if you are interested ‑ you are sold, tell me more ‑ the mPlane SDK, the software development kit for building stuff on top of this platform, is available here on GitHub, or there is a Python module. If you are interested in playing with Pathspider, that's here on GitHub, but note that it requires a different branch of the mPlane SDK.

Pathspider right now is focused on ECN testing, but it is designed in such a way that it can be used for any A/B test. If I try protocol feature A, it's actually A versus not‑A that we're interested in: I try to connect with the protocol feature, I try to connect without it, and then we look at the differences in connectivity. So thank you very much. With that, I'll take questions.

CHAIR: Do we have one or two people for questions? No. Okay. Then thanks a lot Brian.


Good. That brings us to the little experiment with Dario. Where are the Anycasters reloaded?

DARIO ROSSI: Like in all good movies there is a sequel.

Just to have an idea: how many people followed the first presentation? Okay, good enough.

So for this one I have only a very brief recap of the methodology, and then what I want is to add a little bit more technical detail. Mostly I'm going to be telling you about results on targeted deployments where we have ground truth, because in this case we can assess things with certainty, which is good. And I'll comment on how you would do things when you have a really large number of deployments. Remember, one of the things we want to do is, for instance, use this technique at scale with very fast turnover for BGP hijack detection, and for this we need to move an Internet scan from hours to minutes. I want to comment on why we believe this is possible, although we don't have any results yet.

Everything that we have seen in this talk is available. I forgot that I have a web interface with a demo; I wanted to give you a demonstration, but I don't know how to interact with this terminal, so I'm not so sure that I can point you to a web server and show you the results on a Google map ‑ but you can do it yourself, the address is going to be fairly easy.

So, a methodology refresh. This is the workflow: we measure latency to the target, and any time two of the discs don't overlap, this means that we have a speed‑of‑light violation, which is already implicitly pointing to Anycast. Then, if you want to fully enumerate, you need to solve an optimisation problem, which we can do scalably, because the greedy solution is good enough ‑ I'm going to show you this in a moment. We also said that in order for the geolocation to happen you need to filter latency noise; we are going to see what the typical amount of noise in the measurements is. And then we can iterate. We said we want to do this very fast, and one of the possibilities to do it fast, for instance in order to do BGP hijack detection, is not to use the full set of vantage points for the full enumeration, because all you need to do is detect whether there are a couple of separate circles. In this case you might not need all the 9,000 RIPE probes or 300 PlanetLab probes; you need far fewer of them. So this may already gain a factor of 10.
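The disc test at the heart of this workflow can be sketched in a few lines. This is a minimal illustration of the idea, not the iGreedy code; it assumes the talk's rule of thumb of roughly 100 km per millisecond of one‑way latency, and the coordinates are just example vantage points.

```python
# Minimal sketch of the disc test: each vantage point's RTT bounds how far
# the responding replica can be (light in fibre ~ 200,000 km/s, so the
# disc radius is at most RTT/2 * 200 km/ms, i.e. roughly "1 ms ~ 100 km").
import math

EARTH_RADIUS_KM = 6371.0
KM_PER_MS_ONE_WAY = 100.0   # half of ~200 km/ms propagation speed

def great_circle_km(a, b):
    """Haversine distance between two (lat, lon) points in degrees."""
    (lat1, lon1), (lat2, lon2) = map(lambda p: (math.radians(p[0]), math.radians(p[1])), (a, b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(h))

def discs_disjoint(vp1, rtt1_ms, vp2, rtt2_ms):
    """True if the two latency discs cannot contain one common replica:
    a speed-of-light violation, implicitly revealing Anycast."""
    r1 = rtt1_ms * KM_PER_MS_ONE_WAY
    r2 = rtt2_ms * KM_PER_MS_ONE_WAY
    return great_circle_km(vp1, vp2) > r1 + r2

paris, tokyo = (48.85, 2.35), (35.68, 139.69)
# 5 ms RTT from both Paris and Tokyo gives two discs of ~500 km each,
# ~9,700 km apart: no single server can explain both answers.
print(discs_disjoint(paris, 5.0, tokyo, 5.0))  # True -> Anycast
```

Note that for pure detection, as opposed to full enumeration, one disjoint pair is already enough, which is why a small probe subset can suffice.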

In order for the remaining factor of 10 that we need to go from hours to minutes, we are going to have to wait until the end of the talk.

So, I was showing the performance earlier; the performance was conservatively estimated: 75% recall, 75% true positives, protocol agnostic. I have a little more detail here, which is, for instance, the number of vantage points that we are using ‑ this is PlanetLab versus RIPE, and this is RIPE when you are using all the vantage points that are available; we can also use a significantly smaller proportion of them. This is the 75% line. And this is the completeness of the enumeration: how many of the Anycast replicas that exist in our ground truth we can find by running the algorithm, and how many of those we did find are geographically accurate. I haven't explained the ground truth yet; we basically have two different means of building it, which I'm going to show you in a while. But for the time being, what I also want to point out is that whenever we are wrong in the geolocation, most of the time we are wrong by something in the order of a few hundred kilometres, which means basically a few milliseconds ‑ remember, 1 millisecond is about 100 km. So here we have something that is very tight; it's going to be hard to do much better.

So, we released this Open Source: the data sets, plus the software to use in order to obtain all the results that I'm explaining. The measurement campaigns that we did were from PlanetLab and from RIPE Atlas, over two different types of targets, DNS and CDN. For the DNS we were able to use ICMP and DNS CHAOS. In the case of the CDN, from RIPE Atlas we were only able to use ICMP, while from PlanetLab we were also able to use HTTP HEAD requests, which are forbidden on RIPE Atlas. That's a potential problem because the RIPE Atlas platform could be used to circumvent something in place ‑ some places cannot access some content, and if you used RIPE Atlas to access it you could basically tunnel the request through RIPE Atlas ‑ which is why, I understand, RIPE Atlas doesn't support HTTP requests.

So, whenever you do an HTTP request to some known providers like EdgeCast or CloudFlare, if you look into the headers you are going to find some indication of the Anycast replica that is replying. The same when you do a DNS CHAOS request: there is going to be a CHAOS name ‑ not always, but nearly every time ‑ in which the replica is encoded. In the EdgeCast and CloudFlare responses you will also find this: in the case of EdgeCast, it is the classical Server header, while in the case of CloudFlare, it is the CF‑RAY header. This way we are able to build a reliable ground truth, so we can run the two experiments at the same time: what can I do with simple measurements, and what can I do if I use a protocol‑specific technique, which is the best that I can do?

Now, what about latency? Here I'm showing, for different targets ‑ for DNS and for the CDN ‑ the distribution of the latency; latency is on this axis, this is RIPE Atlas versus PlanetLab, and here we have a breakdown. So what is the main takeaway? Here we have 10 milliseconds, which corresponds to about 1,000 km, and basically we see that only in at most about 20% of the latency measurements do we have a disc that is smaller than a country. So suppose you take a disc and look inside it: how many cities do we find? This plot is a little counterintuitive ‑ it is basically 1 minus the cumulative distribution function ‑ and the way you read it is that in 90% of the cases we have a number of airports bigger than 100. And if we have more than one hundred airports in a disc and we select our candidate at random, what are the chances of success? It's going to be 1 divided by 100, so for over 90% of the discs our probability of selecting the correct airport, or city, is going to be low. Which is why our aim is to consider not only something which relates to the latency, which is basically the distance from our vantage point, but also to exploit some side information, like how many people live in each city. For instance, in this case we have a number of cities in the disc, the most populated of which are Frankfurt, Zurich and Munich, and our idea ‑ we did studies, there are some extra slides if you want ‑ is that if you just go by picking the largest city, which is basically dropping the delay term from the formula and normalising population over all the cities in the area, then you are going to win in 75% of the cases.
And when you are not winning, your error is going to be about 300 km, which basically means you selected a neighbouring city. So the rationale why this works is that users live in densely populated areas, so picking the largest city is counting on this bias and making a bet, and this bet is successful 75% of the time.
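The population bet described above reduces to a one‑line choice once the cities inside a disc are known. A minimal sketch, where the population figures are illustrative round numbers, not authoritative data:

```python
# Sketch of the population bet: among the cities inside a latency disc,
# geolocate the replica to the most populated one. The populations here
# are illustrative round numbers, not authoritative figures.

def pick_city(cities_in_disc):
    """cities_in_disc: dict mapping city name -> population; returns the bet."""
    return max(cities_in_disc, key=cities_in_disc.get)

# A disc over southern Germany / Switzerland might contain, say:
disc = {"Frankfurt": 750_000, "Zurich": 400_000, "Munich": 1_500_000}
print(pick_city(disc))  # Munich: betting that replicas sit near the most users
```

If the bet misses, the error is bounded by the disc, which is why the talk's typical miss is a neighbouring city a few hundred kilometres away.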

Now, the latency is useful ‑ sorry, is ugly. So, do we need to filter latency explicitly in order to bound the maximum error? A nice property of the algorithm we are using is that it considers circles in increasing order of size, which means that if there is anything useful we can get from a smaller disc, it will be used. So whether we are filtering or not filtering, there is not a big difference. Here we are picking a maximum latency threshold that corresponds to a maximum disc size. If you pick this threshold very small, then of course our geolocation accuracy is going to be very high, except we are going to be able to geolocate only a small percentage of nodes, because only a few percent of the discs happen to have a latency not larger than one millisecond. If we increase the threshold, you can see that this saturates: the recall improves, the correct geolocation improves but not by much, and the median location error basically saturates as well. So there is no incentive to put a filter in place; a filter would just diminish the recall. And this is because the aggregate is pretty robust.

The second reason why iGreedy is enough is that, in theory, iGreedy could be a factor of five from the optimum, which means that in theory we might be able to find only one fifth of the Anycast replicas that exist in our data set. But in practice you can see here a comparison of iGreedy versus brute force ‑ the simple greedy solution versus the complex brute‑force solution ‑ and basically there is no difference: from iGreedy to brute force you might gain at most one replica. So there is no payoff, because when you move from iGreedy to brute force you are moving from 100 milliseconds to thousands of seconds of computation to discover just one more replica, which is a cost that you cannot afford. In practice ‑ good message ‑ iGreedy is good enough.

Something more interesting for the RIPE community, I believe, is the comparison of what we can do with PlanetLab and what we can do with RIPE Atlas, and especially what we can do by making some careful selection of RIPE Atlas vantage points. When we did the experiment RIPE Atlas had 7,000 nodes; now there are 9,000 nodes, and of course, as more nodes are added to the system, running exhaustively on all the probes just consumes credits but doesn't bring much benefit. So here I have a number of different subsets: we are using either all the vantage points that were available at the time of the experiment, or a subset of 200 which is selected by requiring the vantage points to be at least 100 km apart, or one millisecond apart in latency terms. Then we have another subset where we also include all the vantage points that gave a true‑positive classification for our targets ‑ so those that are far apart, plus those that were found to be giving good responses in the previous phase; we are betting on some probes to give us consistently good results.

Then we apply this to the census results. Since we cannot do the census again over RIPE Atlas, what we do is basically take the results of the PlanetLab phase and make the measurements again, this time from Atlas. And what we see here is that even when from RIPE Atlas we select a smaller number of probes, we increase the recall, because we can find more replicas. The gap between these lines is the additional number of replicas that we are able to find using RIPE Atlas. Implicitly, what we are saying here is that there is much greater geographic diversity: if we sample from the larger set, we can find a smaller subset that gives better performance. However ‑ and here, from the blue to the green, the difference is between 200 probes and 350 ‑ we expect the extra ones to bring some advantage, because they were already performing well in some cases.

We also tested the union of RIPE Atlas and PlanetLab: sometimes it's even better, sometimes it's not. When you use all the PlanetLab nodes we see there is a curve here, and the curve can discover a lot of replicas; from time to time these are only side effects. So this is about PlanetLab and RIPE Atlas. I'm not going to comment much on the different studies of different classes and different groupings. But one of the comments that I want to make is that our technique depends on the availability of the geolocation information that we have in RIPE Atlas ‑ the owner of the vantage point sets the geolocation ‑ and we have seen a lot of very weird geolocations. To be fair, we also found a lot of examples in PlanetLab: here there is a Romanian vantage point that has gone hiking in the Pyrenees between France and Spain, where the PlanetLab node sits in a nature reserve. And this one is exactly in the middle of nowhere, because what was entered in the geolocation field is the telephone number of the people who put in the information ‑ it's not even a latitude and longitude. This one is making a tour of the earth several times. And this kind of information is also to be found in the RIPE Atlas data set. Now, I have seen that there is an increasing number of vantage points that have a tag like system auto‑geoip‑city. The question is: can we trust it, or is it still something that we have to struggle with?

Then I have other slides ‑ I'm not so sure you can see this ‑ which comment on why ICMP is a good protocol to go for. Basically, the idea is that if you have different services that run on different replicas, with ICMP you'll be able to catch them all, while service‑specific probes, like DNS, will only be able to catch DNS services, and HTTP probes will only be able to catch HTTP services. So if you want to do a census and you have no implicit assumption about which services are run, then ICMP is a good choice ‑ except that sooner or later some people will start seeing that we are doing these kinds of studies, and they will stop replying to our queries.

A last comment about the probing rate. Our scan runs at over 2,000 probes per second per source, and we are behaving like good citizens: to avoid overloading the destinations, we spread the probes so that we maximise the inter‑arrival time. However, the problem is that the replies aggregate, so the source is receiving something like 10K replies per second, because that is the amount of things that we send. Some firewalls see this as an attack and rate‑limit it, so they drop all the replies ‑ all of our measurements. The only way we found to sort this out was to slow down the probing rate. So here again there is a factor‑of‑10 speed‑up that we could gain by removing the ICMP firewall filters.

This is more a question for you. I had an idea I call double ping. The idea is that I don't want to make traceroute measurements, because they are long and costly, but I want to gain some information about the penultimate hop and the penultimate AS in the path. What I can do is send an ICMP echo request, and whenever I receive an echo reply packet, I immediately send a second ICMP echo request, this time limiting the time‑to‑live by the number of hops that I'm expecting ‑ the initial TTL is going to be something like a power of two, so I can set the TTL to X minus 1 to reach only the penultimate hop, being one hop away from the destination without hitting it. My question for the technically experienced audience is: do you expect that we can get any useful information about the penultimate AS hop in the path by doing this kind of opportunistic and limited traceroute, which costs just one single extra packet? We haven't had any chance to try it yet, but before doing the census with this technique I would like to know whether people expect this to work.
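The TTL arithmetic behind the double‑ping idea can be sketched as follows. This is a hedged illustration of the estimation step only (no packets are sent); it assumes the common convention that hosts start with an initial TTL of 32, 64, 128 or 255, which is a heuristic and can be wrong for unusual stacks or asymmetric return paths.

```python
# Sketch of the "double ping" TTL arithmetic. Initial TTLs are typically
# 32, 64, 128 or 255, so from the TTL seen in the echo reply we can guess
# the hop count and aim the second probe one hop short of the destination.

COMMON_INITIAL_TTLS = (32, 64, 128, 255)

def hops_from_reply_ttl(reply_ttl):
    """Guess the path length: smallest common initial TTL >= the reply TTL."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= reply_ttl)
    return initial - reply_ttl

def penultimate_probe_ttl(reply_ttl):
    """TTL for the second echo request so it expires at the penultimate hop."""
    return hops_from_reply_ttl(reply_ttl) - 1

# A reply arrives with TTL 53: the initial TTL was presumably 64, so the
# destination is 11 hops away, and a probe with TTL 10 expires just before
# it, eliciting an ICMP time-exceeded from the penultimate router.
print(penultimate_probe_ttl(53))  # 10
```

The caveat, as the speaker hints, is that the reply's TTL reflects the return path, so the hop estimate only holds when forward and return paths are of similar length.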

My time is over, and I won't repeat the conclusions, which are the same as in the previous talk. One additional question for the audience ‑ and I think the Chairs are more interested in this one ‑ is: was the experiment useful, or should I have stopped at the first movie, because sequels are always worse than the first movies, with few exceptions? So, thanks everybody.

CHAIR: Thanks Dario. So, questions, comments in general? And for the people who had the pleasure to see the original session in the Plenary on Tuesday: was today helpful for them? Did they get more insight? Is that how we want to structure things in the future? But that's probably more a question for the survey, when you rate the presentations, whether you want more technical parts following a Plenary session. But, Robert. Geolocation?

AUDIENCE SPEAKER: Robert Kisteleki, RIPE NCC. First, a kind of an answer to your question, the last one: we do have data you can look into. You can get the traceroutes that we have run using ICMP and see what the answer would be to your question in those cases. I'm not saying that actually solves your problem, but there is at least data you can look at that will give you an idea.

The system auto‑geoip‑city thing: we use that when the host doesn't explicitly tell us where the probe is, and in practice we ask MaxMind what they think about the IP of the probe. If that happens to be a city‑level thing, then we apply that tag; if it happens to be a country‑level geolocation, then we say system auto‑country. We have no idea if that's correct or not, but that's the best we know. And the reason why we are using system tags is to let the users ‑ as in everyone else ‑ know that it is not a proper geolocation given by the user themselves. If the user does give the geolocation, then we remove the automatic thing and just use what the user said.

We understand that some people are, let me put it this way, wrong when they geolocate their probe. Intentionally or unintentionally ‑ things change, probes move, people forget. So far our understanding is that this is not a big problem. It is a problem especially for people like Emile who want to have proper geolocation and have to constantly filter out the noise. But it seems like more and more people are hitting this issue, so I think we'll take it up as a task to try to structurally identify the unlikely geolocations.


AUDIENCE SPEAKER: I want to thank you for your research; it's very interesting, especially for those who want to know where they should put their nodes, and for researchers who actually find Anycast nodes. I should admit something to you: the limit on latency is not just the latency of the fibre and so on. In most cases we should probe from as many different locations as possible, and as we get results, we should analyse which hops we actually passed through. The main problem on the Internet in analysing geolocation is that latency really varies from ISP to ISP. We have a lot of carriers, a lot of middle nodes which add a significant amount of latency, and so what you do should be scaled to more than 10 nodes to get a more interesting result, nearer than 100 km in radius. Thank you anyway.

DARIO ROSSI: Just a quick follow‑up. I agree with you that latency is not good, which is why what we're saying is: please consider latency only as an initial guess. That basically says, look, you need to look at the circle around the probe ‑ the replica is probably going to be here near Stuttgart, and anywhere in this region there are places of interest. But then we don't look at the latency any longer; we just pick the biggest city, and as you can see from this plot, according to the ground truth, in 75% of the cases you're right. And when you're wrong, the error depends on how big the circle was ‑ it can be as big as 1,000 km, and that's pretty bad, because the latency is bad. So I agree with you: we do 100 measurements and then pick only the most reliable ones.

CHAIR: Good. Thanks a lot Dario, now you are relieved.

Next one is Vesna. What would an MAT Working Group session be without the RIPE Atlas update?

VESNA MANOJLOVIC: Hi, I'd like to leave as much time as possible for discussion, so this is my personal challenge: this is going to be the fastest presentation ever.

I am sure everybody has heard about RIPE Atlas, so in place of introduction, our new Wikipedia page, so you can read all about it there and since it's Wiki, you can also edit it yourself.

So, I'd like to cover some growth. So we had more probes, more anchors, more users and more measurements this year. And the graphs are also available on our website. We cover many countries. These are the top ten.

And we just had a very successful hack‑a‑thon, which was powered by stroopwafels ‑ these are the traditional Dutch waffles that were explicitly requested by the hack‑a‑thon participants.

So, I'd like to point out the ambassadors ‑ this room is actually full of them ‑ so thank you for spreading the love and the probes all around the world. You can see on our website the conferences where the ambassadors will go next. A lot of people have asked for multi‑lingual documentation, so information about RIPE Atlas in different languages. There are now some translations on GitHub, and if you know of other ones, you can add them yourself or let me know about them and I will upload them. And we have another webinar in two weeks, if you'd like to recommend it to somebody who needs to learn about advanced RIPE Atlas features.

We are grateful to our sponsors who are supporting us financially, so that we can buy more probes and organise hack‑a‑thons ‑ we already have one sponsored for next year, and we are always looking for more. Your reward will be that your company's logo will be shown on these slides next time, plus the warm fuzzy feeling that you are actually doing something for the good of the Internet.

The same goes for the hosts of the RIPE Atlas anchors: there are more than 100 anchors ‑ the number of companies is a bit smaller, because some organisations have more than one RIPE Atlas anchor ‑ and their logos are also on our website.

Finally, the new features. This is the one that I'd like to highlight this time: in our community there are some people who prefer the shiny visualisations and interactive web‑based interfaces, and then there are others who like the terminal, the command line tools and the traditional way of scheduling measurements. So finally they can do it: we have command line tools, they are on GitHub ‑ take them, contribute, rewrite them, fork them. During the hack‑a‑thon we managed to port them to OpenBSD, and we are working on Fedora. So if there are any Debian developers, please talk to me after this session.

Other new features, read this quickly because I want to move on.

So, you asked for it and we implemented them.

Then, this is your homework. So click on all of these links and read the articles that have been published about how people are using RIPE Atlas. Mostly, it's about DNS or IXPs, and there is a lot of research papers too.

So, what we are going to do, more or less by the end of this year, is DomainMON. This will enable you to monitor your second‑level domain using RIPE Atlas, if you are an SLD operator. We will also be adding more DNSMON zones: go to the DNS Working Group and take part in the discussion ‑ we have suggested a RIPE document with the criteria for adding these zones, and it's now in the hands of the DNS Working Group community.

And this is the question that we actually have for you. There is an article on RIPE Labs in which we put a lot of detail about this, so if we have some time I'll get back to it. A specific question that was asked at the last RIPE meeting is: what are we going to do with the pilot RIPE Atlas anchors? There are only 15 of those, so it's very specific, and these are the suggestions that we came up with; we can talk about it later on.

So, I'm very happy to announce that we already have two hack‑a‑thons planned for next year. The first one is actually not organised by us, but we are helping out a lot: CAIDA is organising the BGP hack‑a‑thon in San Diego. And DE‑CIX has invited us to their offices; they will co‑organise a hack‑a‑thon with us on the topic of internet exchanges and measurements. So, talk to us if you are interested, and we will be posting the call for participation later on.

How can you take part? These are contact details. And now, going back to the actual question ‑‑

So, please tell us: what do you want us to do? We really want to move on with wifi measurements, and basically we have many options. One is that we keep the old, current probes and we enable wifi on them. Another option is that we distribute two types of probes: the old ones that can't do wifi and the new ones that can. And another option is that we stop distributing the old probes and only distribute the new type, which will be wifi‑enabled; and there could be opt‑in or opt‑out. So there are many options, and there is a poll on RIPE Labs, but it's more important to understand your needs and requirements. So now you have ‑‑ how many minutes for this discussion? Well, at least two ‑‑

CHAIR: We have some time for discussing that. Do you want to ‑‑ do you just ask the question about the wifi measurement? Or will you ask the others as well afterwards?

VESNA MANOJLIVIC: It's really a short time, so please find me and my other colleagues from the RIPE Atlas team at the RIPE NCC during this RIPE meeting, talk to us, and explain why you would prefer one or the other of the options.

CHAIR: Do we already have some people having an opinion about the wifi question in the room?

AUDIENCE SPEAKER: For the record, so far about 100 or so people have filled in the survey on the RIPE Labs article, so at a minimum, please go there and click.

AUDIENCE SPEAKER: Daniel Karrenberg, like, chief scientist. It's really important to not just talk about "wifi measurements". The proposal here is not just to switch on the wifi modem on the probes that can do so and snoop around. This is a very specific, narrow proposal to measure specific SSIDs, and I think it's really important not to go away from this meeting thinking that we're in the snoop‑around‑in‑the‑wifi‑band business. So, if you are really interested in this, look at the specifics and don't think that the proposal is to randomly just listen to what's out there. That's not the idea.

VESNA MANOJLIVIC: Thank you for the clarification. And apart from the very technical specifications of the measurements, we, as a community, also have to think about the logistics: how are we going to deal with users next year when they ask, oh, can I get a probe that does not do wifi? If the answer is no, are they then going to say, well, thank you, I don't want to use it? Or, for people who already have a probe ‑‑ we have more than 10,000 users ‑‑ will they say, oh, I have the old probe, can I also have a new one, so that they have two in the house, one that can do wifi and one that cannot? These are all questions that we have, and we would like your help with the answers. In the end, we will make a decision and explain the decision to you, and from then on we will all have to deal with the choice that we have made. But we really appreciate your help.

CHAIR: Will the survey you mentioned, will you publish the results and the comments on the mailing list?

VESNA MANOJLIVIC: Yeah definitely. Yes.

CHAIR: Good, well then, thanks a lot. Which brings us to Sascha. Sascha was the winner of the hack‑a‑thon, and he will talk about what happened there and what he did.

SASCHA BLEIDNER: All right. Thanks, Vesna, for being that quick. I should apologise first: as you can probably hear, I lost my voice this morning, probably because of the party yesterday. I still want to try to give you an impression of the hack‑a‑thon. We had a team which actually won the hack‑a‑thon ‑‑ it wasn't just me, it was a whole team of four people.

Our project is called Yin‑Yang, and NinjaX tracerouting. It just sounded cool, so it's more about the Yin‑Yang style. This was the Yin‑Yang team ‑‑ as I said before, four people. I should also mention the RIPE people who were there helping us, especially Emile from the RIPE NCC, who provided us with good input for this project.

So, that's actually us in the picture, and also the prize. But I want to leave out the prize for the moment ‑‑ we will see afterwards what it actually was.

So, the motivation for this project: the Internet today is actually two‑way ‑‑ most of the communication that happens there is two‑way, so basically TCP and all the protocols above that. But if you want to troubleshoot anything ‑‑ imagine you're an operator and you have trouble reaching a certain destination ‑‑ you can fire up a trace route from your location to that destination, but you are probably also interested in what the path looks like from that destination back to you. So, the other way round. But then you would basically need to control the destination to be able to run a trace route from there to you.

So, as you heard, there is this tool out there called RIPE Atlas with a lot of probes, so we leveraged RIPE Atlas ‑‑ we just did a hack ‑‑ in order to get a two‑way trace route. What the tool actually does right now is: you give it two AS numbers, where the source AS is probably your AS and the destination AS can be an AS you have trouble connecting to. The tool will then find a probe in each of those ASes and start two traceroutes: one from the source to the destination, and one from the destination back to the source. So you end up with the path from the source to the destination, and the way back.
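The two‑measurement idea described above can be sketched roughly like this ‑‑ a minimal Python sketch, not the hack‑a‑thon code itself. The payload shape loosely follows the RIPE Atlas v2 measurement API, but the target addresses, AS numbers and probe counts are illustrative placeholders:

```python
# Sketch of the two-way traceroute idea: given a source and destination AS,
# build the definitions for two RIPE Atlas traceroute measurements, one in
# each direction. The payload shape loosely follows the RIPE Atlas v2 API;
# targets and probe selection are simplified, hypothetical placeholders.

def build_two_way_traceroutes(src_asn, dst_asn, src_target, dst_target):
    """Return (forward, reverse) measurement definitions.

    src_target / dst_target are addresses reachable in each AS
    (e.g. the public address of the probe picked on the other side).
    """
    def definition(target, description):
        return {
            "type": "traceroute",
            "af": 4,
            "target": target,
            "description": description,
            "protocol": "ICMP",
        }

    def probe_source(asn):
        # Ask Atlas for one probe located in the given AS.
        return {"type": "asn", "value": asn, "requested": 1}

    forward = {
        "definitions": [definition(dst_target, f"AS{src_asn} -> AS{dst_asn}")],
        "probes": [probe_source(src_asn)],
    }
    reverse = {
        "definitions": [definition(src_target, f"AS{dst_asn} -> AS{src_asn}")],
        "probes": [probe_source(dst_asn)],
    }
    return forward, reverse

fwd, rev = build_two_way_traceroutes(3333, 1136, "193.0.14.1", "145.145.0.1")
print(fwd["definitions"][0]["description"])  # AS3333 -> AS1136
```

In practice the two payloads would be submitted to the measurement‑creation endpoint and the results fetched once both traceroutes complete.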

The next step: from the trace route you of course have the IP addresses, so we take all the IP addresses of the trace route and go out to PeeringDB to search for IXPs on those paths, just to have the information available if there is an IXP on a certain path. For the rest of the IP addresses we go to RIPE Stat and ask for the AS number, and then we aggregate the path so that you don't see the IP addresses but rather have the AS path available. And in the end we draw it for the user, so you don't just have a CLI output but a cool UI that gives you a visual feeling of what's going on there.
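That aggregation step can be sketched as follows ‑‑ a hypothetical Python version in which stand‑in dictionaries play the role of the RIPE Stat ASN lookup and the PeeringDB IXP prefixes (the real tool queries those services over their APIs):

```python
# Sketch of the path-aggregation step: collapse the IP-level traceroute hops
# into an AS path, annotating hops that fall inside a known IXP peering LAN.
# ip_to_asn stands in for the RIPE Stat lookup and ixp_prefixes for PeeringDB;
# both are illustrative stand-ins, not the actual data sources.
import ipaddress

def aggregate_path(hops, ip_to_asn, ixp_prefixes):
    """hops: list of hop IPs; returns the aggregated AS-level path."""
    path = []
    for ip in hops:
        label = None
        addr = ipaddress.ip_address(ip)
        # IXP check first: a hop inside an IXP peering LAN is tagged as such.
        for prefix, ixp_name in ixp_prefixes.items():
            if addr in ipaddress.ip_network(prefix):
                label = f"IXP:{ixp_name}"
                break
        if label is None:
            asn = ip_to_asn.get(ip)
            label = f"AS{asn}" if asn else "unknown"
        if not path or path[-1] != label:  # collapse consecutive same-AS hops
            path.append(label)
    return path

hops = ["193.0.14.1", "193.0.14.2", "80.249.208.1", "145.145.0.1"]
ip_to_asn = {"193.0.14.1": 3333, "193.0.14.2": 3333, "145.145.0.1": 1136}
ixp_prefixes = {"80.249.208.0/21": "AMS-IX"}
print(aggregate_path(hops, ip_to_asn, ixp_prefixes))
# ['AS3333', 'IXP:AMS-IX', 'AS1136']
```

The collapse of consecutive hops with the same label is what turns the IP‑level trace into the AS path that gets drawn.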

So, let's see ‑‑ those were the first steps. You can see the time at the top right: it was already Sunday at twelve o'clock at the hack‑a‑thon, and our visualisation looked like this ‑‑ pretty simple, not yet worthy of the winning team. Let's look at the final results. This is one example we did. Here we have Proxad in France going to Bahnhof, which is in Sweden. We did a lot of traceroutes to Sweden, and as you can see, what happens here is that you have these two ASes, and they interconnect at two IXPs. The path is asymmetric, as usual on the Internet, going over different IXPs. The solid line is the forward direction and the dashed line is the reverse path. And as you can see, it actually goes over two IXPs.

Now you could say, all right, this is just two traceroutes ‑‑ what else is there? We added a feature to enhance this a little bit. You could ask the question: as an operator, what is my path to a certain destination ‑‑ but what about my neighbour? When I say neighbour, I mean a neighbouring data centre, or even a neighbouring rack in the same data centre. What you can see here is a map of Frankfurt, and these green figures are data centres, and you can ask: what about the other ASNs close by, where close by means close in geographical terms? So what you can do is ask the tool: okay, please do an additional trace route from an additional source which is geolocated near the probe you just selected as S1. That's what you can see here. There is a trace route from S1, which is in Denmark, going to this hosting company in Turkey ‑‑ which is not easy to spell, right ‑‑ and as you can see, there is a lot of difference in those paths. But even though they traverse different ASes, in the end, if you look at the RTT ‑‑ because we get the RTT of the full path from the trace route ‑‑ their RTTs are similar. So even though your neighbour is connected via other ASes on the path, you can say: my neighbour is doing the same, so I can stick with my provider.

I want to show another example. It's the same one as before, with the two IXPs, but this time we have the additional source in here. You can see that the orange path from source 1 is still asymmetric, going over two IXPs, and this blue one joining in from S2 goes, I guess, over Frankfurt. If we now look at the RTTs, the blue one is doing better. I don't want to say this is just because of DE‑CIX ‑‑ I don't want to say it's because of DE‑CIX at all. I rather want to remind you that we are looking at ASNs here, so we actually have the hop information, but you cannot see it, so you don't know how many hops are in which ASN. So, it's not only about the IXPs here.

Coming back to the road map. I mean, this was just two days of hacking stuff together as fast as possible ‑‑ as you saw before, the visualisation was pretty rough in the beginning. So, the road map: IPv6 support we can strike out, because it's already there ‑‑ luckily RIPE Atlas fully supports IPv6, so we can match and map the IPv6 addresses as well. We would like to add support for probeless networks: if you are the source AS, you can probably get a probe in your AS and do this cool stuff, but if the destination AS doesn't have any probe, you are pretty much stuck. So you could try to find a probe close to the location you actually want to test, then go for that probe and see how it goes ‑‑ even though it will not be a hundred percent the same, right.
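That probeless‑network fallback could look roughly like this ‑‑ a sketch under the assumption that probe coordinates are available (in practice they would come from the RIPE Atlas probe API); the probes, IDs and coordinates below are made up for illustration:

```python
# Sketch of the "probeless network" fallback from the road map: if the
# destination AS has no probe, pick the probe geographically closest to the
# place you want to test. All probe data here is hypothetical.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_probe(probes, lat, lon):
    """probes: list of dicts with 'id', 'lat', 'lon'; returns the closest."""
    return min(probes, key=lambda p: haversine_km(p["lat"], p["lon"], lat, lon))

probes = [
    {"id": 1, "lat": 52.37, "lon": 4.90},   # Amsterdam
    {"id": 2, "lat": 50.11, "lon": 8.68},   # Frankfurt
    {"id": 3, "lat": 41.01, "lon": 28.98},  # Istanbul
]
# Target location: Ankara -- the Istanbul probe is closest.
print(nearest_probe(probes, 39.93, 32.86)["id"])  # 3
```

As the talk notes, the result will not be a hundred percent the same as measuring from the destination AS itself, but it gives a usable approximation of the reverse path.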

We want to get some useful CLI output. If you go to this project, you will see there is output, but it's just debug messages that we understand but you probably won't. So we will put some more useful stuff in there.

And we also want to support even more sources and destinations at once ‑‑ you could imagine, for example, comparing IPv4 to IPv6 paths instead of just different locations. And I should mention this road here is in Romania; because we were in Romania, we thought if we do a road map, we should include a road that is actually here, right.

So, coming back now ‑‑ these are the winners, which is us. Actually, I'm the only one of the team left here; all the others are at home, probably in the pool or somewhere. So, that's why I'm giving this presentation. And now I want to reveal the secret about the prize.

So, as Vesna was saying, the hack‑a‑thon was all powered by stroopwafels, so even the first prize was a box of waffles. We also won a lot of bug fixes ‑‑ as you can imagine, the last two days were full of getting things put together properly ‑‑ and of course, the real first prize was this presentation. So thank you very much for having me here.


CHAIR: Thanks a lot. Any questions or comments for Sascha? Otherwise, grab him outside in the hall. Thanks a lot, and congratulations again on the first prize.

Good. That brings us to the end for today. We were, this time, more on the heavy side when it comes to MAT ‑‑ some of it was certainly above my head when it came to the analysis and statistics. So it would be nice if you could give us, Nina and me, some feedback on the talks we selected: whether this is the level of heaviness you want, whether you would like it lighter, or whether you are comfortable with it and this is what you want to see in the Working Group. Some feedback in that regard would be appreciated.

Otherwise. Thanks a lot for coming. And see you next time, hopefully, in Copenhagen. Bye‑bye.