
BRIAN NISBET: I think we might ‑‑ hello. Now is a good time to end your conversations and take a seat, please, as we are going to start the afternoon session.

There are many seats you can take and...

So, hello and welcome, we are already into the second session of RIPE 71. So, this afternoon's session will be chaired by myself, Brian, and Leslie. This is your obligatory once every five minutes reminder to rate the talks you will be hearing this afternoon, because Amazon vouchers, books, they are lovely and apparently you can buy other things there too. Our first talk is on scalable high speed packet capture using OpenFlow and Intel DPDK, which I believe is a reasonably strong contender for longest title of any presentation at the meeting, so I would like to introduce Wouter de Vries from the University of Twente.
(Applause)

WOUTER DE VRIES: Hello. So, who am I? I am a PhD student at the University of Twente, in the Design and Analysis of Communication Systems group. The work that I am doing is mostly funded by SURFnet, and they also provided the hardware that I used for this particular project. So, moving on.

So, why do we want to do high speed packet capture? Well, for example, there are still very large DDoS attacks, mitigation remains hard, and our group is interested in doing analysis of those attacks. Obviously, as the attacks grow it becomes harder to capture them and then to analyse them. Other uses might be intrusion detection, for example, or monitoring; maybe you want to start your own NSA. I don't think that this project would be a particularly good contender for the NSA itself, but who knows?


What is the problem specifically? The total bandwidth of the Internet is increasing. How much is it increasing? The Cisco Visual Networking Index 2015 provides us some insight into that: as you can see, in 2015 we have 72.4 exabytes per month, and in approximately three years that figure will have doubled.

That is quite a high rate of increase, I would say.

So, in order to be able to keep analysing real world traffic our capture methods also need to evolve, and at speeds in excess of 10 Gbit/s things start to get quite difficult. For example, if you are transmitting data at 10 Gbit/s and you use the smallest packet size there is, which is 64 bytes, then you will transmit in excess of 14.8 million packets per second. That means that you only have a few clock cycles to process each individual packet, and you need to store a lot of data, 1.25 gigabytes per second.
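
To make that arithmetic concrete, here is a minimal back‑of‑the‑envelope sketch (the 20 bytes of per‑frame wire overhead for preamble and inter‑frame gap, and the 3 GHz clock used for the cycle budget, are assumptions for illustration):

```python
# Back-of-the-envelope packet rate at 10 Gbit/s with 64-byte frames.
# Assumes the usual Ethernet wire overhead of 20 bytes per frame
# (7 B preamble + 1 B start-of-frame delimiter + 12 B inter-frame gap).

LINK_BPS = 10_000_000_000          # 10 Gbit/s
FRAME_BYTES = 64                   # minimum Ethernet frame
WIRE_OVERHEAD_BYTES = 20           # preamble + SFD + inter-frame gap

bits_per_frame_on_wire = (FRAME_BYTES + WIRE_OVERHEAD_BYTES) * 8
pps = LINK_BPS / bits_per_frame_on_wire
print(f"packets per second: {pps / 1e6:.2f} Mpps")        # ~14.88 Mpps
print(f"cycles per packet at 3 GHz: {3e9 / pps:.0f}")     # only ~200 cycles
print(f"payload to store: {LINK_BPS / 8 / 1e9:.2f} GB/s") # 1.25 GB/s
```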

So what is our goal then? A scalable system that is able to capture and generate packets at high speed, which we define, somewhat arbitrarily, as more than or equal to 40 Gbit/s. How are we going to achieve this? The answer is in two parts. The first part is to use DPDK, which stands for Data Plane Development Kit, to maximise the single machine performance. Using DPDK you can develop high speed packet processing applications on more or less commodity hardware. And then we will use OpenFlow switches to distribute traffic over multiple machines. So what does that look like? It looks like this. You have an OpenFlow switch which is receiving 40 Gbit/s; you define some flows, which cause the OpenFlow switch to direct the 40 Gbit/s of traffic to four different capture machines. Each of these machines stores the data, and once the attack is over you can shut them down, collect the data and recombine it into a single capture.

So, what is DPDK? A data plane development kit doesn't really speak to the imagination, I would say, but it is a library for fast packet processing and one of ‑‑ a couple of its main feature are zero copy, it allows the network hardly to directly copy data to the memory of the machine without using, utilising the CPU so that is able to do different things while the network data is streaming in. That is called direct memory access.

Then it provides fast and safe implementations of ring buffers and normal buffers, making development of multi‑threaded applications much easier. You will know that doing things thread‑safe and handling concurrency is incredibly difficult and very hard to get right, so having a platform that provides this for you makes things a whole lot easier.

Thirdly, it has been designed from the ground up to support multiple cores. The idea, the philosophy, is that you use a single thread on each core and it does one specific task: it takes something out of a buffer, transforms it somehow, and transmits it somewhere else, or not, depending on your application.
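
This is not DPDK code, but a rough conceptual sketch of that run‑to‑completion pipeline idea: one worker per core, each doing a single task, connected by queues standing in for DPDK's lock‑free rings.

```python
# Conceptual sketch of the DPDK pipeline model (NOT real DPDK code):
# one process per core, each doing one task, connected by queues
# that play the role of DPDK's ring buffers.
import multiprocessing as mp

def rx_loop(rx_ring: mp.Queue) -> None:
    """Core 1: pull packets from the NIC (simulated) into a ring."""
    for _ in range(1000):
        rx_ring.put(b"\x00" * 64)       # pretend 64-byte packet
    rx_ring.put(None)                   # end-of-stream marker

def compress_loop(rx_ring: mp.Queue, out_ring: mp.Queue) -> None:
    """Core 2: transform packets (here: compress each one)."""
    import zlib
    while (pkt := rx_ring.get()) is not None:
        out_ring.put(zlib.compress(pkt))
    out_ring.put(None)

def writer_loop(out_ring: mp.Queue) -> None:
    """Core 3: write the transformed packets to stable storage."""
    with open("/tmp/capture.bin", "wb") as f:
        while (blob := out_ring.get()) is not None:
            f.write(blob)

if __name__ == "__main__":
    rx, out = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=rx_loop, args=(rx,)),
               mp.Process(target=compress_loop, args=(rx, out)),
               mp.Process(target=writer_loop, args=(out,))]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```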

So, using DPDK, we want to capture 64 byte packets, which is the worst case scenario in some sense, at 10 Gbit/s, that is 1 DVD every four seconds, in PCAP format on commodity hardware. 1.25 gigabytes per second is quite a lot of data: if you know that a conventional hard drive can write about 150 megabytes per second sequentially, you would need about 9 of these drives working at peak capacity to write all this data. So we introduced compression into the tool. Here is a selection of compression methods; the graph shows the time it takes to compress the Linux kernel to a RAM disk, comparing compression ratio against compression time. The one we chose provides quite a nice compression ratio in little time. There is actually an even better one, called LZ4, which we basically forgot to look at.
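
A similar sketch for the storage side; the 150 MB/s sequential write rate is the figure quoted above, while the 10x compression ratio is purely an illustrative assumption, not a measured result from the talk:

```python
# Storage arithmetic for 10 Gbit/s capture of 64-byte packets.
capture_rate_gbs = 1.25          # GB/s arriving, as computed earlier
hdd_write_mbs = 150              # MB/s sequential write, as quoted above

drives_needed = capture_rate_gbs * 1000 / hdd_write_mbs
print(f"drives without compression: {drives_needed:.1f}")   # ~8.3, i.e. about 9

# Small packets repeat most of their header bytes, so ratios are good.
# The 10x ratio below is an illustrative assumption only.
assumed_ratio = 10
print(f"drives with {assumed_ratio}x compression: "
      f"{drives_needed / assumed_ratio:.1f}")                # under one drive
```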

So what then are some intermediate results that we managed to achieve with DPDK? Using this compression and specially crafted 64 byte packets, you can capture to a single conventional hard disk drive using only three cores. Now, these three cores is a bit arbitrary, but it is state of the art. Still, it's quite a nice achievement. One thing to consider about the compression is that it actually works best with smaller packets, because small packets have a large amount of data that is repeated in each of them, because of the headers: IP addresses, source address, and so on. With bigger packets the advantage will unfortunately decrease, so you may still end up needing a RAID.

Handily enough, generating packets at line rate, with 64 byte packets again, is possible using only a single core. So generating, which is just copying packets from memory into the network card, is quite easy.

That is for DPDK.

So now we want to use OpenFlow to split a stream of data over multiple machines. OpenFlow is a method of, well, let me just read what it says here: "it allows direct access to and manipulation of the forwarding plane of network devices such as switches and routers". That is very nice. What does it look like? This is an OpenFlow switch; it has a secure channel to a controller, the controller decides where each packet goes and it populates the flow table, and the flow table is in the hardware. So this means that you can customise the way the switch handles each packet, in theory.

So, we need to define something that we can split the traffic on, because you can't just say to the switch, give me one quarter of the traffic on each port; it doesn't work that way. You have a couple of candidate options. For example, you can use the source UDP port, which in the data we analysed is fairly random, as it is assigned by the operating system, so you can put a mask on that. The OpenFlow specification doesn't specify that you can put a mask on it; it is implemented in Open vSwitch, and the problem is that the OpenFlow definition is so wide that the implementations vary a great deal. Another candidate is the IP address, where you put a mask on the IP address and the last bits decide into which bin it will go and to which port the traffic will be routed. And finally, the best contender is the equal cost multipath routing algorithm in each switch. That is used when, for example, in BGP you have three routes which each have an equal cost, so they are each the best route in some sense; then by combining those routes you can achieve a higher bandwidth.
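
As a rough sketch of these three candidate split keys, here is how each one would assign a packet to one of four capture ports (plain Python just to illustrate the logic; the MD5 choice anticipates the hash discussed a little later):

```python
import hashlib
import ipaddress

N_PORTS = 4  # number of capture machines behind the switch

def by_udp_src_port(udp_src: int) -> int:
    """Mask/modulus on the UDP source port (randomly assigned by the OS)."""
    return udp_src % N_PORTS          # same as masking the last two bits

def by_ip_last_bits(src_ip: str) -> int:
    """Mask on the last bits of the source IP address."""
    return int(ipaddress.ip_address(src_ip)) % N_PORTS

def by_ecmp_hash(src_ip: str) -> int:
    """ECMP-style: hash a header field, then bucket the hash."""
    digest = hashlib.md5(src_ip.encode()).digest()
    return int.from_bytes(digest, "big") % N_PORTS

print(by_udp_src_port(53211),
      by_ip_last_bits("192.0.2.45"),
      by_ecmp_hash("192.0.2.45"))
```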

So let's look at the performance of these candidates. If you look at the UDP source port of a random DDoS attack that we captured at our university, and you apply a modulus to the UDP source port, which means that you count up to a certain number and then start over again, and you use the remainder to decide to which port it goes, then you get this graph. What it shows is the standard deviation on the left‑hand side and the relative standard deviation on the right‑hand side. In order for each port to get an equal amount of packets you would like both the standard deviation and the relative standard deviation to be as low as possible. Now, the standard deviation doesn't say much if you don't know what the average is, but the relative standard deviation does. As you can see here, if you split by the UDP source port using modulus 2, which is equal to a bit mask on the last bit, then you have a relative standard deviation of 2%. The relative standard deviation says something about the deviation from the average for each value that you consider, so that means only 2% difference for each connector; and that means that if I have 38 gigabits of traffic, each port will probably be able to handle the traffic it gets, because not one of the ports will exceed 10.
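
A small sketch of how the evenness of such a split can be evaluated: compute the standard deviation and relative standard deviation of the per‑port packet counts (the port list here is randomly generated for illustration, not the measured DDoS trace):

```python
import random
import statistics

def split_quality(ports, modulus):
    """Std dev and relative std dev of packets per bucket after a modulus split."""
    counts = [0] * modulus
    for p in ports:
        counts[p % modulus] += 1
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    return stdev, 100 * stdev / mean   # relative std dev in percent

# Made-up stand-in for the UDP source ports seen in a DDoS trace:
random.seed(1)
sample_ports = [random.randint(1024, 65535) for _ in range(100_000)]

for m in (2, 4, 8):
    sd, rsd = split_quality(sample_ports, m)
    print(f"modulus {m}: stdev={sd:.0f}, relative stdev={rsd:.2f}%")
```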

So this is quite nice. Unfortunately, as I mentioned, UDP source port bit masks are not in the OpenFlow specification itself, but depending on the vendor your mileage may vary.

So this is what such a flow table may look like. This is split in four, for four connectors. You put a bit mask on the last two bits, which is equal to a modulus of 4, and if you go back you see that at a modulus of 4 we have approximately 3% deviation. So in this case, this would work out quite nicely.

So, what if we look at the last octet of an IP address? That is implementable in every OpenFlow switch, because it's required by the specification. You can probably already see that this graph looks somewhat different from the one I showed before, in the sense that it goes all over the place, and the important part of this graph is that the relative standard deviation goes up to almost 100%, which means that some ports will be getting almost all of the traffic while others will be getting almost none. So in this case it wouldn't really work well for load balancing the traffic.

Then, the way equal cost multipath routing is implemented in switches, or at least the recommendation of OpenFlow, is to take a hash of some field in the connection and use that to assign it to some connector. I have taken the MD5 hash here, but that also depends on the implementation by the vendor; the OpenFlow specification says you need to implement some type of hash, and which way you do that is completely up to you. If you use an MD5 hash of the IP address you get quite nice results: the relative standard deviation is only around 1%, which is clearly usable for our use case.

All right, so this equal cost multipath routing is the most promising, actually, and it's also the most generic, so it works for DDoS attacks as well as for other traffic. Unfortunately, with the switch we used, if you turn on equal cost multipath routing we start to lose 90% of all packets. If you want to avoid packet loss that is probably not a great way to start.

So, the ECMP algorithm is a very good match to the problem, but the implementation is not a very good match, which is a contradiction, but still. What is the current status? This is something we are still working on, and I am hoping that from the expertise in this room I will get some ideas of how to pursue this further. For some types of traffic splitting is easier than for others: DDoS traffic comes from so many attacking sources, and the packets are small, so you have small flows. We are still working on finding a generic way to balance flows. ECMP is great; if OpenFlow vendors start implementing it in a good way, we are done. Unfortunately, on the hardware we had available it was not working well.

So what is the conclusion of this? DPDK allows line rate packet capture at 10 Gbit/s, and I want to stress how relatively easy it is, if you are a software engineer, to design fast packet processing applications with it. They really provide a lot of libraries that you can use to implement such an application, and the development of DPDK itself is also going quite fast. They are also working with Mellanox now, who are providing their own drivers, so DPDK will also be compatible with those cards; we tested this as well with the Mellanox ConnectX‑4, which is a 100 gigabit interface, and it works almost out of the box.

So, OpenFlow compatible switches have the potential to scale the capture speed horizontally, if the implementations are further refined so that they better follow the specification; the OpenFlow specification has some recommendations, but following them is completely up to whoever is doing the implementing. If we combine all this, then we are able to capture at more than or equal to 40 Gbit/s.

I am a big fan of OpenSource stuff; you can find the tool here at this URL, and using it is as easy as compiling DPDK and running DPDKCap, so try this at home if you want. That is it for my part. Thank you for your attention. I would like questions and/or comments now or offline, whatever you want.

CHAIR: Please remember to say your name and affiliation.

AUDIENCE SPEAKER: Bengt Gorden, and I am from Resilans, Sweden. I just wonder about the synchronisation of the packets. Do you get packets reordered? How do you keep them in sync?

WOUTER DE VRIES: We are still working on the reordering, but one feature of the network cards is that they are able to add a very precise timestamp to the packets before they are handed off to the memory of the CPU, so if you use that timestamp then it should be possible to reorder them into the same order as they came in. Depending on the implementation of the OpenFlow switch that may still cause issues. Yes.

BENNO OVEREINDER: Just a question from me, for a better understanding. Couldn't you do ECMP yourself with OpenFlow and with OpenDaylight control or something like that?

WOUTER DE VRIES: OpenDaylight.

BENNO OVEREINDER: Isn't it something you can implement yourself, ECMP with OpenFlow? Why wait for the vendors?

WOUTER DE VRIES: Yes, it is possible, but there are two sides to that. On the one hand, if this ECMP is defined by OpenFlow and the vendor should implement it, why should I bother with implementing it myself? I can wait until they fix their implementation. Of course it's possible, I think, to implement this yourself, but you have to be careful not to saturate the link to the controller, because if you have enough flows you will eventually DDoS the controller of your own OpenFlow switch, so that is something to take into account. But if the ECMP implementation is not fixed quickly enough, we will have no choice but to move in the direction that you mentioned.

BENNO OVEREINDER: Thank you.

LESLIE CARR: Any more questions? I can run to the sides of the room if anyone holds up their hand or if you want to try the microphones again. Well, thank you very much for an excellent talk.

(Applause)

Next up we have Pavel Odintsov from FastVPS speaking about OpenSource FastNetMon.

PAVEL ODINTSOV: Hello. I am Pavel Odintsov from FastNetMon.

First of all, I want to share some stats about DDoS mitigation in our company. On this slide you can find the number of DDoS attacks over the last six months. As you can see, there is no real change in these graphs: no growth, but also no solution for DDoS attacks. DDoS attacks have become our day‑to‑day job.

As you can see, in many cases when we discuss DDoS attacks we are thinking of incoming DDoS, but from the data centre side I would highlight another part of this, because it's also outgoing, and each attack hits us in both places.

As you can see, we have a significant amount of outgoing DDoS attacks, because we have a very big number of customers and some of them have security problems, but we handle these attacks very fast and try not to impact other sites.

The most important protocols for incoming and outgoing DDoS attacks are TCP and UDP. For incoming DDoS attacks we have a significant amount of UDP traffic, and TCP attacks are not very popular in our case. I couldn't find any explanation why; I think it's related to the security of our network, and UDP DDoS attacks are much more popular against our networks.

Outgoing DDoS attacks have a significantly different distribution. That is related to ingress BCP 38 being deployed in our network, but actually this doesn't help much in our case, because most of these attack types are still very, very dangerous. So, when I mention such numbers of attacks per month, you could count at least a single attack per day. We monitor and check our network round the clock, but you could ask why we couldn't handle all of these attacks manually; definitely we couldn't. I have shown it on this slide, a recent DDoS attack; please take a look at the axes. We reach 10 Gbit/s in two minutes; nobody in this world could handle these attacks manually. We could get a monitoring notification about the DDoS, we could find the target of the attack, but we couldn't mitigate it in time, and our network would go down.

10 Gbits per second is not a very good description of this attack type; a link could handle it without any problems, but actually you have about 4 million packets per second, and even a top grade switch or router could be broken by this number of packets. Actually, we have a process for handling DDoS, but we couldn't mitigate these attacks without automation. So let's automate it. There are two big markets for DDoS mitigation, two big pieces of it: we have hardware solutions, with many well‑known vendors who provide DDoS filtering boxes, and we have Cloud solutions, but both of them have significant problems. First of all, it's very costly. I could buy tons of switches, I could buy top grade routers, but I couldn't buy every single DDoS filtering box. So that is a very big problem, but it is not the only problem.

I have mentioned multiple problems of DDoS equipment: it is too expensive, and it needs dedicated network engineers. I have experience with multiple of these products and I could share my experience. They need separate engineers, and actually, for round the clock operation, we need engineers who look at the graphs and monitoring from these boxes; we could not handle these attacks automatically. They need significant changes in the network, they need separate places, they need changes in monitoring protocols, and they are not fully automated: you couldn't just install these boxes and solve the problem of DDoS. And a very important issue here is the rising amount of amplification DDoS attacks, which might saturate your uplinks, and then you could not filter any traffic in your network because your links are oversubscribed.

Cloud security companies could be mentioned as a good alternative to expensive DDoS filtering boxes, but actually they are not. They increase latency; if you live in Europe or the United States it's not a problem, but if you live in South America or Australia or China or South Korea, it will be a huge problem for you. They have a significant reaction time: if you catch an attack in your network you need some time for traffic redirection, and I have average numbers for this process, it's about 3 to 4 minutes, but in the case of the attack mentioned earlier your network will already have gone away. They still need some traffic diversion. There are some companies who offer a service for filtering traffic all the time, but it's very costly, so you will use it on demand; and for on demand DDoS security you need some tool which can redirect your prefixes to the filtering Cloud.

But actually, some DDoS filtering companies break the Internet, really. You could use BGP for high availability, you could have multiple backup channels, but you could not back up your DDoS filtering company. I hit this issue multiple times in my experience and it's a really big issue, because my side works OK, my uplinks work really awesome, but the DDoS filtering company goes away, I have broken my SLA and my customers are really unhappy in this case.

Let me introduce the solution for all these problems: FastNetMon, an OpenSource DDoS mitigation toolkit. It was developed in ‑‑ and we have used it for about two years; we can guarantee 200 gigabits per second of traffic monitoring and mitigation. So, what can we do? We can let the NOC sleep; everybody cares about customers and about SLA protection. Who likes to sleep? I like good sleep, and that is why we developed FastNetMon: it works fully automatically, you sleep and DDoS attacks get mitigated.

We can detect any DDoS attack that would overflow a channel or overload equipment in a few seconds. Really, we can catch such an attack in a really short time period. We can block the whole host in our network, or we can block only the malicious traffic of the attack. We can save your network, your upstreams, routers and servers, and you can actually save your SLA.

So, FastNetMon is a network monitoring solution and it works with all modern traffic metering standards: sFlow v4 and v5, NetFlow, IPFIX and SPAN/port mirroring. I prefer sFlow and port mirroring as the most reliable and fast options for traffic capture.

Here you can compare the time for different traffic capture back ends. With NetFlow we have a very big delay in DDoS detection, because data from routers arrives at us with some significant time delay; it's about 30 seconds in most cases. sFlow and mirror ports don't have any such delay and can detect a DDoS attack in a few seconds. We have a really, really wide list of supported distributions, and we have a really big community which helps us to test the toolkit on different platforms; we have CentOS and upstream ports, and we are supported in the CloudRouter project.

So how can we block the attack? We are speaking your language. We are not speaking in English, we are speaking in BGP. I don't know any other language than BGP which can be understood by any equipment, really, any vendor, any software; we were using BGP before. We also use an extended version of BGP, BGP FlowSpec, which is very useful for DDoS mitigation. We can do all tasks for traffic filtering on Juniper, Cisco or other boxes at line rate without any problems. Actually, you don't have to use BGP or BGP FlowSpec; you could send mail with a custom script, or you could even poll it over the HTTP protocol. So we have a really wide list of supported vendors: Cisco, Juniper, Extreme, Linux and many others; you could ask our community about particular boxes, or you could check it manually and share your experience.

So, mirror capture, this is my favourite, because we can use deep packet inspection, deep analytics, in this case, because we have the whole packet payload; we can find so many anomalies in this traffic and mitigate the attack in a more reliable way. In this case, not all capture engines are created equal. PCAP is really slow. A better option than PCAP is PF_RING; it is OpenSource and really, really fast: it can handle about 40 Gbit/s without much load on your CPUs.

So, let's talk about installation. It's really simple; we have a Perl install script that you can use to install the stable version, or you can install the Git version, which has a significant number of new features. You could ask why we don't have binary packages yet: we have a really big number of supported distributions and we could not build binary packages for all of them. And we are developing the toolkit very fast, and that is why I prefer installation with this script.

Configuration is very simple. We have a single configuration file and a central log file; if something goes wrong, you can look there. Because with sFlow and NetFlow we cannot detect traffic direction correctly without external data, you need to specify all your networks in a separate file, the networks list.

The toolkit supports subnets up to a /10 and this works perfectly.

So, the DDoS notify script: it's a simple script, and it gets a few options. The toolkit calls it on ban, on unban, and when we collect a significant amount of attack traffic. As you can see, it's very simple. Attack detection configuration is straightforward: you can enable sFlow or NetFlow, you can specify a ban time for attacks, and thresholds for packets per second, megabits per second and flows per second. After starting up, in case of any DDoS attack you get this report; the report has a big amount of information about the attack: the number of packets, the packet distribution and much else. The core algorithm is very simple: we count the number of packets for each host in our network. And we have one magic thing: DPI. It can detect amplification attacks; we can detect SNMP, DNS, NTP and SSDP attack types. It's a very powerful engine and it works really fast.
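
For illustration, a minimal notify script might look like the sketch below. It follows the commonly documented FastNetMon convention of passing the client IP, traffic direction, packet rate and action as arguments, with attack details on stdin; treat the exact argument order as an assumption and check it against the documentation for your FastNetMon version.

```python
#!/usr/bin/env python3
"""Minimal FastNetMon-style notify script (a sketch, not the shipped script).

Assumed call convention: notify.py <ip> <direction> <pps> <action>,
where action is "ban", "unban" or "attack_details"; verify this against
your FastNetMon version before relying on it.
"""
import smtplib
import sys
from email.message import EmailMessage

def main() -> None:
    ip, direction, pps, action = sys.argv[1:5]
    details = sys.stdin.read() if action != "unban" else ""

    msg = EmailMessage()
    msg["Subject"] = f"[fastnetmon] {action} {ip} ({direction}, {pps} pps)"
    msg["From"] = "fastnetmon@example.net"      # placeholder address
    msg["To"] = "noc@example.net"               # placeholder address
    msg.set_content(details or "no details supplied")

    with smtplib.SMTP("localhost") as smtp:     # assumes a local MTA
        smtp.send_message(msg)

if __name__ == "__main__":
    main()
```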

So, we have an awesome community and they contributed these awesome graphics, because we have an internal stats engine in FastNetMon, so data about traffic and attacks can be exported to Graphite or to InfluxDB. If you need some help you can use the mailing list, follow my Twitter, join the IRC channel, or write me mail directly. Thank you so much.

(Applause)

BRIAN NISBET: So, yes, indeed, questions.

AUDIENCE SPEAKER: Thank you for a very interesting presentation. A quick reminder: there is no such thing as a silver bullet. It's a very nice perspective, but most likely it covers infrastructure only. What do you do with application layer attacks?

PAVEL ODINTSOV: So we use DPI for this case, and we want to offer this in the next few months, so we could detect attacks on DNS and HTTP, and in some cases detect spoofed packets.

AUDIENCE SPEAKER: What is the difference between mitigation and detection? Can you state that?

PAVEL ODINTSOV: In some cases mitigation and detection could be treated as the same thing; in our case we do detection first. Mitigation is done by Juniper or Cisco software for filtering, so we offer mitigation with the help of external devices, actually.

AUDIENCE SPEAKER: So it's not a part of your solution?

PAVEL ODINTSOV: Yes, definitely.

AUDIENCE SPEAKER: Generally speaking, about detection, while we are at it: it's OpenSource, so everyone can see your detection method, and if I can see how you detect attacks I can keep a really low profile and not trip your thresholds, so my attacks are never detected but still cause an outage on your network. What do you think about that?

PAVEL ODINTSOV: Actually, we have Lua support in the core of the engine, so you could write custom detection scripts for the toolkit and detect any type of DDoS attack, and not contribute your changes upstream. So it's just a script.

AUDIENCE SPEAKER: Awesome. Once again, thank you, it's a really good start. The wrong route to go, but it's free, after all, and this is great. Thank you.

PAVEL ODINTSOV: Thank you.

(Applause)

JEN LINKOVA: One question on your algorithm. You said you look at the traffic for individual /32s, which means that for v6, if I use privacy extensions within a /64, I can generate a lot of traffic which you probably would not notice, right?

PAVEL ODINTSOV: Unfortunately we don't have support for IPv6 yet.

JEN LINKOVA: I am so disappointed.

PAVEL ODINTSOV: Sorry. But we are working hard on it.

AUDIENCE SPEAKER: I have a question about the use of acceleration in your solution: which algorithm do you use to detect which type of attack is coming into your network, and do you use something like GPU acceleration for packet matching to make it faster?

PAVEL ODINTSOV: So we have some ideas about that, but we don't have support from hardware vendors, and we don't have network cards with offload engines for this project.

AUDIENCE SPEAKER: I think that making the algorithm a little bit more flexible will put you in a better position to mitigate, because DDoS is actually a never ending fight between attackers and networks protecting themselves from attacks. Thank you so much.

BRIAN NISBET: I am looking several kilometres in each direction to see the mics. Are there any other questions? If not, thank you very much.

PAVEL ODINTSOV: Thank you.

(Applause)

BRIAN NISBET: So, the next piece of this session is actually on the NRO NC election. There is one additional member from the RIPE region to be elected to the Number Resource Organisation Number Council, and we have three candidates, so what we will be doing over the next ten minutes or so is giving each of them an opportunity to address the plenary session. So, on the assumption ‑‑ I can certainly see two of them. Is Dimitry in the room as well? Yes. So we are going to go in alphabetical order, no more than two, maximum of three minutes each, please, to present, and we have timers and electric shocks and things like that. So yes, in alphabetical order, Dimitry, please.

DIMITRY BURKOV: Thanks, everybody. I am pleased to see you all. I am not sure how many of you remember me; I have tried to make it to most RIPE meetings over the last few years, and I have been going to them since my first in 2007. I was born in Eugene and also lived in the United States. I was one of the first ISP people in Ukraine, and I am domain administrator of ‑‑ I have some experience besides RIPE, the NRO and the CRO: that was participating in ICANN Working Groups, also being active in the local Internet community and the Internet exchange in Ukraine, and being a big pusher for IPv6 and DNSSEC; we organised workshops in our country. So my role initially was to communicate with everybody and make sure everybody understands how the process works. I learned a lot. It was kind of a jump, because I never knew this part of our relationship, and I am glad to say that it has improved a lot. ICANN now has a special section on registration and people have way more awareness, so I hope that was important. Most of the processes, though, still revolve around our meetings, so I can be better informed and more dedicated to domain and financial issues. So I will be glad to represent you guys there, but remember that it's not so important who is on the NC; what is being done is more important. What is being done here is way more important than that. Having said that, I ask you to cast your vote, and if you have any questions, you have until Friday to come and talk to me. I will be around, and I guess I will pass the mic to the next candidate. Thank you.

BRIAN NISBET: Thank you very much.

(Applause)

Almost perfectly on time, setting an excellent example. So, next up, Nurani.

NURANI NIMPUNO: Hello everyone. I work for Netnod; we run Internet exchange points, we provide DNS Anycast services and we run one of the 13 root name servers. When Shane asked me if he could nominate me, I asked for a bit of thinking time first, because I wanted to think about what the role does and what is needed for it. And I concluded that you need someone who is well rooted in this community, someone who really understands this community and understands the needs of the community. The second part is that you need someone who actually understands the world outside of this very nice community: not only the ICANN community, but also the sort of Internet governance and inter‑governmental community, if you can call it that.

So, when I looked at that, I thought, yeah, well, I can do that. I have been an active member of this community for a decade and a half, and I have been, sometimes voluntarily, sometimes involuntarily, active in various Internet governance contexts. So, what should this role do? Well, from my experience in ICANN, I think the number community is very often a neglected part of the ICANN community, and the needs of the number community are often an afterthought. While I am not advocating for a larger role, we should make sure that the needs of this community are properly heard, so that's what I would hope to do. As for what is ahead, well, we need to make sure that the IANA stewardship transition goes through; I think I can contribute there with my vice chair hat on. And as the IPv4 address space gets further depleted, we need to make sure there is proper stewardship of that as well. That is what I hope to do with the role, and I hope I have the trust of the community to do so. Thank you.

BRIAN NISBET: Thank you very much. And the third of our candidates, Sander.

SANDER STEFFANN: Hi. I have been part of this community for quite a while now, more than a decade, and for those who don't know me, I am the Co‑Chair of the RIPE Address Policy Working Group. This is the second time I am volunteering for this position. Last time it was a tie and we actually had to draw lots to decide who would win. So, but yeah, I think this is a really important position and I would love to volunteer my time to work on this. In my daily job I am a consultant, I do a lot of work for ISPs and enterprises, a lot of volunteer work as well, so, yeah, if you want me to, I'd be happy to do this task and, yeah, please vote for me.

(Applause)

BRIAN NISBET: OK. So, just to say, if you are interested in reading more of the expressions of support for the candidates, I am not going to try and read out the URL; you can find all this information on ripe.net, under Participate, Internet Governance, NRO NC nominations 2015, which as a sentence is only slightly shorter than the URL itself. So there is lots of information there on how to vote, and all the other bits and bobs as well. So thank you to the three candidates, and I wish you all good luck.

So moving on, I will hand the mic over to Leslie.

LESLIE CARR: All right. So, next up are our lightning talk slots, and just as a reminder, we still have some lightning talk slots open for tomorrow, so please submit your presentations and ideas at ripe71.ripe.net. Next up is, I really hope I say this right, Vicente de Luca, talking about extending the FastNetMon project which you just heard about. Also, as another reminder, we are still taking nominations for two new members of the Programme Committee. It's a great job, you get to be on phone calls with me, which I think is a huge plus, and with Brian; between the two of us your day cannot get any better. All right. Thank you.

VICENTE DE LUCA: So I work as a network engineer for Zendesk. This is my team; we are five guys involved in taking care of our own data centres. This is Zendesk, if you don't know what it is: it's a software company that runs a platform for customer support, so it is a ticket system, and it runs in a cloud; you don't need to install any software or any agent on your computer.

So we are talking about DDoS, right, so why are we targets? Why is Zendesk targeted? Do people hate us? No. The way it works generally is that when you sign up for Zendesk you would like to have support.yourdomain.com as an alias for a Zendesk server, and this is good practice, because if your site is down, if you are having any issue, your customers are still able to reach the support centre, open a ticket and report the issue. When we do this, if someone attacks support.acme.com, this is going to Zendesk, not to the customer, and we have some cases where people love to bomb forums or help centre pages.

So, the good, the bad and the ugly. The good is that we have been relying on a Cloud provider for doing the mitigation, so the mitigation phase here is, I would say, resolved. It's working quite fine; our provider responds 100% of the time with successful mitigation techniques. So this is OK, we are not talking about mitigation here. We are talking about detection. The bad here is that, before relying on FastNetMon and the whole set of OpenSource tools that I will be introducing here, our on call engineers only knew about an attack when the site was down, and at that point we had our customers impacted, and we don't want this, right. The ugly is that we had been relying only on humans to do the detection, so it was working like: OK, my site is down, let me check the graphs, let me check the logs and see what is going on; OK, I am under attack, so I am going to advertise my targeted prefix via BGP to the Cloud provider and they do the mitigation. This was taking too long; in some cases it was taking almost an hour to trigger, because sometimes the DDoS is not sufficient to bring the site down, it only brings the performance down, and if you don't have proper instrumentation it's really hard to detect.

So how to improve it? This is the list of tools we have been relying on. The main core is FastNetMon; I don't know where Pavel is, but really, really, thank you for all of your time and all of the weekends spent building this super cool tool. This is the main core; without FastNetMon we cannot do anything here. We have been relying on InfluxDB, which is similar to Graphite but simpler, and on Grafana; we have been using a Redis database; Morgoth, an OpenSource tool for anomaly detection, which I am going to cover; BIRD for the BGP daemon; and net healer, which is our own code that we have been developing internally. This is OpenSource as well, on GitHub, but I would prefer to call it experimental code; I am a network engineer, not a software engineer, so you guys know how things work here. It glues all these moving parts together and provides an API to query the current status, attack reports, etc.

So, how does this work? The cycle starts with a DDoS attack targeting one of my IPs, one /32 of my network. FastNetMon detects this attack and as soon as it detects it, it triggers the ban; but in my case I am using the ban script from FastNetMon to populate a Redis database with the details. Then I have net healer listening to this database, and as soon as there is a new report, net healer will see if it matches any of the predefined policies and will trigger an action, which can be to send an e‑mail, page the on call, or advertise the covering /24 to our mitigation provider.

So, I am using a quiescence of 15 seconds per /32. In this cycle, the quiescence means that after FastNetMon has detected an attack, it will sleep for 15 seconds without detecting; after 15 seconds, if the attack still remains, I am going to populate a Redis key again, and this keeps running until the attack ceases. Examples of policies in net healer are: if I have two reports in the last five minutes, page the network operations on call; or, if I have more than four reports in less than five minutes, inject the /24 route into BGP, and this will trigger the mitigation automatically.

And as I mentioned, if I have one report and Morgoth detected an anomaly, this is no good, so I am going to page the on call, inject the /24 route and trigger the mitigation.
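
A rough sketch of what that policy logic amounts to (plain Python, with made‑up function names standing in for the paging and BGP actions; this is not net healer's actual code):

```python
import time

# Timestamps of FastNetMon ban reports per targeted /32, e.g. fed from Redis.
reports = {}

def record_report(target_ip, now=None):
    """Apply the policies described above and return the action taken."""
    now = now or time.time()
    window = [t for t in reports.get(target_ip, []) if now - t < 300]  # 5 min
    window.append(now)
    reports[target_ip] = window

    if len(window) > 4:
        return "announce_covering_slash24"   # stand-in for the BGP injection
    if len(window) >= 2:
        return "page_network_oncall"         # stand-in for the pager action
    return "log_only"

# Example: five reports in quick succession escalate to the BGP announcement.
for _ in range(5):
    action = record_report("203.0.113.10")
print(action)   # -> announce_covering_slash24
```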



So this is customisable in net healer; you can edit the variables and configure it the way you want. Why net healer? Why do we need this if FastNetMon is a silver bullet? For ISPs it's easier to configure pre‑defined thresholds. In my case I don't know what the proper, the best, threshold is for my customers, because today I can have a customer doing, like, 100 megabits, and tomorrow they have issues, a lot of people are reaching their support, and they are doing 400 megabits; it is so variable that I cannot predict it. This is why we built net healer, to work better with these thresholds. So, to avoid false positives, we are trying to trigger different actions based on the number of attack reports, and net healer also allows us to integrate with Morgoth, PagerDuty, or anything else you want.

Why InfluxDB? It is a time series DB and it speaks the Graphite protocol; you can remove Graphite and InfluxDB will work exactly as a drop‑in replacement. Why not use Graphite? Because InfluxDB is super simple to install and to scale. I am really a fan of Graphite, but if you know and use Graphite, you will understand how painful it is to scale.
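
As an illustration of what "speaks the Graphite protocol" means, here is a minimal sketch that pushes one metric in the Graphite plaintext line format; the host and port are assumptions (InfluxDB's Graphite listener is commonly run on TCP 2003, the same port as Graphite's carbon daemon):

```python
import socket
import time

def send_graphite_metric(path: str, value: float,
                         host: str = "127.0.0.1", port: int = 2003) -> None:
    """Send one metric in the Graphite plaintext format: '<path> <value> <ts>\n'."""
    line = f"{path} {value} {int(time.time())}\n"
    with socket.create_connection((host, port), timeout=2) as sock:
        sock.sendall(line.encode())

# Example: a per-host packets-per-second reading, as a collector might export it.
send_graphite_metric("ddos.capture.203_0_113_10.pps", 4_000_000)
```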

Why Morgoth? Morgoth implements an anomaly detection algorithm. That means we are taking fingerprints of our traffic every 10 seconds, we are learning that traffic, and if we find an anomaly in it we are going to trigger an alarm. Everybody here, I presume, knows BIRD; it's really useful for network engineers, and that is why we chose it.

How does it look? This is my panel. The purple vertical bar means that Morgoth detected an anomaly, and at that point FastNetMon detected an issue, so we triggered the mitigation. It took nine minutes to completely resolve, because of BGP convergence and because our Cloud provider needs to detect the kind of attack and apply the policy to mitigate.

So this is another example of an ongoing attack; it took 8 or 9 minutes, the same type as before. Here is an example of traffic without an attack, so the tool is also useful for seeing traffic in terms of bps or flows; it's not only for DDoS detection, it can be used for other stuff as well. You have a /24 breakdown, so if you have a /20 you can see the 16 /24s and which one is spiking, so this is really good for isolating spikes and seeing who is consuming. It provides a REST API for the current status of my data centre: in this case here we are clear, and it shows which IPs are being targeted when we diverge to critical, and all of these states have different actions. And this is an API query to request details about a current attack, so I have the protocol, the peaks, the amount of data, etc.

And like Pavel mentioned, this is all work in progress, all of the ingredients are OpenSource, including net healer that we developed internally at Zendesk. If you guys have any ideas, issues, requests, feel free to join. Thank you.

(Applause)

LESLIE CARR: This went a little long, we have time for two questions.

AUDIENCE SPEAKER: Blake Willis. Sorry, I didn't catch that earlier, what did you say your average detection time was, just from the time of like an attack to the time that you fire out an alert?


VICENTE DE LUCA: Without the tool it depends on the human that was on call; with the tool, today we have an average of around 9 minutes from the attack start to complete mitigation, and this is because the tool needs to detect, we need to advertise a BGP route, and our Cloud provider needs to identify the attack. So it's taking less than 10 minutes.

AUDIENCE SPEAKER: How long is the detection portion?

VICENTE DE LUCA: It takes around one minute, because I am using NetFlow, not sFlow; since I am using NetFlow it takes around one minute to detect the attack.

AUDIENCE SPEAKER: And the rest is like the flow spec propagation and so forth?

VICENTE DE LUCA: Exactly.

THOMAS KING: I have a question about your InfluxDB set‑up: do you use clustering, and how much data are you storing?

VICENTE DE LUCA: If you are using InfluxDB, we are suffering a lot from a memory leak; there is a latest version, 0.9.5, and if you use that version with the TSM1 engine it should resolve this. Our set‑up is running on a single machine, eight gigs of RAM and eight processors. The CPU is really idle; memory is the biggest issue here.

LESLIE CARR: Thank you very much.

(Applause)

And for our next lightning talk we have Thomas from Flexoptix talking about the 101 of 100 gig interoperability.

THOMAS WEIBLE: Hello. I have got plenty of time now, but it's still going to be a lightning talk. I am the co‑founder of Flexoptix, and in our daily business what I mainly do is play around with pluggable transceivers, those tiny modules which bring the light onto the fibre. Today's talk is mainly to make my life easier: there are many questions about 100 gig optics and how they can interconnect with each other. That is the reason why I am giving this talk, and especially also to help you guys, if you want to build 100 gig links, to make your life easier. You can compare it with the building blocks you see on the screen: you build a house or something with your kids, and either it works or it doesn't. It's similar with 100 gig: some blocks fit each other and some don't.

So, a bit of history and the path so far. The 100 gig rocket launched two years ago already; starting on the left side we have got the CFP pluggable, doing 100 gig for quite a while now.

From left to right you see the later developments: in the middle CPAK, which is very proprietary in our view, and CFP4, the latest 100 gig results in terms of design and port density.

And I want to point out one thing, because I think this is very important for the audience: we have to redefine the terms LR and SR again for 100 gig, because there are new things coming out. And I point out one thing, and that is LR4. If you are thinking that LR4 is still LR like in the 10 gig world, you are wrong. What is the difference? When we think about LR4 we think about four wavelengths doing 25 gig each, on a tight grid, over 10 kilometres, right. These are the building blocks up there, from 1295 to 1310 nm. This is the spectrum we have for LR4, up to 10 kilometres: great, but expensive. There are some people around who said, well, if we want to do 100 gig, 10 kilometres is way too much, because in a co‑location or data centre we will never have such distances, at least in the next couple of years; we need something cheaper. So they developed CWDM4, which other people call CLR4 or LR4 lite; at the end of the day it is the same idea with a different, wider channel spacing, from 1270 up to 1330 nm. This is not interoperable at all, so CWDM4 to LR4 does not work at all. So, to sum it up: I first wanted to make a slide showing all the possible matches of the optical interfaces, but this would have blown up the slide, so I did it the other way around. I said what is not possible, and I pointed out with the orange bars the data centre and co‑location interconnects, up to 10 kilometres. In the first row we have SR10, and what is so special about that: this is what is used for CFP and CFP2, and quite a lot of people have been running 100 gig on it for one or two years now, for example interconnecting at Layer 2; they use CFP SR10 on multimode. But based on how the newer form factors are set up, it's not possible to support that any longer. I don't want to say it won't be doable in 12 or 24 months down the road, but up to now it is not possible to interconnect the newer form factors with CFP SR10 technology, because they are SR4 technology on the multimode side.
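
To make the grid clash concrete, here is a small sketch comparing the nominal lane centre wavelengths of the two variants; the values are the commonly quoted grid centres, listed from memory rather than taken from the talk, so double‑check them against the respective specifications:

```python
# Nominal centre wavelengths (nm) of the four 25G lanes, from memory --
# verify against the 100GBASE-LR4 spec and the CWDM4/CLR4 MSAs.
LR4_LANES = [1295.56, 1300.05, 1304.58, 1309.14]   # tight LAN-WDM grid
CWDM4_LANES = [1271, 1291, 1311, 1331]             # wide CWDM grid

# No LR4 lane lands anywhere near the corresponding CWDM4 lane, so the
# receivers simply never see the transmitters' light in their passbands.
for lr4, cwdm in zip(LR4_LANES, CWDM4_LANES):
    print(f"LR4 {lr4:7.2f} nm  vs  CWDM4 {cwdm} nm  -> offset {abs(lr4 - cwdm):.1f} nm")
```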

On the other side, what I introduced before, CWDM4/CLR4: it is doable to put that into the bigger form factors, but no one will do it, because it's a pricing thing. Every vendor building transceivers these days is focusing on CFP4 and QSFP28, so there is no alternative and no need to go for CFP2 or CFP. Another point is that there are reasons to go for the older, bigger form factors, and that is power. And the last thing you can see there at the bottom, the black bars: this is more the metro area, beyond 10 kilometres up to 80, which we know from the 10 gig world as ER or ZR. There, basically, the latest form factors like QSFP28 and CFP4 do not support that at the current stage; maybe in the future, in 12 to 24 months, but at the current stage there is no support for that at all. So there is a little bit of a clash. So, when you ask me for a summary, what is my opinion on that, it's pretty simple: if you want to do cheap 100 gig links, go for QSFP28; it's cheap on the switching side and it's the cheapest 100 gig transceiver compared to the other form factors. Now, if you want to be a little bit future proof, in terms of "oh, I have to operate 40 kilometres or 80, going to coherent or DWDM", there are all kinds of technologies from the 10 gig world out there already or starting to become commodity. I personally see the CFP2 form factor as the one which will last for at least a couple of years, because it can provide enough power for all the latest technologies; for coherent DWDM you need a signal processor, and this takes power.

So that is from my side. One more point, and I think there are some folks out there who have done this: Cisco is building CPAK, and I know Brocade is building converters; converters might also be a solution for upcoming 100 gig deployments, so that you can use older slots in the line cards for the latest transceiver components like QSFP28.

All right. Thank you very much for listening. This was my lightning talk. If there are questions, I am still around. Thank you.

(Applause)

BRIAN NISBET: Thank you. Are there any questions?

AUDIENCE SPEAKER: Kemal Sanjta, I work for Facebook. It's more of a remark than a question, but basically the main problem with 100 gig interoperability is the fact that vendors are releasing hardware and line cards that support 100G but they are not testing interoperability between themselves and the other vendors, and I think that is the main problem when it comes to interoperability. If they did it, we wouldn't have to spend months and months troubleshooting, oh, what is causing CRC errors between this particular vendor and this optical vendor, or what is basically causing this particular port on the line card not to come up. That is a much bigger problem: basically there is no fundamental interoperability testing between the vendors in the first place, and if we sort that out we might be in much better shape. Thank you.

AUDIENCE SPEAKER: Blake Willis. Just a quick question about tunability interop. Have you seen vendors, Juniper, Cisco, others, being able to tune your third party optics from the CLI effectively?

THOMAS WEIBLE: Specially for 100 gig?

AUDIENCE SPEAKER: Yes.


THOMAS WEIBLE: I haven't seen it so far, no.

BRIAN NISBET: Thank you very much.

So, that is the end of this session. However, the first day of RIPE 71 is not over, oh no, there is more. At 18:00 there are two events on, one of which is the BCOP task force, which is in here, and the other is a BoF session on Internet based network modelling, which will be in the side room. They are due to run for about an hour, until 19:00. But there is more: at 18:30 in Champions restaurant, which I think is the restaurant across the way there, you can meet the wonderful, bubbly, exciting NCC board, plus they have beer. This may or may not be important to you. But they are lovely even on their own, even if they didn't have beer. At 19:00, those of us who don't wish to meet the NCC board can just turn up for beer anyway, as the welcome drinks will be in the same area. We still have space for some more lightning talks this week, so please submit some more. As the chip in your brain should be telling you, rate the talks. Neither Benno nor Leslie are shouting at me, so thank you all very much and have a good remainder of the day.

(Applause)