Open Source Working Group session
19 November 2015
11 a.m.

CHAIR: Good morning everyone. Welcome to the Open Source Working Group. This time we got a bigger room, so I feel we are getting more and more important; that's great. Thank you all for your support.

I have a few things to go through with you at the beginning. First of all, welcome. We should select a scribe, and Michael volunteered, and he will do the minutes for us, thank you very much, Michael. We need to finalise the agenda, so are there any additions to the agenda that was sent to the mailing list? I don't see any hands, so thank you very much.

And now we should approve the minutes from the previous meeting, but there was some technical issue — I don't know what happened, but they are not online, and I really don't remember if they were sent to the mailing list. So what we are going to do is send them to the mailing list once more, and we can approve them together with the minutes from this meeting at the next RIPE meeting. I apologise for that, but there were some technical issues.

So, that's all from my side. I forgot to introduce myself: my name is Ondřej Filip and this is Martin Winter, and we are the co-chairs of the Open Source Working Group. And now I will pass the mic to Martin because he will introduce the first presentation.

MARTIN WINTER: Okay, so, there is quite a full programme today. The first speaker — I hope he is in the room, I haven't seen him yet. The name is hard for me to pronounce, but Ishida from NTT is talking about GoBGP, yet another Open Source BGP daemon; it was mainly done for the SDN era, which created the demand for it. After that we have a talk about freeRouter, a really interesting project which I had never heard about until about two months ago, when the author contacted us on the mailing list. He did an amazing, complete implementation of many routing protocols, so he will talk a little bit about it; you may never have heard about that project either, so it's very interesting. And then we have Jan talking afterwards about input handling — especially relevant if you are involved in an Open Source project — and about testing, basically how to develop better code.

We also have a few quick lightning talks at the end. We have Robert from the RIPE NCC, talking about their Open Source toolset, like the Atlas part; then we have Sacha from DE‑CIX, who is talking about a Java interface to the Atlas probes; and finally we also have Peter Hessler, here from the OpenBSD side, talking about OpenBGPD and what's going on there.

With that I welcome our first speaker.

ISHIDA WATARU: So, I'm from the NTT Software Innovation Centre. First of all, thank you for having me at this great meeting. Today, I'd like to introduce GoBGP, yet another Open Source BGP daemon. GoBGP is an Open Source BGP implementation hosted on GitHub and, as you can see from the name, it is written in Go. I forgot to write about the licence: it is the Apache 2 licence. The main target applications of GoBGP are: first, a high performance route server for internet exchanges; second, integration with data analysis systems; and third, a BGP daemon for white box switches.

So, the reason why we started to develop another BGP implementation is that the SDN era has begun. Today I won't strictly define SDN; I just mean that network engineers are starting to write more and more software, and this trend can be seen in many sessions during this meeting and, for example, in the hackathon. So we need an SDN native BGP implementation, and that's why we started this project.

And what does SDN native mean for us? The first thing is high performance. Existing Open Source BGP daemons are mainly single threaded, so they can only exploit one CPU core. GoBGP can exploit multiple cores thanks to the Go language, and it is aimed to run on modern commodity hardware which has many CPU cores and tonnes of memory.

The second is an API first architecture. Existing BGP implementations are mainly CLI first, so if you want to do some automation or integration, you have to use "Expect", literally, and it is very painful, you know. So GoBGP uses gRPC. It is an RPC framework initially developed by Google and Square; it uses HTTP/2 for the transport layer and Protocol Buffers for serialisation, and it has about ten language bindings, so integration with your software is smooth.

This is gRPC's website. It supports C, C++, Java, JS, Python, Ruby and so on.

The last one is a vendor neutral configuration model. Existing BGP daemons' configurations vary, so we decided to use OpenConfig, a YANG model for BGP which is being discussed in the IETF. And on Tuesday this week, Cisco just announced that its IOS is starting to support this OpenConfig model. This doesn't mean you can use GoBGP's configuration file on Cisco's routers directly, but the model is the same, so you can write a script to convert GoBGP's configuration file to Cisco's configuration with a one-to-one mapping. Because the model is the same, it's getting easier to switch between different BGP implementations.

Okay, so for a simple review. What SDN native means for us is: first, high performance — for this we use Golang and exploit multiple CPU cores; second, an API first architecture — we use gRPC for this; and lastly, a vendor neutral configuration model — we use OpenConfig for this.

Next I'd like to share the basics of GoBGP. When you install it, it comes with two binaries: one is gobgpd, the daemon, and the other is gobgp, a CLI tool. The CLI tool uses gRPC underneath, so you can use it to configure the daemon, but you can also write your own software to configure or retrieve stats from the daemon.

And the CLI tool looks like this. This shows the neighbours, and this shows the detail of a neighbour. And you can see the global RIB like this. GoBGP supports not only getting state but also notifications. This is the GoBGP monitor command: if you type this, you get a notification from gobgpd when the best path has changed or a withdraw has been received. You can do the same for a neighbour's status, so if the state machine's state changes, you get notified.

If you pass the -j option, the output will be in JSON format, so it is easy to parse. Even more, if you write a script like this — this is in Python — running it results in this. So you don't need any "Expect". It is clean, and error handling is easier.

Next, I want to talk about the target applications. The first is a high performance route server for internet exchanges. GoBGP can be run as a route server. It supports multiple RIBs and flexible policy enforcement points. I know that in this field BIRD is really doing a good job, but I also heard from operators that they want to have a backup implementation. So, for the first stage, we want to be that backup implementation, and we have a route server implementation.

The supported policy conditions and actions are like this. As conditions we can use the prefix, the source and destination neighbour, the AS path length and content, community and extended community — where we can use regular expressions — and the RPKI validation result. The actions are permit/deny; add/replace/remove of community and extended community; AS path manipulation; and arithmetic operations are also supported.
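As an illustration of how such condition/action matching can work, here is a toy evaluator in C. All types, field names and the single-community simplification are invented for the example; GoBGP's actual policy model is much richer and follows the OpenConfig schema.

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy model of route server policy matching: a condition on prefix and
   community, and a permit/deny action. Illustrative only. */
typedef struct {
    uint32_t prefix;      /* IPv4 prefix as a host-order integer */
    uint8_t  prefix_len;
    uint32_t community;   /* one community value, e.g. (65000 << 16) | 100 */
} route_t;

typedef struct {
    uint32_t match_prefix;    /* match routes inside this prefix */
    uint8_t  match_len;
    uint32_t match_community; /* and carrying this community */
    bool     permit;          /* action: permit or deny */
} policy_t;

/* True if q falls inside prefix p/len. */
static bool prefix_contains(uint32_t p, uint8_t len, uint32_t q) {
    uint32_t mask = len == 0 ? 0 : 0xffffffffu << (32 - len);
    return (p & mask) == (q & mask);
}

/* Returns the action of the first matching policy; default deny. */
static bool policy_eval(const policy_t *pols, int n, const route_t *r) {
    for (int i = 0; i < n; i++) {
        if (prefix_contains(pols[i].match_prefix, pols[i].match_len, r->prefix)
            && pols[i].match_community == r->community) {
            return pols[i].permit;
        }
    }
    return false;  /* no policy matched: deny */
}
```

A real policy engine would evaluate lists of such statements per neighbour and per RIB, which is where the "flexible policy enforcement points" mentioned above come in.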

The second is integration with data analysis systems. In this use case, GoBGP just serves as the BGP subsystem, and we already have some users for this use case: BGPmon, a project run by Colorado State University, which is a realtime version of Route Views, and FastNetMon, which had a presentation on Monday. So in this mode GoBGP is just the BGP part; it uses gRPC to integrate with your data analysis system.

And the last one is a BGP daemon for white box switches. There is a big wave of open networking: network commoditisation has started and the use of white box switches is expanding, especially among the hyperscalers. GoBGP can run on top of these white box switches; it has already been ported to Cumulus Linux and Open Network Linux.

This shows how it works. The FIB modification is done via the ZAPI — the API for the Zebra and Quagga routing daemons — and via netlink. ZAPI can do IPv4 and IPv6 unicast FIB modification; other FIB modification is done directly via netlink. So ZAPI just supports v4 and v6 unicast FIB modification, but GoBGP also supports EVPN, which is a technology to construct L2 VPNs and exchange MAC addresses between BGP daemons. In that use case we have to do MAC forwarding database modification, so for that we use netlink directly to configure the FIB in the Linux networking subsystem.

One of the use cases is EVPN plus VXLAN. As I said, MAC address exchange happens in BGP, and we already checked interoperability with Cisco and Juniper boxes at Interop Tokyo this year. For this interoperability check we used Cumulus Linux switches.

And other features: we have a full route MRT injection feature, which can inject a full table in less than one minute. So you can use GoBGP for testing your new gear, to check that it can properly handle a full table. As for route monitoring features, MRT dump and BMP are currently supported, so you can use MRT and BMP for route monitoring. The route reflector feature and AddPath are also on the roadmap. We have RPKI validation, FlowSpec and VPN support — we can do EVPN and L3 VPN things. We also have VRF.

Okay, so this is my last slide. I'd like you to try this out; your comments and feedback are welcome, patches are even more welcome, and a star on GitHub is also very welcome. So thank you very much.


MARTIN WINTER: Thank you. Questions, please, at the microphone if you have.

AUDIENCE SPEAKER: Thomas King from DE‑CIX. Actually that's not a question, it's a comment. You know, we have BIRD out there, most IXPs use that route server, it does a good job, and thanks to the guys for developing it. But I think we need a second implementation that scales to the same levels as BIRD does, and for me the GoBGP stuff looks really promising. So thank you for that work, and I hope we can work together in the future to improve its scaling even more.

MARTIN WINTER: I have a quick question about multiprotocol support. Do you support v6 and v4 over the same session, or is it split into two different ones?

ISHIDA WATARU: Currently we use different BGP sessions. We don't have multi-session ‑‑ you are talking about multiprotocol on one BGP session, right?


ISHIDA WATARU: It is supported, yeah. Yes, it is supported. You can do IPv4, v6 and so on, with capability exchange on one BGP session.

MARTIN WINTER: Okay. Thank you. No more questions. So thank you very much.


Okay, next we have Csaba Mate.

CSABA MATE: I will briefly explain my slides, but afterwards I will show you some commands from a live network. I think that will be better.

So, I am the single author of this router, and it started as a hobby — it currently still is one — but I think it has now reached a quality at which I should talk about it.

So, it's basically Java code, but the idea behind it is this: the belief is that when you write the code in small parts, it is easier to check for correctness.

So, it's Java code, but you can compile it with GCC or anything like that, though I do not suggest you do so.

And this code is very frequently exercised by automatic tests. These are test cases where I bring up some routers, connect them in a network, send traffic through these routers, and check whether, for example, ping works or not.

I have about 1,000 of these kinds of tests, but this number is increasing.

The architecture: so, basically, you can have as many freeRouter instances in a single server as you want. They are unprivileged Java processes, and each of them is a full router: they send and receive packets, they handle ARP, they handle IPv4 and IPv6, and the packets come in through UDP sockets. Because these processes communicate over UDP, you can separate the roles between servers — you can split them from a single server to multiple servers, and moreover you can, for example, separate them geographically. These processes can also be configured to give redundancy to each other, so they can synchronise the config automatically.
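The frames-over-UDP idea can be sketched like this in C. The Ethernet header layout used by the helper is standard, but the UDP framing and the forwarding loop in the comment are illustrative assumptions about the architecture described above, not freeRouter's actual wire format.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch: each router is an ordinary userspace process, and whole layer 2
   frames travel between processes as UDP payloads. */

/* An Ethernet frame carries the EtherType at bytes 12-13 (big endian). */
static int frame_ethertype(const uint8_t *frame, size_t len, uint16_t *out) {
    if (len < 14) {
        return -1;  /* too short to hold an Ethernet header */
    }
    *out = (uint16_t)(frame[12] << 8) | frame[13];
    return 0;
}

/* A forwarder process would then loop roughly like:
     n = recvfrom(udp_sock, frame, sizeof(frame), 0, ...);  // frame in
     frame_ethertype(frame, n, &type);                      // classify
     sendto(udp_sock, frame, n, 0, ...);                    // frame out
   Because the transport is plain UDP, the two processes can live on the
   same host or on different hosts — the separation the talk describes. */
```

The bounds check before touching bytes 12-13 matters because the "wire" here is untrusted UDP input.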

So, these processes deal with full layer 2 frames, with the headers added on, and they process all of these packets; but you can configure the router to pass these packets to other JVM processes, and you can start any kind of process there — for example, you can start a vendor image. I use this for interoperability testing, but you can use it for any purpose.

You can pick the packets up in different ways, but I have some homegrown utilities for it: for example, one which is a bit faster than a plain socket, and nowadays I have also written a zero copy utility which is much faster than the socket.

And there are other utilities from the history of this router; they were used for mobile applications, for example an HTC framework. And there are points in the code where I update a table — for example the MPLS table or the routing table of a VRF — and at these points the table could be exported to any kind of external system.

A list of features — I want to mention them, and I hope that anyone can pick an interesting topic from it. Yes, it does the full control plane and data plane for all of these. You can see a wide range of routing protocols; I hope there are none in common use that it doesn't support. You can see a lot of kinds of LSPs which it supports, and some crypto algorithms and other things.

And because of the Java code and the — I hope — very well designed application, you can put an unlimited number of encapsulations on a single packet.

There are encapsulations and small servers. These small servers are fully functional; for example, in the case of HTTP it is not comparable to Apache, but for testing purposes it can do the job.

The key question is performance. It really depends on what you want to do with this software, and the other key factor is how fast the CPU is that you would like to use for the work, but the code is designed so that there are no limiting design factors. I mean that when a packet arrives at the router, it comes in through the socket interface, and until that packet leaves the router it is handled on one thread. But there are other utilities that can split a single flow of packets across multiple UDP ports; for example, this is useful when you have a single fast interface: in this case the single physical interface can be split to 10 or 100 or 1,000 worker threads that do the forwarding, the encapsulation, etc. So the code can do 2 megabits, or 10 gigabits, or more.

Basically, when you do plain IPv4 or IPv6 forwarding, I see that the same amount of CPU time is needed to pick the packet up and put it into the UDP socket as is needed by the forwarding code — so 50% of the CPU time each.

Exact numbers: for example, you can imagine it can do 10 megabits of routing — or, by the way, you can put this code on your cell phone and configure it to do BGP, just for a test.

And at the other end of the range, in our company we deployed it on a box where we split a single 10 gig interface into five flows, and it works.

How do we use it? I previously worked for another company; at that point we produced physical routers, about 30 of them. They had four mobile interfaces and two or three Ethernet interfaces. Now I work for this research supporting network, and we use it as a route reflector, originating our prefixes with it, and now also originating FlowSpec blackholes with it.

This is the primary route reflector in our network, and we have a secondary one from another vendor.

In our network we need reachability information for the BGP next hops, just to know whether they still exist and whether we can use the prefixes or not; we call it next-hop tracking. Our box is also capable of mirroring out traffic from a single interface; for this we also participate in LDP and in traffic engineering, just to be able to receive these packets and mirror them. This is the kind of interface that I talked about before.

And the address families in which we currently participate are IPv4 and IPv6 unicast, multicast, VPN and VPLS, and all those on the slide.

I have talked about route reflectors. We are also planning to use EVPN in the data centre, and for this we currently do some testing on EVPN. I think this case shows that it can interoperate with other vendors in the case of EVPN, both in the data plane and in the control plane. Moreover, it can terminate these tunnels and bridge them.

And for testing and other purposes, I use it by bringing up other vendors' images, connecting my stuff to them and testing interoperability.

Questions a bit later. This is the point where I would like to show you some commands.

You can issue very simple commands, and I can do some tricks with them — for example, watch them: in this case it refreshes. Or it has another version, which I call display. Probably a much better example would be this one.


This line changes, and it signals that by putting a mark there. It's basically an interface based configuration, so probably we should see the interfaces. There you go. A bit different, but probably everyone can read it.

The reason I show this here is that in this router there is no default table, so everything must be in a VRF. But you can find some interesting things here, for example this one, or this one. This one has been tested with the big vendors and it seems that it works. This one, as far as I know, only exists in this code, but it seems that it works too.

In this case, we can see that we have some neighbour switched paths and we do not advertise any point-to-point ones.

Probably more interesting is BGP, and I'm going to continue with this; probably it will fit on the screen. So here we go. This is the route reflector: the addresses of the routers, the state, whether they do 32-bit AS or not, whether they are capable of route refresh or not, the AS numbers, and some other things which are negotiated. You can see that, in the case of IPv4, we negotiate a lot of address families. You do not see any prefix counts here because they are on other...

At this point you can choose something; we could talk about the other groups, or RPKI — that's another topic — but unicast is much more interesting, I think. The AS number, the prefixes — they are different because it can be configured to do soft reconfiguration or not, that is, to keep the received prefixes and do updates on them or not. Currently we do not use it: it's a route reflector, so we do not need it. There are two other counters: the number of prefixes that are not advertised and those which are advertised.

You can see this is a subset of a full table, because a route reflector does not need the full table, but some full tables appear: two of our border routers are sending full feeds, and I have tested it with more prefixes and it easily handles many more feeds. Currently two full feeds come in and about five or six full feeds go out from this route reflector. This is a single peer; you can issue the same command for it and get the config, or you can dig in for some reason and see what the other side accepted, etc., etc.

Probably the most interesting is the status of the peer. A bit technical output, but I hope it reads well.

The reachability status comes from the next-hop tracking output. Graceful restart capability for the peer — sent and received. And additional paths: in our network we do not do additional paths, for various reasons, but it can do add-paths — so, for example, add-paths for EVPN or VPLS. And the prefix counts — the trick is that they change frequently. This is why I do two types of best path calculation.

And you can see the same in the best path statistics. So we have this many full best path calculations, and a full one took about ten seconds; but when the number of changes is small enough, we only compute best paths on the changed prefixes, and in that case it takes milliseconds, which is much faster compared to our vendor's route reflector. On this topic, some time ago I wrote to the Routing Working Group — please comment on it — about how to get PEs to prefer a much faster route reflector over the others. I think my time is over. Thanks.


MARTIN WINTER: Any quick questions?

AUDIENCE SPEAKER: I have questions for you. I have seen your implementation of a control plane in Java, which has some disadvantages when you work with such a big amount of data, like the routes in the Internet. So I have a question for you: how do you deal with the garbage collector in Java, and how does it affect the performance of your router?

CSABA MATE: I use the G1 garbage collector, which is very, very fast. Here you have the garbage collector counts; these are also in milliseconds. You can see that the biggest pause it generated was 4,000 milliseconds — sorry, these values are multiplied by 8. I measure every second how much time was spent on collection in that second; so this line represents one second, and this line is the time measured for that second, which means that at most about half a second was used by the garbage collector. And that was, for example, because we needed to feed a full table out, which is useful for us. Basically, you can compile this code with GCC, and in that case you don't have to use a garbage collector, but in that case I measured about a 10 or 20 or 50% decrease in performance. Moreover, GCC doesn't use the extensions of modern CPUs unless you compile for that specific CPU, and only in that case can you use, for example, the crypto instructions.

AUDIENCE SPEAKER: I have another question, about timers. When your code is blocked by the garbage collector, don't you miss timers for the protocols? How do you deal with timers in the protocols?

CSABA MATE: For example, in the case of BGP, the open message carries the hold time, and from that the keepalive and the other timers are computed. The best path calculation is event based, so when I go to recalculate the current paths, BGP gets notified, for example by a normal interface error if a loss happens. In this case BGP will probably do a big batch of updates, and that's it.


Okay. Next up we have Jan.

JAN VCELAK: So, hello, I am Jan, I work at CZ.NIC in the research and development department, and I am one of the developers of Knot DNS. This talk is about secure input handling, mostly in network daemons, but I will mainly talk about the experience we gained when developing Knot DNS. This topic is really large and I definitely won't cover everything I would want to, but I tried to pick the most important stuff, which relates to our testing and to testing the inputs to the programme. So, what can we do to make input processing safer? We can definitely use some defensive programming techniques. We can use static analysis — I won't talk about static analysis much; it can find some bugs, but it usually works well for small units with, I don't know, scalar inputs. For large inputs from the network or from users, dynamic analysis usually does better, so I will say something about dynamic analysis, and at the end I will show you how we do coverage guided fuzz testing, which is closely related to dynamic analysis.

So, as far as defensive programming goes, what can we do? We can do some input sanitisation, we can check all the buffers, we can expect that any operation in the code can fail, so we check for the error status after every operation, which is fine. We can also incorporate it into the development process: we are trying to write well arranged code, in preference to code which nobody understands the next day.

We are doing code reviews, we are writing tests. Maybe I didn't have to mention it, because everybody is obviously doing that.

So, to show you an example of the defensive programming we do: when you accept some input from the network, you usually get some data; the data are structured into fields and you have to get the values of the fields. There are some repeating patterns: it's always get the data, check if there's enough data, read the value of the field and seek to the next field. It's always the same, and of these four operations — retrieving the data, checking the buffer, reading the value and skipping to the next field — only one is actually doing something important. For this reason, we abstracted this approach into a structure we call a wire context. You initialise the wire context with the buffer and its size; the context keeps the position in the buffer, and then you just do the reads as if all the operations will succeed. This context will never read beyond the buffer: it always checks the boundaries, and if there is not enough data in the buffer, it will just return — in this case probably zero — and set a flag which indicates that there was an error when reading from the buffer. So you can abstract all this checking into the wire context and do the error checking at the end, and you still have a guarantee that you don't touch memory outside of the buffer.

And such code is much easier to read than code which checks the buffer size after each of these reads.
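To make the pattern concrete, here is a minimal C sketch of such a wire context. The names (`wire_ctx_t`, `wire_read_u16`) are illustrative, not Knot DNS's actual API; the point is that the bounds checking and the error flag live in one place, and callers check for errors once, at the end.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* A reading context over an untrusted input buffer. */
typedef struct {
    const uint8_t *data;
    size_t size;
    size_t pos;
    bool error;   /* sticky flag: set once any read runs out of data */
} wire_ctx_t;

static void wire_init(wire_ctx_t *ctx, const uint8_t *data, size_t size) {
    ctx->data = data;
    ctx->size = size;
    ctx->pos = 0;
    ctx->error = false;
}

/* Read one byte; on exhausted buffer, return 0 and set the error flag. */
static uint8_t wire_read_u8(wire_ctx_t *ctx) {
    if (ctx->error || ctx->size - ctx->pos < 1) {
        ctx->error = true;
        return 0;
    }
    return ctx->data[ctx->pos++];
}

/* Read a 16-bit big-endian field with the same contract. */
static uint16_t wire_read_u16(wire_ctx_t *ctx) {
    if (ctx->error || ctx->size - ctx->pos < 2) {
        ctx->error = true;
        return 0;
    }
    uint16_t v = (uint16_t)(ctx->data[ctx->pos] << 8) | ctx->data[ctx->pos + 1];
    ctx->pos += 2;
    return v;
}
```

A parser then reads a whole header with consecutive `wire_read_*` calls and tests `ctx.error` a single time afterwards, and it can never touch memory outside the buffer.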

Now we are getting to dynamic analysis. What dynamic analysis means is that you are instrumenting running code. It has the advantage that it's closer to the development process: you can use the analyser tools directly during development, so you are closer to discovering the bugs — you discover them earlier than with static analysis or any other testing.

The problem with dynamic analysis is that you really have to reach the faulty state in the code, which means that, for example, you have to cover the path with your unit tests or initiate the state manually — but you will never cover all the possible paths in the code. Dynamic analysis tools are usually single purpose; there are tools which do just one thing, for example tools for memory analysis, tools for thread analysis, etc. As for the tools we use: we use Valgrind — usually when people talk about Valgrind, they are actually talking about Memcheck, which is part of Valgrind; Valgrind is in fact a set of tools, and it also provides profiling tools and tools useful for thread analysis. Then there is also the nice Clang compiler; this is part of LLVM, and it provides AddressSanitizer, MemorySanitizer, LeakSanitizer — which is what we want — and also ThreadSanitizer, plus some other sanitizers which are not present in Valgrind.

So I will focus on the sanitizers for memory, because this is where we deal with inputs from the user or from the network. You are probably familiar with Valgrind. It is a virtual machine: it takes your binary code, does some translation in the background and then runs the code in the virtual machine. The problem with this is that it's very slow, because it's running in a single thread and there is a lot of this translation. It heavily depends on what your programme is doing: if you are just running one loop in the code, it will probably be faster than if you are jumping from one part of the code to another.

It also uses a lot of memory. But it's great that you can analyse programmes in binary form, without needing the source code. Valgrind can check out-of-bounds accesses, uninitialised accesses, memory leaks and some invalid uses of resources. It basically wraps the memory allocation functions; for example, for each allocation it adds some red zones before and after the allocation, and then it uses these red zones to check whether the memory was overwritten or not. It has limited support for checking global objects and stack variables, because you cannot easily add these red zones onto the stack or around the globals without the compiler's help — the code translation would be very complicated. So Valgrind cannot do that very reliably.

And AddressSanitizer from Clang — I highly recommend, if you are not familiar with AddressSanitizer and are doing C development, take a look at this project. It's very easy to use and it's, I would say, mostly equivalent to Memcheck, but it is a different approach: AddressSanitizer requires instrumentation code to be inserted during compilation. It basically wraps all memory accesses with some code, and the advantage of this is that it's very fast. It doesn't use as much memory as Memcheck, and it can do checks for global variables and for variables on the stack.

So this is, essentially, how AddressSanitizer works. I have the slide here because it's a prerequisite for the fuzz testing.

It inserts this instrumentation code, and when the programme is started, it reserves all the usable memory. One eighth of this memory is called shadow memory. The shadow memory keeps track of whether all the other memory is allowed to be used or not: the state of every 8 bytes of memory is tracked in one byte of shadow memory.

This expects aligned accesses, and accessing an 8 byte structure looks like this code — this is actually the code AddressSanitizer will insert during instrumentation. Let's say we are accessing some 8 byte structure at an 8 byte aligned address, which is in the variable ADDR. The first thing we have to do is compute the address of the shadow byte, which means we just divide the address by 8 — this is the byte shift — add some offset, and we get the shadow address. If this shadow byte is zero, it means we can write into this memory; this memory is available. If there is some other value, it means there was a problem, and AddressSanitizer will print the stack trace and crash the programme with a report rather than a plain segfault. What this picture shows is that all memory is translated to the shadow memory, and if the programme by accident tries to access the shadow memory itself, then by the same computation you get to this bad memory, which is protected by the operating system, so the programme will crash anyway. So accesses hit either valid or invalid memory, and all these problems are detected.
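As a sketch, the shadow computation described above looks like this in C. The offset constant is the one commonly used on 64-bit Linux and is an assumption here; the principle is just `(addr >> 3) + offset`.

```c
#include <stdint.h>

/* Every 8 bytes of application memory map to one shadow byte, so the
   shadow address is the application address divided by 8 plus an offset.
   The offset below is illustrative (a typical 64-bit Linux value). */
static const uintptr_t kShadowOffset = 0x7fff8000;

static uintptr_t shadow_address(uintptr_t addr) {
    return (addr >> 3) + kShadowOffset;
}

/* The instrumented 8-byte load then behaves roughly like:
     if (*(int8_t *)shadow_address(addr) != 0)
         report_error(addr);   // print stack trace and abort
     ... actual load ...                                            */
```

Note how the division by 8 means the shadow region maps onto itself and the protected "bad" region, which is why a stray access to shadow memory also crashes, as described above.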

So, now we are getting to coverage guided fuzz testing. Fuzz testers usually work by feeding the programme with some inputs, and when the programme crashes, they have found a problem. This is how most fuzzers work, the coverage guided ones as well. But generating purely random inputs is suboptimal; it's much better to generate the inputs in some smarter way, and this is where the coverage information is very useful. Basically, any coverage guided fuzz testing tool works this way — this is the loop the fuzzer runs: it has some corpus of data; it takes an input from the corpus and makes some small modification to it; then it feeds this input to the programme. If the programme crashes, we have found a problem. If the programme didn't crash, we look at the coverage: if the input discovered a new code path, then this new input is added to the corpus. The essential step is the modification of the input, and this is what distinguishes the coverage guided fuzz testing tools.
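A toy, self-contained version of that loop might look like this in C. The "coverage" here is a hand-made bitmask from a stand-in target function — real tools obtain it from compiler instrumentation — but the corpus-growing logic is the same idea.

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in target: sets one coverage bit per branch it takes. */
static uint32_t run_target(uint8_t input) {
    uint32_t cov = 1;               /* entry block always reached */
    if (input > 0x40) cov |= 2;     /* branch A */
    if (input > 0x80) cov |= 4;     /* branch B */
    if (input == 0xff) cov |= 8;    /* deep branch */
    return cov;
}

/* One fuzzing round over the corpus: mutate each input and keep the
   mutant if it reaches coverage we have not seen before. Returns the
   new corpus size. A deliberately trivial mutation is used. */
static size_t fuzz(uint8_t *corpus, size_t n, size_t max, uint32_t *seen) {
    for (size_t i = 0; i < n && n < max; i++) {
        uint8_t mutated = (uint8_t)(corpus[i] + 0x41);  /* "mutation" */
        uint32_t cov = run_target(mutated);
        if (cov & ~*seen) {          /* new code path discovered */
            *seen |= cov;
            corpus[n++] = mutated;   /* keep it for further mutation */
        }
    }
    return n;
}
```

Starting from the single seed input `0`, each kept mutant unlocks the next branch, which is exactly why coverage feedback beats purely random inputs: the corpus climbs through the programme's paths step by step.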

So, from these tools, we have experience with these two. The American fuzzy lop, which is pretty well known, I would say. It found a lot of real bugs in a lot of software you are using every day — browsers, libraries for processing images, etc. It hooks into the compilation process, so you need the source code. It adds some instrumentation to the code which basically approximates the coverage of the code, or at least can discover some new paths in the code. And this tool is very smart in mutating the input. It uses some kind of genetic programming combined with random inputs; if you take a look at the project website, there are actually links to some very interesting articles, like how to generate an input which will be a valid JPEG, and so on. So, it's a very smart tool, and it also supports some experimental fuzzing of binaries without compilation, using the QEMU emulator.

The LibFuzzer is the other tool, and it's heavily inspired by the AFL. It's actually part of LLVM as of the latest version. It doesn't depend on LLVM itself, but it is dependent on the Clang sanitizers, because it requires the coverage support which is implemented in the Clang sanitizers.

It's doing real coverage, no approximation, so if you have some input corpus, you can easily see what parts of the code are really covered.

And the input modification is not that smart, but they have some other smart ideas, like instruction traces. For example, if in your code you are doing some string operation with a buffer, these instruction traces actually show what's happening with the individual bytes in this buffer, and can modify the input to discover a new path based on what this string operation did.

And now we are getting to the end. I wanted to show you how easy it is to use coverage guided fuzzing, because we recently discovered with this a few bugs which had been in our code forever. The first thing you have to do is to write a small test driver. It's easy with LibFuzzer: basically you have to define a function with this name. This function will get the buffer with the data and the size of the buffer, and whatever you do here is actually your test. So in this test we are actually using the Knot DNS packet parser: we just feed the packet parser with the buffer, run the parsing and free the packet. If there was a problem with this code, the code would crash and we would find the problem.
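A driver of that shape might look like the following. The packet functions here are simplified stand-ins, not the real Knot DNS API, so treat it as a template only:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Toy packet type standing in for the real parser's structures. */
typedef struct {
    uint8_t *wire;
    size_t   len;
} pkt_t;

static pkt_t *pkt_new(const uint8_t *data, size_t size)
{
    pkt_t *p = malloc(sizeof(*p));
    if (p == NULL)
        return NULL;
    p->wire = malloc(size ? size : 1);
    if (p->wire == NULL) {
        free(p);
        return NULL;
    }
    memcpy(p->wire, data, size);
    p->len = size;
    return p;
}

/* Stand-in "parse": a real driver would call the project's parser
 * here; any memory error inside it is then caught by the sanitizer. */
static int pkt_parse(const pkt_t *p)
{
    return (p->len >= 12) ? 0 : -1;   /* a DNS header is 12 bytes */
}

static void pkt_free(pkt_t *p)
{
    if (p != NULL) {
        free(p->wire);
        free(p);
    }
}

/* The function LibFuzzer looks for: it is called once per generated
 * input, and non-crashing runs must return 0. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    pkt_t *p = pkt_new(data, size);
    if (p != NULL) {
        pkt_parse(p);
        pkt_free(p);
    }
    return 0;
}
```

Linked against the LibFuzzer library and built with the sanitizers enabled, this is the whole test; the library generates the inputs and drives the loop itself.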

So, the next step is to get the LibFuzzer library, which is pretty easy: you download the source code, go to this lib directory and issue this command. This basically compiles the LibFuzzer code, and with the third line you get a static library which you can link to your driver.

Then you compile your test driver with this library. You mustn't forget to enable the sanitizer for the test driver, because otherwise the programme wouldn't crash when there was the kind of problem which the sanitizer usually discovers. And you have to enable the sanitizer coverage, because LibFuzzer is using this coverage to detect the new paths. Then you just get a binary, which you start, and that's all you have to do, because you don't have to provide LibFuzzer with any inputs — it can generate the inputs from scratch.

Okay, so what happens if we find a bug? The programme will crash and print the stack trace. If it's an overflow, as in this case, you would also get information about where the variable was allocated and the exact location of where the memory was overwritten. And you will also get the input which caused this problem; it's also written to a file, so you can use it in your test corpus and do whatever processing you want to do. Then you have to fix the bug, and that's all: you basically fix the problem, recompile your test driver and can run it again.

So, thank you for your attention. Hopefully you have learned something new and if you have any questions I will try to answer them.

AUDIENCE SPEAKER: Robert Kisteleki, RIPE NCC, I have been accused of being a geek, but I actually like this. It would have been educational to show at least an example bug that you actually found, that, look, it can discover such things. You know, just for our education.

JAN VCELAK: If you take a look at American fuzzy lop, there is a list of projects and bugs which were discovered by AFL, and there is Knot DNS with two links, so you will find the two recent problems we found with this. It was in, I don't know, parsing with some strange formatting and Nap...

SHANE KERR: I think this is a really interesting presentation, thank you, I think this is exactly the kind of thing I want to see in this Working Group. I had a question though. I was looking at the library, or the approach that you are taking for finding bugs where you set a flag in a context and then sort of check it later. Have you discovered ‑‑ that seems a little bit dangerous to me because if you refactor your code, you may not do the checks at the right place. So have you discovered any problem in usage that way? I understand the tension between not wanting to lose the logic of your code with a bunch of error checks, that's C programming. But on the other hand it seems a bit dangerous to me, that's all.

JAN VCELAK: It could be dangerous ‑‑ we did it only recently, like, I don't know, half a year ago, and my impression is that the code is much easier to read. In DNS there are a lot of resource types you have to parse, so it's a lot of code, and it's always the same, so if you just want to review the code for potential problems, you easily get distracted by an error check on every second line. So, till now, my impression is that this approach has helped me, yeah. Maybe it will bring some problems in the future; we don't know.

SHANE KERR: Maybe another presentation two years from now explaining how you fixed ‑‑ anyway...

MARTIN WINTER: Thank you very much. We are coming now to the lightning talks, a few short talks. First we have Robert from the NCC.

ROBERT KISTELEKI: I am Robert Kisteleki from the RIPE NCC, in this case representing the Atlas development team, and I'm going to talk about a toolset that lets you interact with RIPE Atlas, and how we got to it, and so on.

There are people out there who don't like graphical user interfaces, believe it or not. They prefer terminal-based things; they prefer actually running ping and traceroute and so on instead of interacting with GUIs, clicking, and trying to parse JSON. That seemed to be a roadblock for these kinds of people to actually start using RIPE Atlas. Also, we have seen that there are several people out there who ended up writing basically the same tools over and over again, because they wanted to twist this particular flag or change that ‑‑ "so I cannot really reuse what you wrote out there because I need it to be a bit different."

So, we felt that it might be a good idea to actually write something that multiple people can use, that is generic enough so that you can add some parameters and options and it will do basically what most of you could imagine. It will never do a hundred percent, but we try to be as good as we can.

So we built this toolset. It's available out there. It's Open Source; I'll talk about that a bit later. Basically, at the moment, it allows you to interact with RIPE Atlas. You can search for probes in the system: you can say give me probes close to Amsterdam, or from this AS, or from that country, and so on and so forth. It also allows you to search in measurements. So, for example, if you want to investigate something like a network event, then you can say what were the traceroute measurements in this time frame, and it will give you a list of them. You can also ask it to actually do a measurement and give you the results right now.

So, if you put this into the Atlas context, you can imagine that you can traceroute and ping from the command line and say, oh, give me five things from the US and then make them traceroute to my address. And then suddenly what you will see is five traceroutes on your console. Obviously this needs credits and configuration ‑‑ it's advanced functionality ‑‑ but you can do it. You can schedule measurements for the future. You can look at the results of previous measurements and so on.

Recently, we added a feature that lets people aggregate the results, so it's no longer just output being dumped on you; instead you can say, please give me a report per AS (the source AS, in a measurement's case) or per country, which lets you really dig into this, and you can see that, oh, from country X the connectivity is fine, from country Y the connection is not.

This is a very, very short example. What I did here was, I tried to ping a RIPE Atlas anchor in our network, and I said please use these probes and name it something-something, so the system went out, created the measurement and then said, well, here are your ping results. Nice and simple. The same is applicable for DNS measurements, for traceroute measurements and so on.

So, the toolset itself is called RIPE Atlas Tools. It's up on GitHub; its code name is Magellan. Under the hood it is using another library that we call Sagan, so you don't have to understand how the JSON is structured and you don't have to deal with different versions; it just masks all these things away. It's also using another tool called Cousteau, which is an API wrapper, so you can interact with the API and you can listen to the result stream. At the end, this is a command line toolset: you can install it and say I want to ping. All of it is fully Open Source. Come and help us if you would like to. In particular, in order to make it easy to use for everyone, we are looking for remote hands that know how to do packaging. At the moment we are on GitHub; if you know how to do Python things, you can install it and magic just happens. However, there are many people who would not want to go through that problem. So, we are trying to package this for the different distributions. We already have it in OpenBSD ‑‑ that was done over the weekend ‑‑ but we don't have it in the other BSDs, and it would be awesome to have OS X, because if I look around, most of you are using that. It would be nice to get the DMG, click install, and then RIPE Atlas measure. Otherwise, please come and give it a try, or ask me questions.

MARTIN WINTER: Okay, any quick questions? No, okay. Thank you.


Next up we have Sascha from DE‑CIX, talking about accessing Atlas through a Java interface.

SASCHA BLEIDNER: I am happy to see that many people here in the Open Source Working Group. I want to talk about jAtlasX, a toolset we developed which interfaces with RIPE Atlas. We developed this at DE‑CIX, so I am Sascha from DE‑CIX, from the R&D team actually, and we had Robert before ‑‑ they are also doing toolsets for RIPE Atlas. We do toolsets with Java, so you could ask: why are you using Java, why aren't you doing Python at all? I agree that usually you should use the language that fits the use case best, but sometimes you want to use the library or the language which you are more familiar with. So if you want to use Java for using RIPE Atlas, then jAtlasX is the way to go.

I want to give a quick motivation for why we developed jAtlasX at DE‑CIX. We wanted to do research on traceroutes: we were interested in AS routing paths going over different IXPs. So this is basically the work we have done for the traffic dependency talk which Thomas gave earlier at this RIPE meeting.

So, first of all, we needed to select the right tool, and from our perspective RIPE Atlas was the right tool, because, as you heard before, there is extensive coverage of probes out there. Even though there are some countries with no probe at all, it was fine for us, because we were interested in the countries where the most probes actually are. The second reason was that there's a traceroute measurement already. The third is that you can use it with the REST API, which is pretty handy, so we did that inside our toolset. And the fourth reason is that it's pretty easy to get the results back: if you do a measurement, getting to the results is handy.

As we're in the Open Source Working Group, I of course want to show some lines of code. Creating a measurement with jAtlasX is just as easy as creating a Java class: you just go and create a traceroute measurement class ‑‑ it's then an object ‑‑ and you just need an API key of RIPE Atlas so that it counts the credits for this measurement for you. And you just need the probe ID: you select the probe ID for a source, then just an IP address for the destination, and you put in a string there saying "jAtlasX test measurement" or something like that, and it will show up in the RIPE Atlas system with the same string. So that's all you need for a simple traceroute measurement here.

As I mentioned, you need a probe ID as a source; how do you get that? We also have some methods for this. You can gather the probe IDs by ASN: you can say, I want to have probes in this particular AS, so you just go and fire off this method, and it will return you a list of probes you can select from as possible sources. And then, as I said, for the destination you need an IP address, and if you want to do measurements from one probe to the other, you just need the IP address of the destination probe, so you can just go and ask this method: okay, I have this probe, what is its current IP address? And this is something I would like to highlight here ‑‑ it's kind of a feature request. Doing traceroutes from one probe to the other, it would be handy to have, instead of the target IP address, just a probe ID, so I actually don't have to care about what IP address this particular destination probe currently has. I would like to just input two probe IDs to have a traceroute between those two.

We also included handling of the responses, so if you want to get the results back ‑‑ and you can get various results back ‑‑ we have this response handler interface: you put in the JSON results and it will give you a list of arbitrary objects. There is a measurement handler, which gives you the measurement ID; the probe handler, which returns you some probe information; the probe list handler, which will get a list of probes with all the information available in RIPE Atlas; and of course the traceroute handler, which will handle the traceroute result and give you back a hop-by-hop path with the IP addresses in this traceroute.

So, we already cover, I think, the basic functionality with jAtlasX so far. There are a lot of to-dos on our list. We made it available last month, so that one we can check off already. It's of course available on GitHub under the Apache 2.0 licence, so do whatever you want with this kind of software. We also want to invite people to give jAtlasX a try ‑‑ that's what I'm here for. And there are things which are not yet checked. We want to do a single measurement with multiple probes: right now, if you want to have 10 probes pinging or tracerouting a single destination, you have to do 10 measurements, but RIPE Atlas supports multiple probes per measurement, so we want to support this in the future. Of course there are multiple additional measurement types out there ‑‑ DNS and HTTP, just to name two. We are going to support those if we see a need for them at DE‑CIX. If you see a need for them for your use case, just go and contribute to the project and implement them the same way as we did with the traceroute, and we are all good to go.

I think the last one is not particularly a to-do for us; it's more for you. So if you don't host a probe yet, go out there, talk to the RIPE guys ‑‑ they are pretty nice ‑‑ and get one for your network.

So that was it from my side. I'm happy to answer any questions or comments, and feel free to contribute.

MARTIN WINTER: Any questions? No, okay. Thank you very much.


Next up we have Peter, giving us an update on what's going on on the OpenBSD and OpenBGPD side.

PETER HESSLER: I am with the OpenBSD project. We just had our 20th anniversary in October and released our 5.8 release, which we have been doing every six months for pretty much the entirety of the project's time frame. Everything that I'll be mentioning is either in the 5.8 release already or has already been committed to ‑current and is available in the snapshots.

So there is quite a bit of OpenBSD used outside of OpenBSD itself. Almost everyone who connects from one terminal session to another terminal session uses OpenSSH for that. Our PF firewall is used in almost everyone's laptop here ‑‑ all the Apple devices and iOSes; Apple just added it to iOS 9 and to the newest release, 10.11, of OS X. And a large part of the Android C library is based on OpenBSD.

So, other products that we have include OpenNTPD, which is a time-keeping daemon. It's simple, it's accurate, and it's currently CVE-free: we've not had a single CVE in the entire history of it. We are also able to use TLS constraints to avoid third parties trying to attack the protocol.

We have a number of routing daemons; this includes BGP, LDP, OSPF versions 2 and 3, EIGRP, which was recently added into ‑current, and then older ones in case you have very ancient networks, like RIP and routed.

OpenBGPD has been around for about 11 years now, and we have made quite a few improvements since the last time many of you have touched it.

So, for those of you who are not familiar with it: we have pretty much all the features you need, and many of the features that you want we consider to be ready and available for you.

As far as scaling goes, we handle hundreds of peers quite easily. At my work we have 300-plus sessions in addition to the route servers. We are capable of handling many, many full feeds: in our testing we have gone to over 8 million prefixes installed into the RIB, and on our network at work we have over 1,500 next hops installed into the FIB.

A lot of people talk about the OpenBGPD config file. We have templates, macros and groupings available to simplify your configurations. Here is a very simple example. In the grouping you can define the things that are equal for all of the neighbours; then you see here that for one of the neighbours we override the max-prefix. At the bottom we have some simple filters: we allow announcements only where the next hop is set to the peer's IP address, or, if the next hop is set to this other address, that is possibly defined as your blackhole route. And we also allow you to do quite selective filter matching.
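From memory, a grouped configuration of that shape looks roughly like the sketch below. The addresses, AS numbers and community are invented for illustration, and bgpd.conf(5) is the authoritative reference for the exact syntax:

```
# things that are equal for all neighbours go on the group
group "ixp-peers" {
        remote-as 65001
        max-prefix 1000
        neighbor 192.0.2.1 {
                descr "peer one"
        }
        neighbor 192.0.2.2 {
                descr "peer two"
                max-prefix 5000         # override the group default
        }
}

# simple filters: accept the peers' announcements, and divert anything
# tagged with a (hypothetical) blackhole community to a blackhole next hop
allow from group "ixp-peers"
allow from group "ixp-peers" community 65001:666 set nexthop blackhole
```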

As an edge router, it works quite well, and we use it in production at our site: we have a multi-vendor setup with OpenBGPD and Juniper routers, and we use it at many of our internet exchange points as well as with some of our transits. As a route server, it is still in use in some places; a lot of IXPs used to run it in production, but many of them switched over to other implementations, mostly to BIRD. We recently fixed quite a few things, and now we can say that it works quite well as a route server.

As I said, we fixed a lot of bugs: a lot of crashes, some memory constraints that have been fixed, and quite a few of the performance problems that have been affecting people over the years.

The big thing that was recently committed, about two weeks ago, is that we fixed the performance of the filtering. In a torture-rule-set case, we took a large IXP's route server configuration and added the full IRR rule set from Hurricane Electric, for a total of 561,000 rules, and we sent over 60,000 prefixes. From starting to open the session until all the routes were installed into the RIB, the time went from 35 minutes down to 30 seconds. So, as you can see, there was a minor speed improvement there, and this was just a first run of trying to make it faster; there are quite a few things we can do in the future to improve the performance. If we took just the pure IXP configuration that they had, then it went from 30 minutes to about five seconds for a full transfer and convergence.

PF is our packet firewall. It does stateful firewalling; if you want slower performance, you can turn off the stateful protection. It has all the usual filtering options that people want. It supports NAT in pretty much all the directions you need: 64, 46, 66, 44, 666 if you are feeling evil ‑‑ pretty much anything. It is also rdomain-aware, which is our implementation of VRF-lite. On top of the PF engine, with the filtering and load balancing, we have software called relayd, which can act as an off-loader if you would like.

All the interfaces, including the virtual interfaces and clonable interfaces, are intended to be real interfaces: tcpdump just works, link state detection just works, and everything that you would expect from a real driver ‑‑ for example bge or em or ix ‑‑ will just work for you.

We also have a number of virtual interfaces ‑‑ carp, pflow for NetFlow v5, svlan, etc., etc. ‑‑ quite a few of these protocols are available.

We support a number of 10-gig hardware interfaces, and we can do 8.5 gigabits of stateful packet filtering on them. We recently crossed the 1 million packets per second routing plateau. And we are just beginning to work on making the entire network stack SMP-safe, so those numbers are mostly with the big-locked kernel, which is actually very close to what Linux can do in a single-CPU, single-lock situation.

The work on making it so is ongoing, and I want to say a special thank you to Oracle for helping us by contributing quite a bit of code to make PF SMP-safe. Hopefully this will be portable, and every operating system that is using PF will be able to update to a more recent version, get the features that we have added, and keep the performance that they desire.

These are our services, if you want to have a look at them in detail; check out the slides or check out the OpenBSD web pages. We are working on massively unlocking the network stack. We are in the process of adding a new hypervisor, so we can do virtualisation based on OpenBSD, which also works well. We are also adding BFD support, and we are looking at the DE‑CIX-proposed draft RFC for doing BFD. And I have just run out of time.

So, are there any questions?

AUDIENCE SPEAKER: Thank you for the presentation. I actually had a question for the room, because I had a chat with Peter as well, and as many of you might know, the RIPE NCC Board has, basically since last year, been investigating supporting some Open Source projects which are good for the whole community ‑‑ one example was Cryptech, which was done last year ‑‑ and I suggested to Peter to also submit a proposal to the Board, and most probably they will talk to the membership. But just to get a feel from the room: who thinks that the RIPE NCC should support this toolset? Can I see a show of hands? Thank you very much. I think we have a Board member present. Thank you.

MARTIN WINTER: Thank you very much. So, we have come to an end. By the way, for the new ones here: use your chance at the RIPE meeting, talk to the different people. We have Peter here ‑‑ he can answer questions about OpenBGPD ‑‑ and Andrei can talk about BIRD. Use the chance here at the meeting; talk to us.

CHAIR: It's amazing how many new people we have currently; it's great we have a choice. I have three more closing remarks. First of all, you know, I forgot to mention at the beginning: we called for candidates for a chair position for the Open Source Working Group, and nobody volunteered, so we continue for the next year. We will do the same call, you know, at the same time next year, so if you are considering becoming a chair of this Working Group, come to us; we can discuss it freely. We are very open to that.

And two more things. The PC elections are still running, so if you, you know, would like to, please consider it; it's easy ‑‑ the voting link is easy to find on the RIPE 71 web pages. So please go there and vote.

And the last announcement, which I was asked to make by Benno and the PC Chairman: there are still some slots for the Friday lightning talks, so if you are considering giving a lightning talk, please contact the PC, and I'm sure they will get back to you.

So that's all from my side.

MARTIN WINTER: That's it. Thank you very much.