Archives

These are unedited transcripts and may contain errors.


Plenary session on Tuesday, 17 April, 2012, 11 a.m. to 12:30 p.m.:

BRIAN NISBET: Good morning, ladies and gentlemen. I would like to welcome you to the second plenary session of this morning, Tuesday of RIPE 64. We have a number of talks this morning, largely around BGP, about route validation and certification and things like that. Myself, Brian, and Osama will be chairing this session this morning. First up we have Joseph. Todd has been telling me you haven't been rating the talks. Todd can see me when I am standing in front of him; he couldn't manage it yesterday. But rate the talks: please do log into the RIPE site with your RIPE Access password and rate all the talks, because the better the information we get from that, the better we can plan future RIPE meetings. So, Joseph Gersch will speak about BGP route origin validation.

JOSEPH GERSCH: Good morning. I have been asked to present a technology overview of a new proposal called Rover. Rover is an acronym; it stands for route origin verification. It's a method for storing BGP route origins in the reverse DNS so they can be verified by routers as prefix announcements come in.

This is a new proposal. Its basic idea is to help protect against IP route hijacks; we all know about those. They usually happen accidentally, because someone fat-fingers something in a router, but they also happen maliciously, when people try to steal a network address block from someone else who is the legitimate owner.

This proposal was made by four people, and it was recently presented at the IETF in Paris a couple of weeks ago. The four people are myself; professor Dan Massey from Colorado State University, who is one of the authors of DNSSEC; a colleague from the University of California who is well-known in routing circles; and Erik, who works for Verisign Labs.

The technology was proposed and discussed first at the Quebec IETF and formally introduced with draft documents at the Paris IETF. It is a complementary technology to the RPKI; it is not meant to be a replacement or an alternative, it's meant to live side by side with it. There are some similarities and there are of course some differences. There are two basic components to the technology. One is publishing: how do you publish a route origin in the reverse DNS? The other is verification: once all of the data is there for an address block, how do you verify, at the router itself or through some other application, whether an announcement is correct and should be accepted or rejected?

The design model is a classic hourglass design. At the bottom, we have all the foundations and protocols that already exist. We figured that DNS is there: it's a large worldwide distributed database for all practical purposes, in which people can store information. It's got redundancy and resiliency, with secondary servers placed around the world, and it's protected with DNSSEC. It can be updated, so the data can be near real-time if you change your data.

At the neck of the hourglass we have the actual Rover access methods. These are just a few small things: a data naming convention (how do you name an address block in the reverse DNS?), how you publish records for an origin, how you authenticate it with DNSSEC, and then, since DNS doesn't always return an answer, it's a best-effort retrieval system, so what happens if you do or don't get the data. At the top are the different applications. It's meant to enable a set of tools, building blocks, components, whatever you want to call them, because ISPs differ in their operational practices from ISP to ISP: what Level 3 does is not the same as what Deutsche Telekom does or what another ISP does. Everyone has a different operational procedure. Some people like to just take the IRR, the Internet routing registry, and check it against this data; an application can be written to ask, is the IRR data reasonable or stale? This was discussed a bit at the BoF last night. Some people want to build prefix filters: once a night, run an application and load the result into the router the next day. It can also be near real-time data: you can have an interface to the RPKI-RTR technology so routers can look at a route and accept or reject it, or send notifications to an operator. As prefix announcements come in, a tool can be listening, take each prefix, check it against the data from the live DNS and say: this one is good, here is a bad one, this one is definitely invalid, send a notification, and you figure out what to do about it. It can be a complete set of different building blocks that are customised to whatever the ISP or operator intends to do with it.

So the first process is: how do we publish route origins in the reverse DNS? This takes a couple of steps. The first step is how you name an address block, a CIDR address block, in the reverse DNS. Today the reverse DNS basically has complete host addresses: we know how to give the complete reverse address for a host, but there has been to date no real method to specify a /16 or a /18 in the reverse DNS. So the first proposal and the first Internet draft was a naming convention that says how you can specify, for example, a /18 block. It's fairly straightforward. If you have an address block like 129.82.128.0/18, you will notice the name uses the usual reversed 82.129, but here is a new letter, "m", which means mask: I am switching to binary mode, and the number of digits afterwards tells you the prefix length. If there is just one digit it would be a /17, two digits a /18, and as you add more, a /19, /20, /21. You are switching from octet mode to binary mode, so this is the equivalent of 128 written backwards in binary. That is how you can have a DNS name corresponding to this address block, and that DNS name can be just like any other name: a name for a set of resource records in the DNS. Once you have a naming convention, you then need the ability to store records related to it. By the way, this is a generic naming convention; it can be used for many applications, not just routing. You could put location records there: where is the geolocation of this particular address block? When we proposed this at the IETF, we had a number of people come up to the microphones and ask us, could this be used for my application which does X, or my application which does Y? We said, if you have records that make sense at this particular name, which corresponds to this particular address block, then yes: it's definitely generic, not just routing-specific. It's a CIDR address block naming mechanism.
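To make the convention concrete, here is a small Python sketch of the naming scheme as described in the talk: reversed octets, an "m" label marking the switch to binary mode, then the remaining bits written backwards, one per label. The exact label format is specified in the Internet draft; treat this as an illustration of the idea rather than the draft's literal syntax.

```python
def rover_name(prefix: str, length: int) -> str:
    """Encode an IPv4 CIDR block as a reverse-DNS name, following the
    scheme described in the talk: reversed octets, an 'm' label, then
    the remaining bits of the prefix written backwards."""
    octets = [int(o) for o in prefix.split(".")]
    full, extra = divmod(length, 8)
    # Labels for the complete octets, in the usual reversed order.
    labels = [str(o) for o in reversed(octets[:full])]
    if extra:
        # Take the leading 'extra' bits of the next octet...
        bits = format(octets[full], "08b")[:extra]
        # ...and write them backwards, one bit per label, before 'm'.
        labels = list(reversed(bits)) + ["m"] + labels
    return ".".join(labels) + ".in-addr.arpa"
```

For an octet-aligned prefix such as a /16 the name is just the familiar reversed form (an assumption here; the draft defines the exact handling), while 129.82.128.0/18 becomes "0.1.m.82.129.in-addr.arpa": the two bits "10" of 128, backwards.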
Now, for routing, we want to have two new record types. In a zone you would create an RLOCK record, which means: for my zone, for this particular address block, I am opting in; I would like to say that for any route that comes in, you should check and see if there is an SRO record, a secure route origin. The SRO record gives the authorised AS number for a particular address block.

More records can be developed as this concept evolves. Basically this can prevent a route origin hijack and, because of the RLOCK record, a sub-prefix hijack. So the two Internet drafts shown here are one for the naming convention and one for the record types.

As an example of what a zone file could look like: this big block here is basically trying to show an address block which is a /16. It's for Colorado State University, so it's 129.82.0.0/16, and there is one announcement they make at the /16, and then they have four others for four /18 blocks. They would like to authorise these in the reverse DNS. The zone files already exist, and here are the new records that would go into that zone file to specify the route origins. First of all the RLOCK record, and then the SRO records: their AS number is 12145, so they say at the beginning, for the /16, I authorise AS 12145, and then here are the "0.0.m", "1.0.m" and similar names corresponding to the four /18 address blocks, each authorising the same AS. It's a fairly simple convention. It sits in the reverse DNS, and anybody can query for it; it's just data at that point.
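As an illustration, the additions for this example might look roughly like the following zone fragment. The RLOCK and SRO names come from the talk, but the exact record syntax and binary labels are defined in the Internet drafts, so this is a sketch, not the drafts' literal format.

```zone
; Sketch of 82.129.in-addr.arpa additions for 129.82.0.0/16 (AS 12145).
@        IN  RLOCK            ; opt in: announcements below must match an SRO
@        IN  SRO   12145      ; 129.82.0.0/16   originated by AS 12145
0.0.m    IN  SRO   12145      ; 129.82.0.0/18
1.0.m    IN  SRO   12145      ; 129.82.64.0/18
0.1.m    IN  SRO   12145      ; 129.82.128.0/18
1.1.m    IN  SRO   12145      ; 129.82.192.0/18
```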

Now, once the data is there, what do you do with it? That depends on you. If you want to just build route filters or verify IRRs, go ahead; as I said, you can notify the operators. But basically, for real-time verification, as announcements come in we can listen to a BGP announcement and do a DNS lookup. One of the questions you might ask is: is this a huge load on the DNS? The answer is, we have done a number of experiments: we have taken an empty route table, loaded in the entire table of about 400,000 routes, and looked at how many DNS queries happen against a cold cache in the DNS. Once the cache is warmed up there is a time to live, which could be anything from hours to days, and it actually is pretty good: it fans out to about 50,000 DNS servers and the load is very tolerable. We have talked to people at ARIN, for example, who said they can certainly support this load on the DNS, and once it's cached it's going to work well. When an announcement comes in, very similarly to what RPKI does, we can mark it as valid, invalid or, if no data exists, unknown: just accept it, we can't give you any judgment call on it. If it's invalid we could notify the operator or interface to the router to make adjustments. Other tools could be built that make use of this data.
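The accept/reject logic described here can be sketched as follows. The `lookup` function is a stand-in, not the project's actual API: it represents a DNSSEC-validated query that returns the set of authorised origin ASes for a prefix, or None when nothing could be retrieved.

```python
def classify(prefix, origin_as, lookup):
    """Classify a BGP announcement against route origins published in
    the reverse DNS, mirroring the valid/invalid/unknown outcomes
    described in the talk.  'lookup' returns the set of authorised
    origin ASes for the prefix, or None if no data was retrievable
    (best-effort DNS)."""
    authorised = lookup(prefix)
    if authorised is None:
        return "unknown"   # no data: accept, routing works as today
    return "valid" if origin_as in authorised else "invalid"
```

For example, with published data `{"129.82.0.0/16": {12145}}`, an announcement of that prefix from AS 12145 classifies as valid, the same prefix from any other AS as invalid, and a prefix with no published data as unknown.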

Some people have asked us: aren't you building a cyclic dependency, with a low-level protocol like BGP relying on a high-level protocol like DNS? That sounds absolutely backwards. The answer is: if we had a firm, hard dependency you would be absolutely correct, we would be crazy to make this proposal. But the fact is, DNS in the Rover proposal is a best-effort retrieval. If you cannot reach the DNS, the route is marked as unknown and the Internet works just like it works today. It's not been verified, but you are going to get the route through. Could it be bogus? Possibly. As we get the data, and usually the data is cached, we get to a state where we can see whether the routes are correct or not, so it converges and reaches stabilisation. As long as you are using a best-effort, fail-safe method (in fact we called it the Hippocratic oath that doctors take: first, do no harm), you can pull the plug on a Rover device and Internet routing will happen just as it does today. It's just not marking anything as invalid. We can send you notifications or adjust the routers: get the data, look at it, make the checks. And if an application doesn't get the data, for example it tries to get a route origin for some address block and gets nothing back, there is no reason whatsoever the application can't go back and retry later and later with an exponential back-off, so this eventually leads to convergence. The default is that routing works as it works today, with this verification on the side.
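The retry-with-exponential-back-off behaviour described above can be sketched like this. The `query` callable is a stand-in for the DNS retrieval; names and defaults here are illustrative, and a real client would also add jitter and actually sleep between attempts.

```python
def fetch_with_backoff(query, attempts=5, base=1.0, cap=60.0):
    """Best-effort retrieval with exponential back-off, as described in
    the talk: if the DNS cannot be reached, the route simply stays
    'unknown' (None) and routing behaves exactly as it does today; the
    application retries later.  'query' returns data or raises OSError."""
    delays = []
    for attempt in range(attempts):
        try:
            return query(), delays
        except OSError:
            # Exponential back-off with a cap; delays are recorded here
            # rather than slept, to keep the sketch deterministic.
            delays.append(min(cap, base * (2 ** attempt)))
    return None, delays   # still unreachable: leave the route 'unknown'
```

The key design point, matching the "first, do no harm" principle above, is that failure never rejects a route: the absence of data degrades to today's behaviour.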

The current status of the Rover project is that we have a number of people interested from our proposals. We have been working on this since the Quebec IETF, about six to eight months ago, and a test bed has been put together. There is a website called rover.secure64.com; I invite you to look at it, and I will be here for the next three days, so if you want to see a demonstration I would be happy to show you how this test bed works. Basically, and I will show a couple of screenshots, we create a shadow zone for in-addr.arpa. You can build a zone file on this test bed and it will be published in the shadow zone, so not the real in-addr.arpa but a shadow zone. Once it's published we can do checks: let's make a route announcement for 129.82.0.0/16, and let's give it the real AS number, or let's give it a fake AS number. It should mark it as bogus or as valid. So you can try that out for yourself; I would be glad to do the demonstration later.

Also, in addition to this test bed, which is alive and well, we have several early adopters: Level 3, and large and small ISPs in the United States, and we are looking for early adopters in Europe as well. They are actually in the process right now of publishing their route origins in the real in-addr.arpa. They have been very enthusiastic about this approach; they want to have RPKI and this live side by side, and they are looking at how the two could work together, but they are publishing the data and signing it with DNSSEC. And if you ask about DNSSEC: since we are not using certificates (as I said, there are some similarities but many differences as well), we are not putting certificates inside the DNS; we are just publishing SRO statements, and these are protected by DNSSEC. So there is a trust anchor at the root, the chain goes down to in-addr.arpa, which is signed, and the chain of trust continues to the next level down, which is your own zone file. Everything is very well secured with a publishing technique that is already out there.

Here are a couple of screenshots of the test bed. If you have a particular address block you want to look at, you can type in the website URL, then type in an address block or AS number, and it will present to you the announcements it has found. We watch the BGP announcements from a set of route collectors, about 38 of them scattered around the world, and we build a database of recent announcements. We say: I don't know if this data is correct or not, but I have at least seen that, for example, AS 6582 has an announcement for this /17; I have also seen one from Cogent, from Level 3 and from the Rocky Mountain Internet Exchange. Here are some I found at /24; there are some announcements. On this test bed you can say, I would like to authorise these announcements, and then you can create a brand-new authorisation, delete authorisations, and view and save the zone file. When you do that (I thought I had another screenshot, I am sorry) it creates a zone file, which will be published in the test bed, or you can cut and paste it into your own real in-addr.arpa zone files, and DNSSEC signs it. The test bed itself is hosted under the Secure64.com domain. Secure64 is a DNS appliance maker; we do secure DNS and DNSSEC automation, and since we have this domain name we put the shadow zones there, and we have DNSSEC tools that sign them. So all the data that you submit in this test bed will be DNSSEC-signed: we do the publishing of the data with DNSSEC, and when we retrieve the data we compare the signatures and make sure the DNSSEC matches and that the response that was received is correct.

Then, that is it, really. I am going to be here, as I said, for the rest of the week if you would like to find me. I have got a pretty bad cold, so look for the guy with the red eyes and snuffles. I have been asked to be at the DNS Working Group tomorrow to explain anything that comes up on the naming convention. Shane Kerr has asked me to be at the IPv6 Working Group to explain how this naming convention applies to IPv6 names; it's basically the same convention except at nibble boundaries instead of octet boundaries, and geolocation and other things could be used in an IPv6 environment as well. Other than that I am done; I hope this is of interest to you. It's a new proposal and something that is being deployed by early adopters now as we experiment. It is only in the draft stage; it is not an accepted RFC at this point, but we are working on proposing these technologies to the IETF. At this point I would like to invite any and all questions.

AUDIENCE SPEAKER: Geoff Huston, APNIC. You said it was a new proposal; in actual fact I seem to remember reading a draft to this effect that dates back to about 1998, and indeed there is this common perception around the Internet...

JOSEPH GERSCH: You are correct.

GEOFF HUSTON: ...that protocols and standards are nonsense because we can chuck everything into the DNS. I have got a T-shirt that says that, because the DNS is so wonderful. But in this particular case I think there are a couple of problems. We certainly went down the path of the RPKI in the full knowledge that there was a DNSSEC approach, but we came across a number of problems. In your terminology, if someone has a /16 and delegates a /17, who then delegates a /18 to an LIR, how do you know that the middle thing should be looked at? Because in your naming model, that hierarchy of allocation gets destroyed, and all of a sudden, in the zone delegations that you need in DNSSEC, you are forcing relationships between parties who have to sign across, where they had no natural business relationship, because that got occluded. That was always the weakness in the previous attempts to do this, and looking at the drafts you have produced now, that is still a glaring omission.

JOSEPH GERSCH: Two things. Yes, I would like to acknowledge that: I said Rover is a new proposal in the sense that it has this new naming scheme. You are correct about 1998; in fact there have been several proposals over the last several years to use the DNS, and these were acknowledged in the drafts, and I thank you for bringing that up. On the chain of ownership: we are not trying to show provenance. However, if you have been delegated from the /16 to the /17 to the /18 and you actually have a delegation for that address block, then you own that reverse DNS space, in which you should be able to publish things. We are not trying to show who the owner is; if you have been authorised a delegation, you own that address block.

GEOFF HUSTON: I am not sure I am getting through. In the /16 zone, the ultimate block owner, the /18, has to hand the key over to do the DS record, but because it's indirect, the /16 zone owner has no knowledge; it's just this random piece of e-mail saying put in a DS record. The reason why we went down the RPKI path was that after many years of exploring all kinds of options, we found that the only way we could nail down accurate attestations of who was the current controller of every block was to explicitly expose the whole allocation hierarchy, from RIR to perhaps NIR to LIR to LIR. And that is a key part of this that I am not sure has actually been folded accurately into the direction that you are looking at.

JOSEPH GERSCH: I know you have brought this up with Dan Massey in the past and we have not had enough time to discuss it face-to-face; let's see if we can have this conversation. We believe it does work; it may or may not do what you say, but it meets its current design objectives. I would like to take this offline between Dan, myself and you.

AUDIENCE SPEAKER: Warren Kumari, Google. So this only works for origination, right? There is no easy way to extend it to path validation as well.

JOSEPH GERSCH: We don't do full path validation. However, on the current website we are doing some experiments in the test bed where we added not just the route origin but, if you have a transit relationship, what the last hop would be. We tell people: don't publish peer relationships, because that is private. If the origin and last hop are in there, it turns out that, as you collect all the information, you can build up a table of these things and you could actually do some partial, though not full, path validation. We don't intend at this point to figure that out, but we have discovered that we can look for a weird path: for example, in the Telstra/Dodo case that happened about two months ago, the path went through a small transit network that it should never have gone through. You can find paths like that with this technology. Also, you can now check the route origin and the last hop, so you have got two checks. The average path length is usually four; there can be long paths, I have seen paths that are 13 long, but the average is four. If you can cover hops one and two, that is pretty good, you have raised the bar: stop route origin hijacks, perhaps last-hop hijacks, and maybe in future takes on this we can use the data to spot weirdnesses within the path itself. But not full path validation, no.

RUEDIGER VOLK: Deutsche Telekom. I think I have two pretty separate sets of questions. Let me first start by remarking that in RPKI we have a design that carefully provides that any relying party, meaning the guys using the secured information, the readers, can get the full information in a reliable fashion.

JOSEPH GERSCH: This is why I say this is a complementary technology because you do some things we don't and vice versa.

RUEDIGER VOLK: In your slides there are some remarks where I am not completely sure whether you are saying this is something we can do, or this is something someone could do, or this is something somebody might actually be able to figure out. Looking at the question of getting reliable sets of information: in your hourglass slide you were mentioning prefix lists. Is that something where you say you can do it right now, or is it, like I think I heard in Paris, well, somebody might figure out sometime how to do it?

JOSEPH GERSCH: This is a set of blocks that you can build tools with, and several people have experimented with that. For example, a tool has been written which takes the IRR data, and using that they have been generating prefix lists. What they are doing now is saying: from that IRR data, let's get a route set of information, do a DNSSEC-validated lookup, and they actually found that there were thousands of stale pieces of information in the IRR. So that is an application someone has written.
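The IRR staleness check Gersch describes could be sketched like this. It is illustrative only: `dns_lookup` stands in for retrieving the DNSSEC-validated origins published in the reverse DNS, and the data shapes are assumptions, not the tool's actual interface.

```python
def stale_irr_routes(irr_routes, dns_lookup):
    """Flag IRR route objects whose origin is contradicted by what the
    resource holder has published in the reverse DNS.  'irr_routes' is
    a list of (prefix, origin_as) pairs from the IRR; 'dns_lookup'
    returns the set of authorised origins for a prefix, or None if
    nothing is published (in which case the IRR entry cannot be called
    stale, only unverified)."""
    stale = []
    for prefix, origin in irr_routes:
        authorised = dns_lookup(prefix)
        if authorised is not None and origin not in authorised:
            stale.append((prefix, origin))
    return stale
```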

RUEDIGER VOLK: We know there is lots of rubbish out there, and there is lots of rubbish in many databases. My question is: do we have, at the moment, anything in your design that allows us to get all the relevant information for a certain address range in a reliable fashion, so that we can use that information for computing something like a prefix list?

JOSEPH GERSCH: The way it works today is that there is data published in the reverse DNS. Assuming the data is there, we have to get a route announcement and we check against that, building a table as routes come in; from that, you are either comparing against IRR data or route announcements and building a list. I see you are confused by that answer.

RUEDIGER VOLK: What does IRR trash have to do with your brand-new system?

JOSEPH GERSCH: I am saying different ISPs are trying different techniques.

RUEDIGER VOLK: Yes, and there are people trying to drive on the wrong side of the road. Usually, they don't live long, but...

JOSEPH GERSCH: I understand that you may disagree with this technology.

CHAIR: Unfortunately we don't have enough time to go into more discussion.

RANDY BUSH: He has a simple question that is not answered. The point is, you cannot traverse the data and collect it...

JOSEPH GERSCH: You are correct. You cannot traverse the data.

RANDY BUSH: You cannot build his filter list. The answer was one word: No.

RUEDIGER VOLK: Second set of questions...

CHAIR: If it's a simpler question, great, please go ahead. If it's not, let's make it off?line.

RUEDIGER VOLK: OK. You are proposing something to give additional security information to another system that is covering the same objects.
Question: Do we have to expect that inconsistencies between the two systems can happen? I think that is a rhetorical question.

JOSEPH GERSCH: A what question?

RUEDIGER VOLK: A rhetorical question.

JOSEPH GERSCH: Of course. To answer your question: we think some people, for example those with legacy address space who do not want to have a certificate from ARIN or RIPE or IANA, would publish in here, and that would not be conflicting data; it's completely independent, they would not overlap. Where there is overlapping data, there is the possibility, as in any system with overlapping data, of inconsistencies, and then you have to make a decision about what to do with them.

RUEDIGER VOLK: Would it be considered the responsibility of the proposer of a second system to actually describe how inconsistencies should be dealt with, and actually prove that they do not diminish the security properties of the first system?

JOSEPH GERSCH: I am not sure how to answer that question; I believe that is a topic for research. I will just have to defer that for later. But thank you for your questions.

CHAIR: Let's thank Joseph.

(Applause)

CHAIR: Our next presenter is Alex Band from the RIPE NCC.

ALEX BAND: Everybody can hear me? I work for the RIPE NCC, and I am here to give you a little update on where we are and where we are going with the certification project. It's been an interesting six months, because at the last RIPE meeting, at the general meeting, all of you, all of the members, got a chance to vote on this particular project. And the vote was close. And the message was really clear. You said: OK, you are going to proceed with this, but we have some concerns. We think there is a need for BGP origin validation; we think there is a need for the ability to offer resource certificates to members that give them validated proof of holdership. However, we, as network operators, don't want to lose our autonomy. And also, you are setting up a certificate authority; we are not so sure that is completely secure, you could get hacked into. Lots of other concerns were raised. But in general, you said: OK, you can proceed with this project, but proceed with caution.

So all of the concerns that were raised essentially fall into three categories, the first being operator autonomy. This was a long discussion and quite a hot topic, because essentially you were afraid that some law enforcement agency could walk in with a gun and say: you have to revoke this certificate, or you have to tamper with this certificate tree, or you have to change this ROA, and nobody can detect that the change was made. How do you prevent that happening? What are you going to do?

The second concern essentially revolved around security: how secure is your system? You enable a hosted RPKI solution through the LIR Portal, which is protected with a user name and password; how secure is that? And there were also concerns in terms of resilience. All of the data from the RPKI system is published in a repository. What happens if that repository gets denial-of-service attacked, for example, or gets hacked into? What if the data becomes unavailable? Because, when using RPKI for BGP origin validation, this in the end affects your routing and your ability to reach networks.

So, to summarise: you said, OK, the RIPE NCC can continue with this. You can offer a system that gives people validated proof of holdership, and we like the idea of having something that provides BGP origin validation, which also gives you a stepping stone towards path validation, because ultimately that is one of the things you can do with this system. However, in providing such a system, the benefits must outweigh the potential risks: diminished operator control of BGP routing, and the danger that a failure of the RPKI would result in an unreachable network. If you can tackle those things, then we are pretty good.

Now, this may be new to all of you or some of you so I just want to give a quick recap of what this system actually offers:

So, resource certification. What we offer is linked to registration: in our registry, in our database, we essentially have a set of information recording which resources a particular member holds. What we are doing is putting those resources on a digital X.509 certificate, giving you validated proof of holdership. So you can make statements, you can make claims, using that certificate: as the holder of this certificate, I make claim X or Y. This certificate is renewed every 12 months and linked to the membership. Why? Because that gives us the information to know you really are who you say you are; we have contact with our members at least every 12 months. What we did is give the certificate a lifetime of 18 months and renew it automatically every 12 months.

So, ultimately, it's just an enhancement of our registry.

And this is how the structure is put together. We have a root certificate, an X.509 certificate holding all of the resources that were handed to the RIPE NCC by the IANA. From this root certificate we can issue a child certificate to a member, listing all of the resources they hold, with the member running their own certificate authority. And these LIRs can also issue child certificates to their customers, giving the customer a certificate authority with the resources that they are using. So this is the tree that is being built, and using these certificates you can do something like BGP origin validation.

So there are practically two ways of doing this. You can download an open source package, generate an X.509 certificate yourself, connect to the parent system that the RIPE NCC runs and get your resources pushed onto your certificate. So you are in complete control of the certificate, and the only thing that we are locking down essentially is which resources are listed on it. Running something like this yourself is a considerable investment that people have to make, especially at this stage of the game, because it's not like you get a lot of returns: you can run your own certificate authority and install the software, but actually using that on a day-to-day basis and maintaining it can be a little more work than people are willing to put in at this stage. So that is also why we provide a hosted system, in which all processes are secured and which sits within our LIR Portal: you log in with your credentials and you can generate a certificate, all of the key rollovers are done for you, and essentially the only thing you have to worry about is maintaining your data. None of the crypto work is something you have to worry about. We provide a web user interface to do all of this. Using the LIR Portal, or the software that you run yourself, you can create route origin authorisations, which are essentially a statement: I authorise these autonomous systems to originate these prefixes of mine.

Only the legitimate holder of the address space can create a valid ROA; that is the power of the system. And this is something that a lot of people forget, because ultimately a ROA says something about a route announcement. A valid ROA is merely something that is cryptographically valid: you can trace it back to the parent certificate. What it says about your route announcements is that each one is either valid, invalid or unknown. A valid route announcement, RPKI-valid in this meaning of the word, means: yes, I found a ROA, and this route announcement was authorised by the legitimate holder of the address space. If a route announcement is invalid, it actually means: yes, a statement was made about this particular prefix, but it didn't authorise that AS, it authorised another AS, so this is a hijack; it's being originated from an invalid origin. And unknown essentially means no statement has been made about this route announcement, which is currently the case for the vast majority of all route announcements in the system.
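The three outcomes Alex describes follow the standard RPKI origin validation model. A simplified sketch, representing each ROA as a (prefix, maxLength, ASN) triple (the maxLength attribute bounds how specific an authorised announcement may be); the example prefixes below are illustrative only:

```python
import ipaddress

def validate(announcement, roas):
    """Route origin validation as described above.  An announcement
    (prefix, origin_as) is 'valid' if some covering ROA authorises its
    origin AS at this prefix length, 'invalid' if covering ROAs exist
    but none match, and 'unknown' if no ROA covers the prefix at all."""
    prefix, origin = announcement
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in roas:
        if net.subnet_of(ipaddress.ip_network(roa_prefix)):
            covered = True                     # a ROA covers this prefix
            if origin == asn and net.prefixlen <= max_len:
                return "valid"                 # authorised origin and length
    return "invalid" if covered else "unknown"
```

Note that a more-specific announcement beyond the maxLength is invalid even from the authorised AS, which is what makes ROAs effective against sub-prefix hijacks.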

So we launched this on the 1st of January 2011, and I was under the impression that this would rival the adoption of DNSSEC and IPv6, so we'd have about 20 people enabling their CA by the end of the year. But then this happened: we are running up to almost 1,000 LIRs who said, yes, I would like a certificate, give me validated proof of holdership, and this BGP origin thing, I am going to have a look at that too. Over 700 ROAs have now been created, describing more than 1,700 BGP prefixes. Those prefixes range from /10s to /24s, so ultimately the total amount of address space that is covered is roughly in the area of three /8s worth of data sitting in the system.

Six months ago when I did this presentation, people were actually making a lot of mistakes; people were trying out the system and data quality was pretty bad. The number of invalid BGP announcements that were created actually exceeded the number of valids, so data quality was really, really poor and nobody in their right mind would base a routing decision on this data. So we have done a lot of work, and the other RIRs have done a lot of work, in education and contacting people and explaining to them where they made mistakes, and currently, in the system, there are over 7,000 BGP announcements that are RPKI valid and the number of invalids for globally visible prefixes is less than 500.

So, it went from roughly 50/50 to 14 to 1 valid to invalid, and those invalids could very well be intentional; they are supposed to be there and supposed to be seen as hijacks. So data quality has massively improved over the last six months.

So using this data, actually basing a routing decision on information like this: it's all in your hands, and this is what I would like to stress the most. In the beginning, I talked about the concern of law enforcement or some other body presenting us with a court order saying you have to change something in the certificate tree, but that doesn't mean that you automatically have to accept what is being said in the RPKI data set. Ultimately, using the data that is available to you is your choice. You are in the driver's seat here. You have three variables to play with: a BGP route announcement can be valid, invalid or unknown. That is it. What you do with that data is your choice. You can prefer invalid over valid for all I care. It's your choice what you do with it. You can create overrides locally. You are in total control. But we want to make sure that anything that is being tampered with, anything that happens within the system, any meddling, sticks out like a sore thumb, so you can be sure that whatever you do, you are basing your decisions on something that is reliable, something that is accurate.

So, we have taken a couple of steps in the RPKI validation software that the RIPE NCC also provides. First off, when you set up an RPKI validator and you start importing data from the entire RPKI tree, you can essentially choose to rely on any Trust Anchor you like. So you can configure the RIPE NCC as a Trust Anchor and get all of the data that sits under that tree, and the same for the other four RIRs, but if you would like to add an additional Trust Anchor with additional information you would like to trust, you are free to do that. That is your choice. We have also built white-listing and ignore filters into the RPKI validator. This is something you can do on your router directly, but to make it convenient for you, we have built this into the RPKI validator. So if you think, as an operator, there is no ROA in the RPKI system for a particular BGP route announcement but there should be one, then you can create one locally within the validation software and it will integrate into the rest of your workflow. If you have the feeling that a certain prefix is not to be trusted, may have been tampered with, or you think you know better and you don't really want to rely on the RPKI system for it, you can always choose to ignore it and say: for this prefix I don't really care what is being said in the RPKI system; I want to create my own local policy for it. And this is how that works within the RPKI validator interface. You can just say, OK, for this AS, I think there should be a ROA; there is none in the global RPKI system but I am just going to create one locally, and you can immediately see within the interface which route announcement now validates, which route announcement is affected.
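The whitelist and ignore filter described above can be thought of as a pre-processing step applied to the validated data before any announcement is checked. A minimal sketch, assuming a (prefix, max_length, origin_as) tuple format that is invented for this example:

```python
from ipaddress import ip_network

def effective_vrps(rpki_vrps, local_whitelist, ignore_prefixes):
    """Apply operator overrides before validation.

    rpki_vrps:       payloads fetched from the global RPKI tree
    local_whitelist: operator-created entries, treated as if published
    ignore_prefixes: prefixes whose RPKI statements are discarded
    """
    ignored = [ip_network(p) for p in ignore_prefixes]
    # Drop any global statement that falls inside an ignored prefix.
    kept = [vrp for vrp in rpki_vrps
            if not any(ip_network(vrp[0]).subnet_of(net) for net in ignored)]
    # Whitelist entries are appended so local policy always takes part.
    return kept + list(local_whitelist)
```

The resulting set then feeds the normal validation step, so a locally whitelisted entry behaves exactly as if it had been published in the global tree.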

In the ignore filter you can say: whatever RPKI ROAs are being made for 193/19, which is the RIPE NCC address space, I just want you to ignore them, just pretend they don't exist, right? Now, in giving you more operator control, we thought maybe we could expand on this idea. Because suppose word gets around on mailing lists or on IRC or somewhere within the community that these prefixes belonging to this registry may actually have been tampered with and meddled with, that something is wrong and this shouldn't be trusted. If word gets out, then in a fully deployed RPKI world, that would mean that everybody running this RPKI validator software would have to go in and manually enter a prefix and say, I am going to ignore this because I don't really trust it any more. So if 8,000 members have to do something manually, then the chances of that actually scaling and actually working are pretty slim. So we were thinking of a system to automate this. We could say, OK, what if we have independent monitors, maybe run by network operator groups, who keep track of this; if something goes wrong within the RPKI data set, something that may have been tampered with, they can publish a list of prefixes, and the RPKI validator could import this information, and if one or two or more monitors say I don't think this data should be trusted, then it can be automatically ignored within the RPKI validator, or it can trigger an alert: maybe you should have a look into this.
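The two-or-more-monitors rule sketches out naturally as a quorum over independently published lists. The feed format here, one collection of flagged prefixes per monitor, is an assumption; no such monitor protocol or format existed at the time of the talk.

```python
from collections import Counter

def quorum_ignore_set(monitor_feeds, quorum=2):
    """Ignore a prefix once at least `quorum` independent monitors flag it.

    monitor_feeds: one collection of flagged prefixes per monitor.
    """
    # set() per feed so one monitor cannot vote twice for the same prefix.
    counts = Counter(p for feed in monitor_feeds for p in set(feed))
    return {prefix for prefix, votes in counts.items() if votes >= quorum}
```

The quorum parameter is the "however paranoid you are" knob from the talk: raise it to require broader agreement before anything is ignored or an alert fires.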

So how would that look? We made a mock-up. The ignore filter field would stay the same: you could at all times enter a prefix and say, OK, I am not going to trust this, I just don't want to rely on any RPKI data for it. But you could also have a list of external monitors from which you import the data that they publish about things that shouldn't be relied on in the RPKI system. And you can say: if two or three or more monitors, depending on how paranoid you are, say this shouldn't be trusted, then it flags an alert or takes action.

So, some things you have to consider. If you start working with external monitors, you essentially also place your trust somewhere else, so if you start using these kinds of parties, they would need to have an open process, they would need to be transparent and impartial and community-driven, and that is why I think network operator groups would be pretty good for that, and it would be even better if they are outside RIPE NCC jurisdiction, for example, so law enforcement doesn't necessarily have any control over them. It also creates a possible attack vector: everybody is concerned about security for the RPKI system itself, but if you start working with external monitors then that would also create a potential attack vector. And we would have to build a prototype for something like this to see how we could get it working. It could be pretty simple: people could just publish a text file with a list of prefixes in it, and we could import it and add it to an ignore filter. We would like to explore this with a prototype.

So, that was operator autonomy; the other factors I would like to focus on are security and resilience, as well as, ultimately, the service expansion that we simply need to do.

Currently, in the LIR Portal, the security we have is user name and password, and in addition we have X509 identity certificates, so you can install an X509 identity certificate in your web browser and use that as additional authentication for the LIR Portal to access the system. Now, relying on a browser and X509 technology to do that is a bit hit-and-miss; it's not the most secure thing. We would like to have some other sort of two-factor authentication, but we need to take into account that this is something that needs to work for more than 8,000 members in 72 countries. We could do something like SMS with a code that you enter into the LIR Portal as an additional means of security, but I would also like suggestions from you on how you would like us to strengthen the security that we have, beyond simply having a user name and password.

Also, we would like to have audits, so that an independent party can prove that the certificate authority is running properly, hasn't been tampered with and is properly secured, and maybe even get certificate authority accreditation; that would be good.

Then, in terms of resilience: currently, if you look at our ROA repository that holds all of the RPKI data, essentially it's just two servers sitting behind one load balancer on one DNS entry with one IP address. That doesn't really scale, so we need some way of scaling this up, and we are looking into various methods of tackling that particular issue. This goes for several aspects: the hosted software, making sure the LIR Portal is available at all times from all places in the world, because you need to be able to manage your ROAs; making sure that it is resistant to attack, like a denial of service attack; and the non-hosted system, because if you are running your own RPKI software you need to be able to talk to the parent system. If that channel is under attack then essentially you can't do anything, either. And we have to make sure that you are always capable of getting data.

And then lastly, what we would like to focus on for the rest of the year is to incrementally add all of the other address space. When we launched this on the 1st of January 2011 we said we are only going to do provider aggregatable space and nothing else; we want to gain operational experience with this, have a look at how this pans out, and tackle more complex types of address space a little bit later on, and we have slowly progressed into doing that. For example, provider independent space that is marked as infrastructure is now also eligible for certification. Anything that is "allocated unspecified" is eligible for certification. And very slowly we will add the rest. For example, PI end users would also need some sort of way to access the RPKI system, so we need to give them access to the management interface so they can use the system as well.

Etc., etc.

So ultimately, this is the roadmap that we would propose for resource certification for the next year, running into Q1 2013. What we have currently been working on is providing a test system, a sandbox. We give you sharp tools to play with and a lot of rope to hang yourself with; you have proven that you are capable of creating thousands upon thousands of invalid route announcements, so maybe it would be a good idea if you could have a sandbox where you could play around and nothing would affect the production data set, but you could see the effects of what would happen in real life if you create ROAs like that. And that is something that is currently available.

Also, we made some improvements to our validator in terms of stability and interoperability with router communication, because it is also capable of talking to Cisco and Juniper routers that support RPKI; that is now also properly working. And lastly, we are currently working on a better user interface, because the current one for managing ROAs has proven to be pretty error-prone and we have had to do a lot of education and talking to members to make sure that they fix their ROAs and create proper ones; we would like them to do the right thing in one go. In Q2 and Q3: strengthen operator autonomy, so build a prototype for this monitoring infrastructure. In Q3 and Q4: work on security, so work on two-factor authentication and make sure we have some sort of periodic auditing set up. And running into Q4 and Q1 2013 we want to start working on resilience, because we feel we are going to hit the boundaries of scaling this system up; really, running it on a single IP and DNS entry is not going to suffice any more. And gradually, over the course of all of this time, we are going to add the different blocks of address space that are eligible to the system.

And I would really, really like your feedback on this. For information and announcements, follow the RPKI hashtag on Twitter, and go to the certification pages on ripe.net for more information. I am happy to take your comments or questions. Thank you.

CHAIR: Any questions? So please state your name and organisation and make your questions brief and intelligible to the audience.

AUDIENCE SPEAKER: From a telecommunications company in Switzerland. My question is: well, you stated the legal problem about the Dutch justice system being able to actually request something from the RIPE NCC. Maybe some LIR is in a country doing something that is legal in its country but maybe not legal under Dutch law, and the Dutch judge may think that you should actually help him. The second point I would like to make is: you said the LIRs are free to choose or to do whatever they want. This is also true with spam blacklists, but it doesn't work in reality, because the affected party has no control over how the others receive mail. So if you get blacklisted, you are basically done; it's very difficult to handle these problems. So I was thinking, and maybe this has been discussed before: why not distribute the root of the tree among all the Regional Internet Registries, and as long as you have a majority of positive answers, it counts as positive; as long as three regional registries say yes, this is valid, then you consider it valid.

ALEX BAND: Yes. This is definitely something that we have explored. At this stage of the game, a lot of the solutions that people have come up with extrapolate out to a fully deployed RPKI implementation, where everyone would be using it and all of the route announcements would be described. Setting up such a system, having mirrors of all the certificate trees or setting up an alternative certificate tree for all of the exceptions or anything that is being disputed, is tremendously complex to build, especially with the amount of resources that we have, and we feel, as the RIPE NCC, that this is maybe not an investment that is sensible at this stage of the game. So we are definitely considering this, but we think we'd be better off with some low-hanging fruit right now, doing some simpler steps and working towards more complex solutions if it really proves that this is technology that takes off and is widely adopted, because that is not something that we can assume right now. The solution that you propose would take an awful lot of coding and resources, and ultimately we are not really sure if it would pay off in the end. We are trying to be sensible.

MARTIN LEVY: Martin Levy, Hurricane Electric. You had, quite a few slides back, one line that you glossed over about choice of root.

ALEX BAND: Did I say that?

MARTIN LEVY: Yes.

ALEX BAND: Was it here?

MARTIN LEVY: No, back one. No. You mentioned it and you glossed over it so quickly, and this is a really key point; it's a subtly different question than was previously asked. Without getting into a legal conversation, the simple question is: there is only one choice for all of this tree to sit on, presently the single root certificate from the RIPE NCC; we will ignore anything above RIPE for the moment. Does this stay with that choice of that single certificate? Because if so, the validation argument and everything that you have talked about has got some operational pluses, but that doesn't change the law, the issue of control of legality inside Holland versus anywhere else, and the potential for an operational hiccup in the contents of that particular certificate and the list of resources in that certificate. So did you actually really mean that the LIRs have a choice of root at this point, and if so, can you expand upon that? I don't think you did say that, but it was in your language.

ALEX BAND: No, and I don't think I meant that, either. The certificate tree is an accurate reflection of registry data. If we were presented with a court order to tamper with registry data, we would fight that with everything within our means. Yes.

MARTIN LEVY: I accept that statement, except... continue.

ALEX BAND: Ultimately, the only infrastructure that we are providing is for a RIPE NCC member to make a validatable statement using their resource certificate as the source for the claim.

MARTIN LEVY: I appreciate all that, but the question is: does an LIR at any point in time have a choice of which root they use for their certificate? What their parent is going to be? Is it always going to be the RIPE parent?

ALEX BAND: Yes.

MARTIN LEVY: OK.

ALEX BAND: The authority has to more or less start somewhere; it all traces back to the RIPE NCC root certificate and us being the authoritative source for who is the holder of which block of address space.

MARTIN LEVY: I want to make sure you stated that, because that is very... that hasn't changed, it can't change, but you said something that made it sound like...

ALEX BAND: I know what I said, and I think I know what you mean now. The point is that in the RPKI validator software, and in any other validator software, you configure Trust Anchors. As an operator, it is your choice which certificate authority you are going to trust. If somebody else starts making statements saying, I know who the legitimate holders of these addresses are and I think I know better than the RIPE NCC, then of course as an operator you are free to accept that, or choose that as a Trust Anchor.

MARTIN LEVY: For the record then, you are stating the options that you can provide for your software base, your validator, which is by the way something we use, but that is not ultimately what everybody will use; there are many others. This is based on a protocol defined by the IETF, and therefore anybody can write software against that protocol.

ALEX BAND: Yes.

MARTIN LEVY: To make sure it's said publicly: I could name some large operator in Europe who says to me, I want to run with my own root and that is the way it is, and they may have enough clout to say to me, I will install that root on whatever software I have. That is a commercial clout statement, not anything else. So I just want to make sure I now understand what you were saying: it's about installing certificates on your validator, which of course could be done on any other validator or router platform.

ALEX BAND: Exactly.

MARTIN LEVY: Now I understand.

RUEDIGER VOLK: Deutsche Telekom. Let me first throw some terminology at the previous discussion. I would very much encourage distinguishing between root certificates, when you are talking about the CAs doing something, as opposed to Trust Anchors, which are what the relying parties configure as the root certificates they are pointing at and selecting as the start of what they believe.

ALEX BAND: Yes, you are absolutely right. I mix and match.

RUEDIGER VOLK: I am sure I have seen, over the past few days, a couple of slides where "Trust Anchor" was said about the CA operation, where it is really not a good idea.

ALEX BAND: Let me just skip to the roadmap slide. I am sure you would like to have that visible.

RUEDIGER VOLK: One simple question before I make it into something more complicated. For your roadmap, I actually would like to see a simple and early date for when we will be able to get certificates for all IPv6 resources. The IPv6 space shouldn't have too many complicated cases. It would be really lovely if we could get the coverage for IPv6 complete and clean at an early and defined date.

ALEX BAND: Duly noted, and under gradually expanding the address space, I will put that at the top of the list.

RUEDIGER VOLK: Kind of define that as a precise target.

ALEX BAND: Yes.

RUEDIGER VOLK: Actually, some of the roadmap is a little bit too wishy-washy for me but, anyway, your request for feedback obviously needs an answer, and more of an answer than there is space for here at the mic.

ALEX BAND: Yes.

RUEDIGER VOLK: Well, one point I would like to make: when looking at the discussion of monitoring, and of enabling relying parties, meaning network operators, to control what they are actually using, it is important to keep in mind that the party that is actually threatened by interference with the system is the root CA, which may have law enforcement walk in, or may have a drunken operator, or, well, who knows what happens to the Executive Board one strange day. The mechanisms to control this quite obviously should not be expected to come exactly from that body; it really needs active participation by parties that want to trust and want to control, and, at the moment, I am not really aware of activity of that kind. There are, I think, many possibilities within the design space of the RPKI, at the risk of confusing more of your audience: many control points where the relying party could actually figure out that something unexpectedly changed, and some twisting of knobs might be applicable for fixing things. Which, by the way, is also a nicer and more well-defined system than, let's say, DNSSEC, when we look at where we can twist things and where the CAs and the keys and so on are.

So much from me for now.

RANDY BUSH: Excuse me for taking time from the next speaker. DNS, RPKI, etc., all suffer from one common problem: they have to attest to who owns the address space. That is the allocation hierarchy. It forces a hierarchy upon us, and hierarchies are not good things in the Internet. But if you want to fix it, you are not going to fix it with technology; go back and change the way addresses are allocated, and that will obviate it.

CHAIR: Thank you. Any other questions? Let's thank Alex Band, please.

(Applause)

RANDY BUSH: I am going to try and stay at Layer 7 and below; we will see how successful I am. How do these turkeys work? So, you know, I run a certificate engine, I publish certificates into the database, all that kind of stuff. You have all heard these stories before, so I am going to go fairly quickly, right? You have got to be bored with this by now.

So there is a nice little GUI so I can create ROAs, and when I do so it will tell me: if you create this ROA, there is this prefix in the routing table today that you are going to make invalid, and all those wonderful things. And so the IANA, maybe some year, will have APNIC and RIPE etc. below it, and IIJ will be below that, and we all publish our pieces of this stuff. These are the issuing parties; they are issuing certificates, ROAs, ghostbuster records, etc. I wish to differentiate them from the relying parties who, using a Trust Anchor or a set of Trust Anchors, gather the data from the distributed database into a validated cache. And to what Ruediger was trying to say before about the DNS approach: there you are not going to get a fully validated cache, it's not deterministic, whereas in this case it is deterministic. From this validated cache you might create pseudo-IRR data, you might have tools to say who owns some prefix, but the part I am interested in, being a router geek, is putting it inside a router.

What happens when it's inside a router is what is interesting to me. So you have got this cache, you have got this protocol to put stuff inside the router, and you have got BGP data, and they are being compared. But what happens is that the comparison merely, as Alex pointed out, marks the data: this BGP announcement, as compared to the RPKI data, is valid, invalid or not found. And I am going into disgusting detail about that.

Valid means there is a matching and covering VRP. They are not called ROAs because inside the router you do not have a ROA; all the crypto goop is gone. All you have got is prefix, prefix length, maximum length and AS number. So we call them validated ROA payloads, or something like that, I don't know. It's on the next slide, I think.

So, for valid, you have a matching and covering VRP: the announcement was found within it and the AS was right. Invalid means there is a covering VRP but the AS was different, and no VRP did match. Not found means there was no matching or covering VRP, same as today.

The operator tests the match and then applies local policy. So, what are the matching rules? VRP stands for validated ROA payload. There I go, I knew it meant something or other.

So, if we have a BGP announcement of 98.128/16 and a VRP of 98.128 with prefix length 12 and maximum length 16, the VRP covers the announcement, because 12 is shorter than 16, right? In this, by the way, cover does not care about the maximum length; the question is just: does this prefix cover that prefix? And we are used to thinking this way, so this slide is boring. A /16 covers; a /20 does not cover, it's too long.

Next. That was next, I guess. Yes. So if I announce that /16 from AS 42, and a VRP says I allow 12 through 16, then this /16 is within that range and the AS matches: this is covering and matching. Here, the VRP covers 16 through 24, and yes, 16 is within 16 through 24, but the AS numbers are wrong: that is invalid. And here we have got 20 through 24; oops, the announcement is not in this range, so this VRP does not cover. It won't even be applied to this announcement, so this would be an unknown. To get a little more formal about it, and I am inserting these almost as a joke, this says the same thing as the previous slides: the VRP covers this route announcement if it is in the intersection and the length is less than or equal, and there is the match. Here is a table version of it. Here is a more formal notation. Have you had enough of that stuff?

AUDIENCE SPEAKER: Yes.

RANDY BUSH: I had to do it. So anyway, matching and validation: here we have two VRPs, 16 through 24 from one AS, and another one; the reason I am going through all this boring stuff is that I keep meeting people who don't understand, and we make mistakes. So, here is a /12 announcement from AS 42. The /12 isn't covered by these ranges, so it's not found, because there is no covering VRP. Here is a /16 from AS 42: both of these VRPs cover, and for 42 this one matches. So there is a match with VRP 1; this is valid. Similarly, here is a /20: it matches VRP 0, and it is also covered by VRP 1, which it doesn't match because the AS is different. But the first one saves it. And we go on.
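The cover/match/outcome rules above, including the "one saves it" case, can be sketched as follows. The VRPs and AS numbers are invented for the example; the logic follows the rules as described in the talk, not any particular router implementation.

```python
from ipaddress import ip_network

def covers(vrp, route):
    """Covering ignores max length: does the VRP prefix contain the
    announced prefix at all?"""
    return ip_network(route[0]).subnet_of(ip_network(vrp[0]))

def matches(vrp, route):
    """Matching also requires the announced length to be within the
    allowed range and the origin AS to agree."""
    return (covers(vrp, route)
            and ip_network(route[0]).prefixlen <= vrp[1]
            and route[1] == vrp[2])

def outcome(route, vrps):
    """One matching VRP makes an announcement valid, even if another
    covering VRP names a different AS."""
    covering = [v for v in vrps if covers(v, route)]
    if not covering:
        return "not found"
    return "valid" if any(matches(v, route) for v in covering) else "invalid"

# VRPs as (prefix, max_length, origin_as), loosely after the slide example.
vrps = [("98.128.0.0/12", 16, 42), ("98.128.0.0/16", 24, 666)]
```

With this VRP set, 98.128.0.0/16 from AS 42 is valid via the first VRP; a /20 from AS 666 is saved by the second; a /20 from AS 42 is covered by both but matched by neither, so invalid; and unrelated space is not found.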

So now, I want to go into interesting weird things found on the way to the store. In the spec, it says that a VRP for AS 0 (Geoff snuck this in late in the process) is one that nobody can match and that, therefore, forces invalid. But what happens if there is a VRP for AS 0 and yet there is another VRP which does match? The announcement is matched by that second VRP, and, to go back to what Alex was saying, this is hidden in there. I think I can go further.

The actual router implementations today reject announcements covered only by an AS 0 VRP. So, if you get an AS 0 marking, the announcement will be marked as invalid as long as there is no other matching VRP. But think of the case, and excuse me for a short excursion into Layer 9, where we were worried about the Dutch court saying that RIPE is going to issue an AS 0 VRP for my prefix because I have got a bad song on my website. But there is another Trust Anchor that you could use, what I think Alex would refer to as a monitoring trust anchor, that is watching out for us all, which could also stick in a VRP that says, hey, this is where it used to be, you are safe. One of the two VRPs matched. So the fact that somebody attacked through the court can be handled, at least in the protocol.
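The AS 0 behaviour fits the same framework: since no real route originates from AS 0, an AS 0 VRP can cover but never match, so on its own it forces invalid, while a second VRP, for example one published under another trust anchor, can still rescue the announcement. A deliberately simplified sketch that ignores prefix lengths for brevity:

```python
def as0_outcome(origin_as, covering_vrp_ases):
    """Outcome given the origin AS of a route and the origin ASes of the
    VRPs that cover it. AS 0 entries only cover, never match, because no
    announcement can legitimately originate from AS 0."""
    if not covering_vrp_ases:
        return "not found"
    matched = origin_as in covering_vrp_ases and origin_as != 0
    return "valid" if matched else "invalid"
```

So an AS 0 VRP alone marks the route invalid, but adding a second VRP for the real origin AS, from whichever trust anchor, turns it valid again.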

Here is an interesting one: if your policy accepts invalid but down-prefs it very far, the prefix can still be accepted even though it's marked as invalid. If it's the only candidate for the more specific prefix, then it's going to be used, right? So you accept an invalid prefix, it was the only thing out there, it's the only candidate, it's going to be used, something you know is wrong. So maybe you don't want to accept invalids, even at a low pref.
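The trap described here, an invalid route winning simply because nothing else exists, is easy to see in a toy best-path selection. This is an illustration of tie-breaking on local preference only, not a real BGP implementation:

```python
def best_path(candidates):
    """Pick the candidate with the highest local preference. Validity
    only influences the choice through the local-pref the operator
    assigned; an invalid route that is the sole candidate still wins."""
    return max(candidates, key=lambda r: r["local_pref"], default=None)
```

With only an invalid announcement present, it is selected despite the low pref; add a valid alternative at a higher pref and the valid one wins, which is exactly the behaviour the down-pref policy intends.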

Ah, so we had a great argument. You will notice, for those who followed the IETF garbage, that a new copy of the prefix validation draft was issued today, resolving this question: should updates learned via iBGP be marked? The old spec said no. So if I had an iBGP injection into my AS, like I have five routers and one of them injected a prefix, I would have an invalid prefix in my AS, originating in my AS; I would announce it to my neighbour, they would know it's invalid and I wouldn't. That was broken, I really didn't like it, and we had a two-month-long fight about it. So the current situation is: announcements will be marked whether they come via iBGP or whether they are injected into BGP from a static route, what we normally call a nail-up. In both cases they will be marked, so you will know that what you are saying is badness.

Here is a fun one: a friend of mine at a large telco where I used to work has a /8, and he says: I'd like to protect that /8, so I want to issue a ROA for it, but I have got hundreds of BGP speakers that have little bits of my space, hundreds of them, so how long would I have to wait for them all to get registered? I want to announce my /8 and not have their prefixes marked as invalid even though they don't come from my AS. So he wanted to signal that punching a hole in his 10/8 was allowed. It was actually 12/8; we all know who he is.

So, I said, well, if you really want the hack, we could signal it by, for instance, putting in a max length of zero, and that would say OK. If you have these announcements, 10/8 with max length zero and the /9s for these ASes, then your announcements would all be marked valid, and people punching holes in yours, like AS 666, would be marked as not found; but if somebody tried to steal your /8, they'd get caught; steal your /9, they'd get caught; steal this /9, they'd get caught. Do you really want that, Jay? And Jay looked at it and finally said no. Cute hack, but no. Instead, it's pretty easy: generate a temporary customer table from the ROA table, generate the ROAs for your customers, fake it up, and then everybody is protected, and go forward from there.

And I think I am getting you to lunch on time.

(Applause)

CHAIR: Thank you, Randy.

BRIAN NISBET: Just to delay a little bit longer: does anybody have any questions, or are you all very, very hungry? It looks like hunger is outweighing questions, so thank you very much.

CHAIR: Is Joseph Gersch still here in the room? We had a very nice discussion earlier on during Joseph's presentation, so those that would like to continue that discussion, please come over here. I think Geoff, Randy, you are both very interested, and I think there were a couple of people that stood up wanting to ask questions but changed their minds. So thank you.

BRIAN NISBET: So, yes, lunch now. We will be back in an hour and a half for more plenary fun, and please remember to rate all the talks. Thanks very much.