
These are unedited transcripts and may contain errors.

Plenary Session: 17 April 2012, 2:00-3:30 p.m.

CHAIR: We are back to the Plenary Session. For this session we have one announcement to make. The network was broken in the morning. As we heard, the NTP server confused the DHCP server and things went berserk. This has nothing to do with the fact that we are still in Honolulu, as I hear, but we are slowly coming back. So now the network should be up and running and working and we can continue with the Plenary Session. This is a more IPv6-orientated Plenary Session and Sander will tell you who is the first speaker.

SANDER STEFFAN: Welcome, our first speaker is Geoff Huston. You probably all know him because of his analysis of v4 stuff and numbers and statistics. And I think today he will actually move on beyond this legacy stuff and compare it to v6.

GEOFF HUSTON: Hi, and good afternoon again. This is about work I do. How many of you have used Google as a search engine? Of course you have. Have you noticed that the more you type into a query, the worse the answer? Isn't that weird? Isn't that weird, because as humans, the more I try to explain what I'm on about, the more you understand. But it seems that Google isn't human. That if I type in just the right two words I get the answer that I'm looking for, but if I say more, it gets worse.

Now, take that thought and think about what happens when you take a world of one protocol, IPv4, and add another. Surely that means there are two ways of getting there and, you would think, that with two protocols life would be better.

Let's take that thought and see how the operating system manufacturers and browser guys have managed to stuff it up.

So this is actually all about work that George Michaelson and I have done about trying to understand what browsers do when you give them way too much choice.

So, what we are trying to understand is this: for many years v6 wasn't really there. Therefore, this strategy worked, because almost nothing had a AAAA record and almost none of you had v6 enabled. So inside Windows, inside Mac systems, inside your browsers was actually this algorithm here: whenever you went somewhere and typed in a URL, it would ask for both the v4 and the v6 records and it would wait for the A and the AAAA answers to come back. If it found a v6 answer, which it never did because none of you guys ever turned it on, but if it ever did and it had a local v6 interface, it would actually try that first. And then it would try v4. And because none of you turned on v6, none of you noticed, but those of you who did and tried it gave up. Why? Because things became not just slow, not just glacially slow, but, if you were unfortunate enough to have something that ran on Linux, geologically slow, because it waited for up to three minutes for the v6 not to respond before backing off. And even in Windows, 3 SYN packets, 19 seconds; your Mac was crap, 75 seconds. Obviously, the more folk who turned on v6, the more people they pissed off, because waiting for 75 seconds for something not to work is not a very user friendly idea. And the major problem was actually inside Windows, because Windows had two auto-tunnelling techniques, 6to4 and Teredo, and, as you will see, they really, really suck, and preferring to use them over v4 was a terrible idea. So about a year or two ago, when we actually started to use v6, all of a sudden this strategy was obviously bad. So then we decided on strategy 2: we'll have a preference table. So inside Windows Vista, inside the more recent Mac systems and inside your browsers you now have this sort of cascading preference. If there is v6 and it's native, it's direct, there is no auto-tunnelling, try that first. If you can't do that, use v4. And if whatever you are going to is v6 only and all you have got is auto-tunnelling, okay, then use it. So it kind of turns off auto-tunnelling.

And what happens when you fail? Well, nothing changed. There are still two protocols there, but the operating system takes forever to go and figure out what's going wrong. Someone said it took them up to 90 minutes to load a page coming from one of the dual-stack servers for ARIN, because there were so many different components and each component had a 180-second wait. It just sucked. And that strategy leads your users to a life of abject misery. No wonder no one liked v6, because as soon as you turned it on, the experience was broken.

So, Microsoft, because Teredo really, really, really is bad, took this back and came up with a new theory: if all you had was Teredo (and for anyone running Windows, as soon as you leave this room and go to a normal place that doesn't have v6, that's all you have; you still have v6, but it's Teredo), oddly enough you never use it, because if all you have is Teredo, the operating system goes no, no, no, I'm not going to use that, and it doesn't query the DNS. So, Teredo is only used when there is no DNS lookup. What application doesn't use the DNS? BitTorrent. It uses distributed hash tables. What is full of Teredo? BitTorrent. Same reason. So anyway, with this new behaviour you get rid of Teredo, so if you are behind a NAT and you don't have v6, you effectively turn it off.

This is stupid. It still just doesn't work. This idea of doing things one after another goes against everything we have ever learned about computers, computation, communications and speed. The way we make things fast is to do things in parallel, and understanding that you are going to flip based on a time-out is about the same as banging the rocks together; it's just stupid.

So, you know, we have this new idea. We need to fail with more style. We need a better form of failure than the one we are using at the moment. So, you know, how do you determine you're failing? By the way, why did we do this? Why did we have such amazingly long time-outs, three minutes? How far can a packet go in three minutes? I think it's to Mercury and back, is it not? It only takes eight minutes to get to the sun, damn it. So a three-minute wait, what's the packet doing? The reason why we had such long time-outs was that, in a uni-protocol world, if that was the only protocol you had and you can't connect, you have got to fail. So they are sitting there: oh, if I am going to fail, I might as well wait for three minutes just in case something weird happens in the interim. So these operating systems have these time-outs because of conditions 20 years ago. This failure code is the failure code of BSD dating back to 1982, and no one ever looked at it again. So, when we put in two protocols, we took two protocols with 1982 code. That sucks. In fact, it sucks as badly as running a two-horse race by sending off the first horse and, if it dies, only then sending off the second horse. We don't do this. You know, if you actually want a two-horse race, start them off together. That's the way to actually do it, and whichever one wins, wins; that's the formal way of doing it. All of a sudden we twigged: hang on a second, we have a computer. Oh, you can do two things at once... oh, really? And we came up with this idea of happy eyeballs, but of course everyone has a different interpretation, and there is no such thing as a true standard. So, the first one I want to look at is Safari and Mac OS. Safari and Mac OS are now, if you are running 10.7, pretty cluey, because Mac OS is keeping an RTT record of everywhere you have been in TCP recently. It's a little cache, so every time you go to a site it goes: that was 30 milliseconds away, that was 150 milliseconds away. And then it goes: what about the RTT of somewhere you have never been to before? Well, "I don't know" is the wrong answer. Tell you what, it's the average of all the others you have been to. So, when you get a AAAA and an A record back, you don't prefer v6, you don't prefer v4. You can do this right now and some of you will go one way and some of you will go another way, because it's based on your local machine's current running average of RTTs for each protocol. Cool, this is starting to get really snappy. I connect using what I think is the fastest protocol and I wait for precisely one round trip time, as quick as 30 milliseconds. If I don't get an answer, I start up the other protocol straight away. Instead of waiting 75 seconds, loser, you are now waiting 30 milliseconds. Oh my God, this is really good, nice idea. Not quite.
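As a rough sketch of that per-protocol RTT averaging, and not Apple's actual code (every name here is invented), the selection logic might look something like this in Python:

from collections import defaultdict

class RttCache:
    """Running average RTT (in seconds) per protocol family."""

    def __init__(self):
        self._sums = defaultdict(float)
        self._counts = defaultdict(int)

    def record(self, family, rtt):
        # Called after every TCP handshake that completes.
        self._sums[family] += rtt
        self._counts[family] += 1

    def average(self, family, default=0.25):
        # For somewhere we have never been: fall back to the running
        # average for that protocol, or a plain guess if we know nothing.
        if self._counts[family] == 0:
            return default
        return self._sums[family] / self._counts[family]

def pick_first_protocol(cache):
    """Prefer whichever family currently looks faster, and wait roughly
    one round trip time for it before firing off the other family."""
    v6 = cache.average("inet6")
    v4 = cache.average("inet")
    first = "inet6" if v6 <= v4 else "inet"
    fallback_after = min(v6, v4)  # roughly one RTT
    return first, fallback_after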

The problem is it doesn't quite work like that. How many people use multiple addresses behind a single server name? Google has about three, right? Etc., etc. Do you think it makes you more resilient? More robust? You are meant to nod, because that's why you did it. Otherwise why the hell did you do this stuff? You put in multiple addresses to make the system more resilient, and Mac OS goes: you have given me five v6 addresses, so before I revert to v4 I am going to try all five, serially, because I don't really understand parallelism. It sort of worked, but the real lesson is, in this dual-stack world, and this is a bizarre lesson, putting up multiple v6 addresses, or even v4 addresses, will actually make the user experience for this particular configuration worse if things don't quite work properly. Which is a really surprising outcome. So either fix yourself or write to the Mac OS people and tell them to fix their TCP stack, but you do have to change things. It's not quite right.

Chrome: although Chrome sits on Chromium as an operating system, Chrome is also operating system independent. So, unlike Safari, it can't reach down and find an RTT estimate, because Windows doesn't do it and Linux doesn't do it. So, how do they figure out what's fastest? What's the fastest protocol, v4 or v6? I know, let's ask the DNS. How many people know about the DNS? Come on, all of you do. Does the speed of a DNS answer have anything to do with the round trip time to the server you are going to? No. What does Chromium do? Pick the fastest DNS answer. Why? I don't know. Bizarre. But anyway, it's kind of cute. The wait is now a third of a second. It's better than 75 seconds, I'll give you marks for that. But anyone who thinks that DNS resolution speed leads to RTT speed has rocks in their head. I shouldn't really say that; maybe I should: rocks in their head, wrong idea. The DNS doesn't determine RTT. So it's kind of happy-ish eyeballs, but we can do better. A third of a second is still way too long. How far can you go in 300 milliseconds? Across the Atlantic and back. Not good enough. Along comes Firefox. I kind of like it, because this is actually really quite delightedly happy eyeballs. These guys are fast. Why? Because they do the DNS and, as soon as they get back a DNS answer for each protocol, they fire off a SYN straight away, and whichever one comes in first works. It's quick. It is really quick. And it's true parallelism. Unfortunately, I'm not sure it's on by default. So go into Firefox, find your config settings and set fast failover. It's not fast failover. It should have been called highly parallel delightedly happy eyeballs, and you actually get a really, really quick dual-stack response.
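A minimal sketch of that kind of parallel connection race, as an illustration rather than Mozilla's real implementation (the function name and structure are made up):

import socket
from concurrent.futures import ThreadPoolExecutor, as_completed

def racing_connect(host, port, timeout=5.0):
    """Resolve both A and AAAA, fire a SYN for every answer in parallel
    and keep whichever handshake completes first."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)

    def attempt(info):
        family, socktype, proto, _, sockaddr = info
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(sockaddr)        # raises OSError on failure
        except OSError:
            s.close()
            raise
        return s

    pool = ThreadPoolExecutor(max_workers=len(infos))
    futures = [pool.submit(attempt, info) for info in infos]
    winner = None
    for future in as_completed(futures):
        try:
            winner = future.result()   # first successful handshake wins
            break
        except OSError:
            continue                   # that address lost the race
    pool.shutdown(wait=False)          # a fuller version would also close the losers
    if winner is None:
        raise OSError("every connection attempt failed")
    return winner

# Example: sock = racing_connect("www.example.com", 80)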

So, I have been doing this in my spare time, it took about a month, and I have actually tested most browsers on the most popular operating systems, and there is the kind of table there of what happens. Is anyone here from Opera? Fix yourself. On Linux, on Opera, you are still on three minutes. Chromium is doing a lot better; Chrome, 300 milliseconds. Firefox is really brilliant, it just fails straight away; whichever works is quick. Windows and dear old Explorer: 20 seconds. It's been 20 seconds since 1903 or whatever. I wish they'd actually upgrade their stuff, and maybe Windows 8 has something in it. I tested iOS; it's the same fundamental stack as Apple's Mac OS, but for some reason the failure is still 720 milliseconds. Why do they think they are special? These mobile guys need a life. Get back with the rest of us.

And then you sort of go: why are you doing this? What was wrong with the original model? Are we just tinkering around? Is v6 really the same as v4? So I then started to get curious about this and I actually wanted to understand: how fast is v6? How fast is v4? I know what I'm like. You know what you're like. But I want to measure everyone, I want to measure everyone in the room, I want to measure everyone. Unfortunately, Google won't share their data with me. But what they will do is let me put a test inside an ad, and Google have worked with us and we are actually doing a subtle little ad that does a test. And every time you see an ad that says "We are measuring v6," don't click on it, because someone has to pay, right. Just let it run. And we are now doing around 500,000 of these tests a day all over the planet, so now, with Google's help, we are really getting some data about how well v6 and v4 actually work. And we are now able to gather measurements about quality and failure, and I want to go through some of this.

So the first one is actually talking about failure. That's a real picture. Somewhere over there in India are these two bridges, right, and there is a very brave human walking across on a bridge that, quite frankly, is more not there than there, right? On this bridge, the path back is broken. What's the photographer standing on? Or are they falling, and this is the last picture they ever took? There is nothing down there. I love this photo. But that's the kind of issue that we are finding. With an awful lot of connections, the outbound SYN goes to the server, me; I send back a SYN-ACK but it never makes it. You know, buried somewhere in the river.

How bad are things? So, this is a graph from here, 0 percent to 80%, this is the connection failure rate for v6. This is the connection failure rate for v4. Interestingly, neither of them is 0, but the connection failure rate for v6 is kind of frightening, 4 out of 10 fail. We'll come back to that. But let's just figure out what's happening in v4. Why is the failure rate for v4 so high? What's going on?

There is an awful lot of malware out there, yeah? About 3 gigabits per second of malware, and somewhere out there this malware probes you with a SYN on port 80 just to see if you SYN-ACK. I have a port 80 server, I get malware probing me. That's what I see. They never want the http connection; they just want to see if I'll answer. What I'm actually measuring in v4 with this sort of residual connection failure is nothing to do with connection failure; it's just malware noise. We understand what v4 is doing. I am also subject to huge SYN flood attacks, thank you to whoever sent them. I know who you are and I'll come and get you later.

V6, however, that's interesting. I stuffed a few things up because I am running a Teredo relay inside, so some of it is my fault. But 40% failure, that sucks. So then I started to look at what's failing. And interestingly, because I use a particular technique that makes Teredo work, a lot of you are running Windows with Teredo. The failure rate for Teredo is really high. Some of you run 6to4, whether you know it or not, and the failure rate for 6to4 is pretty high. And some of you are doing Unicast.

Teredo: a two-step connection process. You do an ICMP exchange and a SYN exchange. Both things don't work. There is nothing wrong with the protocol. Teredo is a perfectly fine protocol; it's meant to go through NATs. 4 out of 10 times it doesn't work. Who is going to deploy CGNs in the next year? What is the expected failure rate that you are going to see for connections through your CGN? Good luck.

The story with NATs is really quite ugly. I must admit Teredo is trying to do three-party rendezvous, it's trying to do symmetric paths through a NAT and that's hard, but it doesn't work. So if you think that with a CGN you can do anything more than port 80, think again. Because what we're seeing is a practical example, in the world today, of a perfectly reasonable, widely deployed protocol failing massively. No one cares.

So, okay, NAT traversal is really bad. If you are thinking of a CGN, you have got trouble. Interestingly enough, one protocol works brilliantly. One protocol sees a 40% failure rate and doesn't give a stuff. What's that protocol? BitTorrent. Because it's so massively redundant, it can cope with this stuff and have it for breakfast.

6to4: a lot of people still have this, a fair deal of folk. It's asymmetric and there are a lot of performance problems with 6to4. It has a pretty high failure rate and, fascinatingly, it's different continent by continent. I have a server in the US: 20% failure rate; a server in Asia: around a 5% failure rate; a server in Europe: around a 10% failure rate. There is something cultural about failure in 6to4.

Actually, what goes on is, 6to4 is automatic and you have firewall functions in your CPE, your DSL modem, and a lot of them are configured so that incoming protocol 41 is evil. So, you can send out the SYN in 6to4, I send back the SYN-ACK, your firewall goes no, no, no, protocol 41, and kills it. Now, you kind of go: how do you fix that? You can't. 6to4 is dead technology. That failure rate of 10 to 20% is too high. So okay, that's 6to4, that's fine. This is the bit that interests me. Why do I get a failure rate for native mode v6 across the planet of not 1%, not 0.1%, but 1 in 20? 5% of connection attempts fail. So for three months, I have managed to get 110,761 unique tests on clients, and if you have done my ad twice I'll take the best. And, of those, 6,227 failed. Now, a few of you guys are obvious losers: whoever is trying to use fe80 to go and reach the world, forget it, it will never work. Whoever is trying to use ULAs in the world, you are not meant to use these, stop that. Someone used ff02, that's multicast, is it not? Whatever. Don't do this. The good news is that, six years later, we have finally killed the 6bone; 3ffe did not appear. The obvious mistakes are there, but I have still got 6,200 folk who failed.
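For illustration, a small sketch of how one might weed those obvious losers out of a list of failing client source addresses, using Python's standard ipaddress module; the category list follows the talk and the sample addresses are made up:

import ipaddress

BUCKETS = [
    ("link-local (fe80::/10)", ipaddress.ip_network("fe80::/10")),
    ("ULA (fc00::/7)", ipaddress.ip_network("fc00::/7")),
    ("multicast (ff00::/8)", ipaddress.ip_network("ff00::/8")),
    ("6bone (3ffe::/16)", ipaddress.ip_network("3ffe::/16")),
    ("6to4 (2002::/16)", ipaddress.ip_network("2002::/16")),
]

def classify(source):
    """Label a failing client's IPv6 source address."""
    addr = ipaddress.IPv6Address(source)
    for label, net in BUCKETS:
        if addr in net:
            return label
    return "global unicast"

for sample in ("fe80::1", "fd12:3456::1", "ff02::1", "3ffe::1",
               "2002:c000:204::1", "2001:db8::1"):
    print(sample, "->", classify(sample))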

Eight of you can't type in addresses; you just used a bad address for yourself, and 66 of you were using unadvertised unicast, but that still left 6,100, and it seems you have got a problem with incoming v6. 6,000 is a lot. There are only 200 countries, so I can divide that 6,000 up by country. So the country that has the highest v6 failure rate, at 18%, is Spain. In Sweden, 10% of the connections failed. Just over the border in Norway, almost nothing fails. Swedes, talk to the Norwegians, get it together, okay. There are some strange answers in there. China is actually surprisingly good. There are massive differences country by country. I wish it was a larger set. I wish more of you were doing v6, but I am surprised there is that much variance. So, very wide variance.

So that's the failure thing. The next kind of question, in the time available (I'm getting really close), is: is 6 faster than 4 or not? For every client I test in 4 and in 6, I know the two addresses. I can compare your round trip time. So I have. And this is a graph, a distribution; it's a log scale, so be careful. On the left v6 is faster. On the right v6 is slower. The blue is Unicast, the green is 6to4, the red is Teredo. This is cool, wow: Teredo is clearly slower. Teredo shouldn't be slower, because I am running the Teredo relay. The path from me to the client is the same path as the v4 path. For some reason there is crap out there that makes Teredo slower even though the path is the same. I reckon that CPE-based NATs are really, really, really poor performance units, and if you start to stress them they go really, really slowly and it hurts. That's interesting.

Over in Europe, Teredo is faster for a certain number of folk. Bizarre. Is Teredo faster or is v4 slower? Are you doing things to that v4 SYN on port 80 that you are not doing to an encapsulated packet? I don't know. But I am really surprised to see that. Australia is a very strange country. It's very strange for a number of reasons, but one of them is that the connectivity is broken and bizarre. Look at the blue. This is Unicast v6, and it's both faster and slower simultaneously, with a strange peak over there at 300 milliseconds, which is across the Pacific twice. Very strange stuff. You notice in 6to4 this huge hump up there. That distribution is just weird, but I suppose that's Australia for you. So yes, really sub-optimal routes. The US is a bit tighter; there are still folk using 6to4 and we had the relay on the other side of the continent. When we figured that out and redid it, interestingly, now we are much better; the relay is in Chicago, we are in Europe, it kind of works. In Australia we are broken.

So in the time available, what can I say? Is 6 as fast as 4? Yes. Don't tunnel. Tunnelling is bad. 6to4 is evil. Teredo is evil. Don't do it. But if you are going Unicast and you are worried that all the v6 folk will be slower: no. Everything we see says 6 is as fast as 4; we have got that right. Is it as robust? That's the bad news. No. That 5% failure rate is annoying. I don't think it's in the middle of the network. I don't think the middle of the network cares. I think it's still CPEs: somehow, and it strikes me as bizarre, you are actually able to get 6 to the client and it's able to send out packets, but incoming v6 payload packets are still getting blocked, on average, 5% of the time in current consumer devices. No idea why. It would be good if that stopped.

So, it's not quite the same. Dual-stack still needs work. So how should browsers behave? Firefox: it's a parallel race to the SYN-ACK, that's good.
Chrome: whatever is faster in DNS, I'll just run with; so-so, but still good. Or even Safari and Mac OS with their RTT estimate. They are all quick. I am quibbling a bit with Chrome just to sort of make fun of it, but, quite frankly, compared to three minutes this stuff is fast. This is actually getting really good. But is it good enough?

I'd actually argue we have over-achieved. We have really gone a bit too far. Most of you in the ISP business really see that the next few years are dual-stack; you can't simply turn off v4. And when RIPE runs out of v4 addresses on the 12th August this year, or maybe the 13th or maybe the 11th, who knows? When it runs out, around that day, you are going to have to do CGNs at some point for a while. But they are expensive. You are paying for them. You are not able to make the consumer pay. How big do you scale your CGN? Well, it needs to be as big as folk are using it. And the more browsers bang away at v4, the bigger your CGN needs to be.

So, what should a browser do to make sure you don't have to buy planet-sized CGNs? I think we actually went a bit too far. I would actually argue back to the browser and operating system folk: give us a break. Let us use CGNs as a last resort, rather than as a first resort. Fire off the v6 first. The initial view of dual-stack, back in version 1, is actually the right view. Fire v6 off first, but don't wait for 75 seconds. Don't wait for 189 seconds. If it doesn't work within 300 milliseconds, or if it doesn't work within one RTT, fire off v4 as well. Rather than happy eyeballs, because I think that's just too much happiness, go for biased eyeballs but still pleasantly amused.
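A rough sketch of that biased-eyeballs behaviour, prefer v6 but give it only a short head start before firing off v4 as well; this is an illustration of the idea, not any browser's code, and the names and timings are assumptions:

import queue
import socket
import threading

def biased_connect(host, port, head_start=0.3, timeout=5.0):
    """Start IPv6 first; if it has not connected within head_start
    seconds, start IPv4 too and take whichever family wins."""
    results = queue.Queue()

    def attempt(family):
        try:
            fam, stype, proto, _, sockaddr = socket.getaddrinfo(
                host, port, family, socket.SOCK_STREAM)[0]
            s = socket.socket(fam, stype, proto)
            s.settimeout(timeout)
            s.connect(sockaddr)
            results.put(s)
        except OSError:
            results.put(None)          # this family failed outright

    pending = 1
    threading.Thread(target=attempt, args=(socket.AF_INET6,), daemon=True).start()
    try:
        first = results.get(timeout=head_start)
        if first is not None:
            return first               # v6 made it within its head start
        pending -= 1                   # v6 already failed, no point waiting for it
    except queue.Empty:
        pass                           # v6 is still trying; leave it running

    threading.Thread(target=attempt, args=(socket.AF_INET,), daemon=True).start()
    pending += 1
    while pending:
        try:
            sock = results.get(timeout=timeout)
        except queue.Empty:
            break
        pending -= 1
        if sock is not None:
            return sock                # whichever family answered first
    raise OSError("neither IPv6 nor IPv4 connected")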

Thank you.

(Applause)

CHAIR: Thank you, Geoff. A really interesting presentation. Are there any questions? I see people going to the mikes. Who was the first one?

DAVID FREEDMAN: Claranet. Happier eyeballs in Firefox: the simultaneous connection stuff, I don't like it. I have to maintain much more state when those SYNs arrive.

GEOFF HUSTON: Firefox is extravagant. I noticed that when collecting a GIF file it opens up two connections. Parallelism is one thing, but don't overdo it. Yes, I think Firefox over-achieved with their fast failover, I quite agree. Firefox, if you ever watch this, change it, right: just prefer 6, okay.

ALEX: I have a question. Do you have numbers on whether, for example, web servers that run dual-stack are equally well maintained for v4 and v6? So is it more likely, if I try to connect to a v6 web server, that the address is down just because it's not included in a monitoring system or something like that?

GEOFF HUSTON: That's a very good question, and I don't measure that. Other folk measure servers. For all the traffic for these ads, I have three servers: with the assistance of the RIPE NCC, thank you, I have a server in Frankfurt; with the assistance of ISC, I have a server in LA; and I run a server at APNIC, in Brisbane. So I don't measure the servers; I measure the clients. Yes, it's a good question; if someone wants to talk about dual-stack server performance, ask them.

AUDIENCE SPEAKER: I also have a question, so I'm not allowed to use this mike. You said that Teredo was the fastest one in your measurements. Do we have any data on how Teredo's relays behaved during the World IPv6 Day last year when they were a bit more loaded and more heavily used than when you did your measurements when they were not loaded?

GEOFF HUSTON: Actually, Teredo is not normally used, because v6 day last year turned on AAAA records in the DNS, and the DNS doesn't trigger Teredo. So, where we see Teredo is actually in BitTorrent. There is not an awful lot of Teredo unless you really tickle it, but I actually wanted to find out how much of the world has an active IPv6 stack even if it's not used. Half the world has one. So if you access providers and CPE providers ever actually did the job you are meant to do and turned on Unicast v6, one half of this Internet would right away run Unicast 6, and it's all waiting for you. Everything else is there. And I find this a bit frustrating: Microsoft have done their job, Apple have done their job. The world is actually about as v6 prepared as it could possibly be and all that's missing is the last mile. So, I don't know why you are sitting here listening to me. Go off and do it. Thank you.

CHAIR: If there are no more questions, applause.

(Applause)

SANDER STEFFAN: Well, talking about people doing their jobs, last year we had a nice World IPv6 Day and now we have a presentation from Andrei about World IPv6 Launch this year.

ANDREI ROBACHEVSKY: My name is Andrei Robachevsky. Thank you, Geoff, for a nice lead-in to this theme.

So, from the World IPv6 Day that we had last year to World IPv6 Launch, and how it's different.

So, what's World IPv6 Launch? Well, this time it's for real. On the 6th of June, 2012, leading operators and content providers will switch on IPv6 and leave it on, so IPv6 is on by default, and this is business as usual. IPv6 becomes part of regular business. It's not a day; it's a step function.

Well, I'll skip through this slide. I think I shall be preaching to the converted with this slide, but, yeah, a lack of IPv6 deployment hinders connectivity and hinders innovation; IPv4 is a dead end.

Well, where are we coming from? A brief recap of what happened last year. The World IPv6 Day was a 24-hour switch-on test flight where Facebook, Google and Yahoo and 1,000 other websites turned on IPv6 on their websites for 24 hours. Some of them actually left IPv6 running after that. And the goals were to motivate service providers, hardware makers and operating system vendors to deploy IPv6 in their devices and in their networks, and also to understand the issues that we have in real life with a global-scale deployment of IPv6.

Breaking the chicken-and-egg problem, demonstrating that it can be done, that content can be made available on IPv6, was one of the main motivations. Improving IPv6 connectivity by, you know, doing this 24-hour test flight and fixing problems as we experienced and observed them. Also, quite important, providing a target date for some of the deployments. We heard from many people that World IPv6 Day stimulated and hastened the development and deployment of IPv6 in their companies and networks. And, very important, something that we sometimes forget, collaboration, because the Internet, at the end of the day, developed through the collaboration of potential competitors.

Well, what happened last year? Well, basically nothing, and that was terrific. Traffic numbers were globally low because that was not really the goal of World IPv6 Day, but it demonstrated that where IPv6 is enabled, as Geoff just said, everything is waiting on you; when IPv6 is switched on, it will be used.

So, what is going to happen?
On the 6th June, 2012, we'll have the World IPv6 Launch. IPv6, on this day, becomes part of regular business: on by default, no special configuration, no special treatment. Who participates in this event? Access networks, home router vendors and websites from around the world are participating, and please join; here is the URL. You will be presented with a form; you have to fill it out and press 'send'.

Well, I have to mention that, besides those categories, there are a lot of folks and a lot of companies collaborating and doing IPv6 to make it happen. Why are we doing this? Well, three things. First, acceleration: to accelerate already planned rollouts.

Adoption: for those who haven't finalised their plans yet, that's a good opportunity to follow the leaders.

And definition: Establishing a new baseline with IPv6 as business as usual.

Well, this time is different also because the focus is also on access networks, and that is another tricky part of deploying IPv6. Several big broadband providers, access networks, are joining World IPv6 Launch. New subscribers get IPv6 on by default after 6th June, and at least 1% of visits to IPv6-enabled websites, on average, will be done over IPv6. So that's the criterion for access networks to be listed as participants of World IPv6 Launch.

Well, 1% looks like a low number, a small number. But, in fact, because many of the things are outside of the control of broadband providers, it will require a much bigger deployment footprint for them to achieve this 1% of real IPv6 connections. CPEs and home operating systems, some of them still not IPv6 capable, cause this effect.

Home router vendors, well, they are also a critical link in this whole game, because if they are not enabling IPv6, if they are not supporting IPv6 by default, if they require special tricky configuration, then even if your broadband provider supports IPv6, your home network may not be able to connect through IPv6. A few of them are participating and we are inviting more of them to join World IPv6 Launch.

And, of course, websites. Well, the leaders that participated in World IPv6 Day last year are joining the launch, and for Facebook, Google, Microsoft Bing and Yahoo, IPv6 will remain on after 6th June 2012. And others are welcome to join; more than 1,000 have already joined, so that's a good effort. And we shouldn't forget about CDNs; they are enabling their customers who ask for this.

IPv6 becomes, as I said, part of regular business. It will not be switched off. It will remain there as a regular thing.

So, just to wrap up: joining the World IPv6 Launch is a commitment to commercial-grade IPv6.

It demonstrates that industry leaders are committed. The whole thing is aimed at encouraging additional commitment: follow the leaders. And it builds on World IPv6 Day in 2011 and makes a significant step up from it. So, we hope that this event will give further momentum to IPv6 deployment and will allow it to move into the future.

Thank you.

SANDER STEFFAN: Any questions?

AUDIENCE SPEAKER: Lisa from Limelight. I just want to jump in on the sentence you said about CDN providers. We promoted IPv6 last year during IPv6 Day with certain customers that we motivated to turn on IPv6 for the content that we were hosting for them. It turns out that it was so successful that most of them actually kept it enabled ever since, and we were thinking: what can we do for this year's IPv6 launch, since last year was already the IPv6 launch for us? So we are not going to, as you said, do it only upon request any more and motivate people; we are actually preparing to turn up new customers we have on the CDN by default with v6. We have prepared all the needed tools for people to opt out, to have the panic button and say something isn't working for me, like last year, so everything is in place, and we are working on training internally to be able to, in the theme of 'it's for real this time', keep it going all the way.

Thank you.

ANDREI ROBACHEVSKY: This is really great, awesome, this is truly in the spirit of World IPv6 Launch. Thank you.

SANDER STEFFAN: I think this is a good message for everybody.

AUDIENCE SPEAKER: Just a quick point I want to make. On World IPv6 Day, we turned it on for a bunch of customer websites. We have a volume hosting platform. We turned it on for these customers who wanted to participate, and a few of them came back afterwards with the same message. They said that was absolutely great, it worked, but we can't read our stats. What happened on the day? Because a lot of them were using some of these common stats packages, some of them were using Google, but a lot of them were using a package that couldn't process it. They said that was really good but we just lost a day of stats.

SANDER STEFFAN: I think it's a good message that everybody should really take v6 seriously now, even the stats providers. And I think it's a good message for everybody: on the 6th of June, a lot of big companies in this business are doing v6 by default. So, yeah, please follow that lead and provide v6 by default to your customers.

ANDREI ROBACHEVSKY: And don't disable IPv6 on your laptops, please.

SANDER STEFFAN: Thank you.

(Applause)

CHAIR: And we come to our last presentation in this Plenary Session. It's Tore Anderson from Redpill Linpro. He has a noble goal: to enable IPv6 only in his data centre. It's a bit experimental, but I think he has got more data in and documented, so, please, the stage is yours.

TORE ANDERSON: My name is Tore Anderson, and, when I'm not enjoying Ljubljana, I run data centres and stuff in them, amongst it the NAT work.

And a multi-tenant data centre structure typically looks like this today for most of our customers. They have a bunch of servers that do various things like database services and file services and all sorts of stuff, depending on what the customer is actually running, and there is usually some kind of a public front end that actually responds to all of the requests from the Internet, and that might be a load balancer, some type of cache server, or something like that.

And while the structure is like a multi-tenant data centre where every customer has their own set of VLANs, we are actually the ones that are operating the applications on the customers' servers, and handle all of the dependencies in between them, and so on, because we deliver the application servers.

And honestly, today, most of the customers are running on IPv4 only, like this. There is absolutely no IPv6 anywhere. And I have to say that's great; it's a very comfortable place to be. It's a single stack to care about. There is no complexity related to the dual-stack thing, and if it was up to me, we would continue to do this forever, and everybody else would continue to do this forever as well, and it would be great. However, there are some problems.

One problem is that the customers, or actually new customers, potential customers, are starting to actually mention IPv6, and we see that especially large organisations, such as governmental organisations, give us these big documents that we need to fill out, and they are suddenly starting to have a tick box: do you do v6 or not? And we have got one from the Swedish railway system which had that tick box, and it was one of those tick boxes where, if you didn't tick it, you could just forget about sending in the entire tender. And that is a problem we need to solve because, obviously, we want their business.

Another problem is that we will run out of IPv4 addresses at some point. And currently, all the servers need an IPv4 address, and, depending on the customer, a large customer for us might have, say, 300 servers and a small one would have one or two, but all of our new customers require their own set of IPv4 addresses, and it should come as no surprise to you that, at some point, we won't be able to get those addresses, or, at least, we don't want to bet on the fact that they will be available on the market for all future.

So, how do we actually start attacking those two separate problems? Well, the first one I mentioned, about customers requesting IPv6, is the most urgent one, because that's money tomorrow and everybody understands money tomorrow. And so, the easiest way to do that, the thing that requires the least amount of work, is to stick some sort of a proxy or NAT system in front of all the servers, leave everything IPv4 only and just continue running everything on IPv4 with some translation in front. And this is an alternative that many people did on World IPv6 Day last year. It kind of works. We get to tick our box. We get to actually provide, or appear to the Internet as though we have, full dual-stack and everything is very nice. And as long as there are very few IPv6 users on the Internet, it's not really a problem either.

However, since we are actually fitting the entire IPv6 address space into a small IPv4 prefix, this device that does the translation needs to be a stateful device. And when it's a stateful device, it needs to see traffic in both directions, which puts some kind of a constraint on where you can place it in the network, because you can't just use normal routing unless it's very, very close to the servers, and, also, it's a type of device that, with increased load and increased session count and increased session initiation rate, will struggle more and more. So basically, it's a CGN in the data centre instead of in the ISP network. So, for now, it works well enough. For later, when most of the traffic might be IPv6, it will not be very nice.

And we have done nothing about the other problem; we are still running out of v4 addresses. So, forget about this one and take the next logical step, if we want to drag our feet and do as little as possible: that would be to dual-stack the front-end device only, and run everything else on v4. At that point, we don't have to do the translation that loses the information about who the end user is, and there is no state that needs to be kept outside of the server, which already keeps the state. And, yeah, it's kind of a better quality approach for the entire service.

However, as Geoff pointed out, dual-stack is hard. It's complex. And I know that my colleagues that are not network people, they are server people and application people, they don't like dual-stack. If they come to me and ask me for a prefix for their new server installation and I give them: here is your v4 prefix, here is your v6 prefix, nine out of ten times they will never use the v6 prefix for anything. Because there are firewall rules that need to be duplicated in v6. You need to do monitoring on v6 and v4. You need to troubleshoot v6 and v4. Everything needs to be duplicated, and it's a lot of complexity, and it's complexity that we don't want to have, especially not the people that don't really care about networking as such.

So if the next logical step was to dual-stack part of the service, then you might think about dual-stacking the entire service, which is what the IETF recommended. But if you have complexity dual-stacking part of the service, then dual-stacking the entire thing is an immense amount of complexity and an immense amount of things that can go wrong. And we have still not done anything about our dependence on IPv4 addresses. So, this is not a complete solution for us either.

What we then can do is start single-stacking the non-publicly available parts of the service, and just single-stack them on IPv6 instead of IPv4. That works too. But you still have dual-stack, and the complexity related to dual-stack, on the public front end of the service. So, we are not very happy about this either, although I think that, if I actually gave my colleagues a v6 prefix and a small v4 prefix that could only cover their public front ends, they would use the v6 prefix and the v4 prefix, but they wouldn't be happy about it.

But you can take this one step further and do IPv6 only on the entire service and application stack. Now we can translate the v4 clients outside of everything. And that is starting to look more interesting to us, because at that point we have single stack everywhere the server guys ever log in, and that reduces complexity. Now we only need to care about one single protocol. Simple. And the v4 translation happens outside of the entire stack. Another nice thing about it is that we only need to translate the individual service addresses; instead of assigning a prefix that maybe has some overhead in the amount of addresses that get assigned, we don't need to route those addresses around in our network, and so on. So, this is actually starting to approach the complexity levels of single-stack v4. And when you are translating from the v4 Internet, it's a much simpler task than translating from the v6 Internet, because the v4 Internet fits snugly into a tiny little v6 prefix, so you can do it statelessly, per packet. So the translators don't need to see the packets going in both directions. In fact, if you had a flow containing 100 packets, 50 in each direction, you could actually run them across 100 different translators with no shared state, and it would still work.

So, at that point, we are almost done with all the transitioning, but I would like to remind everybody about where we actually want to go when everything is said and done, and that will be to have only v6, in some far future, but at least that's where we want to go. So, when thinking about deploying v6, I think it's important not to forget where you are actually going and where you want to go, because then you can make some intelligent decisions about how you are going to get there.

So, to sum up: if you are dragging your feet and doing as little as you can every time you must, you are maybe not doing so much every time, but you are doing things many times. And in data-centre environments, you really don't want to make changes to something that works.

So, which one of these is actually possible today? Almost all of them, except the IPv6-only one. Because if we look into our own stats, and we can also see it in Google's stats, there is only a fraction of a percent of end users on the Internet today that can access an IPv6-only website, at least using native v6; perhaps you add those tunnel things, but we really don't want to go there.

So, what we want to do is just take a short-cut, skip all the intermediate steps and go straight to as far as we can possibly go. And that's basically a summary of what I have been talking about. We just don't want the complexity related to dual-stack. If we can avoid it, we want to avoid it. It's too difficult to do properly, especially when you are not a network guy, which is most of my colleagues. And also, we need to conserve the IPv4 addresses, and the only way we can conserve the IPv4 addresses is to stop using them for things that they are not essential for.

So, that's a bit about why I want to do IPv6 only in the data centre, and why we decided on the translation approach. And I am going to talk a little bit about exactly the technology that we are going to use for it.

And it's a published RFC, it's implemented in several production-level images from several different vendors, and it's called stateless IP/ICMP translation, SIIT. What it does is take the entire IPv4 address space of 32 bits and map it into a prefix in IPv6. So you have a one-to-one mapping between every possible IPv4 address and its own IPv6 address. So you can see, if you have the address 0.0.0.0 in IPv4, you have some sort of a prefix and then 0.0.0.0 in the same bits, so you can actually write them with an IPv4 address at the end there if you really want to. And all it does is translate the IP headers and the ICMP headers.
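As a rough illustration of that one-to-one mapping, a short Python sketch; the documentation prefix 2001:db8:64::/96 stands in here for whatever translation prefix is actually deployed:

import ipaddress

PREFIX = ipaddress.IPv6Network("2001:db8:64::/96")  # example translation prefix

def map_v4_to_v6(v4):
    """Embed the 32 bits of an IPv4 address in the low bits of the prefix."""
    return ipaddress.IPv6Address(
        int(PREFIX.network_address) | int(ipaddress.IPv4Address(v4)))

def extract_v4(v6):
    """Recover the original IPv4 address from a translated IPv6 address."""
    addr = ipaddress.IPv6Address(v6)
    if addr not in PREFIX:
        raise ValueError("not an IPv4-translated address")
    return ipaddress.IPv4Address(int(addr) & 0xFFFFFFFF)

print(map_v4_to_v6("198.51.0.10"))        # 2001:db8:64::c633:a
print(extract_v4("2001:db8:64::c633:a"))  # 198.51.0.10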

So what you have to do to make this work is that, on the server, you decide which of your IPv4 addresses this IPv6-only server is going to be available on, let's say here 198.51.0.10; that's the public IPv4 address you'll put into DNS in the A record for the service. Then you generate an IPv4-translatable address that includes that IPv4 address, and you configure it on the server so that it responds to it and binds to the socket for that address. And, from the server's point of view and from the IPv6 network's point of view, this is just a regular IPv6 address; there is nothing special about it really. So, you have to route this to the server, and, typically, this address would not be the primary address of the server, that would probably be one from some /64 or something, so it would be a secondary address, and you need to route this IPv6 address, probably using its primary v6 address as the next hop. But other than that, it's just a normal IPv6 address.

So, when a client connects, it looks up in DNS dub-dub-dub dot whatever dot org. It gets back the IPv4 address and tries to connect to it. This is just a normal IPv4 address as well; it just happens to be routed to some sort of translator. The translator doesn't necessarily have to be a separate box; it could be implemented in your border router, for that matter, or anywhere else; you just route the packets there somehow, using OSPF or whatever you prefer. From the client's point of view, this is just a normal IPv4 transaction. And the packet ends up at the translator, which then translates the IP header, adds a pre-defined 96 bits to both the source and the destination address of the IP packet and then routes it back into the IPv6 network. And like I said, this is just a regular IPv6 packet now. There is nothing special about it, so it just follows completely normal routing to the server. And everything inside of the IP packet is unmodified. So the TCP, except for the checksum, which is recalculated, the HTTP and everything inside of it is completely unmodified. And the server's software sees this as a regular IPv6 connection coming from this address. And the good thing about this, which you don't get if you are translating the other way, is that these 32 bits, which are valuable for content operators to know who the client is and which localised, geolocated ads to serve to them, are still there. So you can still geolocate the client on the server. But the server's software doesn't need to know anything about it; it just responds to it. Completely normal IPv6, or indistinguishable from a regular IPv6 connection from a native client.

So, it responds normally. Nothing special about it. And the prefix that is being used for translation, that the IPv4 Internet is mapped into, is 2001:db8 something; it's also routed back to a translator again, using completely regular routing methods, nothing special here at all. It's just a 96-bit prefix. It doesn't even have to be in your own ASN if you don't want it to be. You could probably buy this as a service, maybe.

I guess everybody can guess what happens next. The translator strips those 96 bits, turns it into an IPv4 header and routes it back to the end user. So, again, the end user sees: oh, this is an IPv4 connection, completely regular, nothing special about it. The server sees an IPv6 connection, nothing special about it, except that you can teach it that this particular prefix represents an IPv4-translated client, so you can geolocate or do specific logging or whatever you want. And as you can see, very intentionally, there are two different translators. There is no shared state between these two translators, so the SYN, SYN-ACK, ACK could just as well have gone through a third one, and if a translator fails, there is no state that is lost, there is no connection that is dropped as a result of this. It would be exactly the same as if a router failed and there was a regular IP rerouting event. And you could also, if you want, for performance reasons, put several of them next to each other and use equal-cost multipath to load-balance across them. So there is absolutely no state, and that's a very, very good thing for performance and for reliability.

So, I think this is a very, very simple approach; the technology is really, really simple. And since it's stateless, it's much easier to work with than a stateful NAT. You don't have to care about, you know, the amount of connections per second. A SYN flood is irrelevant here, as long as the total bandwidth and the packets per second are within what the router can handle anyway, which is usually line rate if you are buying reasonably solid gear. And there is no loss of information about the IPv4 clients. And all my colleagues that really don't like dual-stack don't have to think about dual-stack. They have to learn IPv6, but then again they will have to learn IPv6 anyway, so I might as well force them into it.

Of course, this is translation, so it won't work with all sorts of protocols. I can easily imagine that SIP, for instance, and FTP won't work too well, because they carry around the IP addresses inside the application payload and that won't work. So things that generally don't work across NAT44 will probably not work through stateless IP/ICMP translation either. However, as it happens, most of the Internet is HTTP anyway, except for BitTorrent, which we don't see, and all of our customers, not all of them but most of them, are running some sort of HTTP-based service, and this will work wonderfully with HTTP, because HTTP already works through IPv4 NATs.

Other than that, the servers and the server software need to understand IPv6 and IPv6 sockets, and we are mostly working with open source software like Apache and whatever, and that supports it well and has done so for a long time, so there are no problems there really.

And the only thing that I see as an unnecessary complexity in this solution is that the service needs to be configured with these IPv4-translatable addresses, because those addresses need to be routed separately from the primary address of the servers, which usually would mean a static route on the immediate next-hop router which needs to be imported into your IPv6 IGP, and the system administrators must actually configure it on the servers, and if they are using some sort of fancy HA failover mechanism, then those addresses need to fail over with the primary ones, and you need to make sure that the firewall rules for the primary service addresses are also in place for the translatable service addresses.

But, if you have a vendor that implements a small extension to the address mapping algorithm, which some do, not all of them, then, instead of configuring all of these static routes and extra addresses on the servers, you can add a static mapping into the translator itself and say that, okay, IPv4 address so-and-so maps to IPv6 address so-and-so. So then the workflow will be for my colleagues to say: okay, we have a new public service listening on IPv6 address this, can you please give us an IPv4 front end for it? And I'll just add that mapping to the routers. And I have been testing this with the Cisco ASR 1000, because I got to borrow one from Cisco for this purpose, and they are saying it's on the road map for that one, but of course I wouldn't recommend people to buy based on futures. There are also other vendors that have this static mapping today.

So, I have some bonus slides, which I could skip if necessary. There are two things about MTU that you need to be aware of if you start to do this. One is that the IPv6 header is larger than the IPv4 header. So if a full-sized 1,500-byte IPv4 packet comes in from an end user on the Internet, it won't fit into a 1,500-byte IPv6 packet. There are two ways of avoiding this. One is to have a larger MTU running in your data centre. Another is to use TCP, because the server software sees that, okay, I am listening on a socket that has a 1,500-byte MTU, so I am going to advertise a TCP maximum segment size which accounts for the overhead the IPv6 header would incur. So, the maximum packet you'll see from the end user is then actually 1,480 bytes, which exactly fits into a 1,500-byte IPv6 packet.
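The arithmetic behind that TCP trick, worked through as a small sketch (the header sizes are the fixed IPv4, IPv6 and TCP minimums):

# Header sizes in bytes.
IPV4_HDR, IPV6_HDR, TCP_HDR = 20, 40, 20
SERVER_MTU = 1500                      # the server's IPv6 interface MTU

# The server advertises an MSS that fits its own 1,500-byte IPv6 MTU.
mss = SERVER_MTU - IPV6_HDR - TCP_HDR  # 1440 bytes of TCP payload

# The biggest TCP segment the IPv4 client will now send back:
v4_packet = mss + TCP_HDR + IPV4_HDR   # a 1,480-byte IPv4 packet
# After translation, the IPv4 header is swapped for an IPv6 header:
v6_packet = mss + TCP_HDR + IPV6_HDR   # exactly 1,500 bytes again

assert v4_packet == 1480 and v6_packet == SERVER_MTU
print(f"MSS {mss}: {v4_packet}-byte IPv4 packets translate to "
      f"{v6_packet}-byte IPv6 packets")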

There is also the requirement, if you are not using TCP with this trick, for the translator to do path MTU discovery, to participate in that and send out ICMPv4 'need to fragment' messages. And that's something to be aware of as well. But I am not sure if we are going to actually bother with increasing the MTU in our data centre, because, like I said, most of our traffic is HTTP anyway, which runs on TCP, and that seems to work.

The other issue is that an IPv4 link can have a very low MTU, lower than the minimum allowed in IPv6. So, what happens if the server sends a big packet to the v4 destination and it comes back from some v4 router saying 'need to frag, the MTU on this link is 100 bytes'? The server is not then required to respond to that by actually lowering its packet size to 100, because in IPv6, 1,280 bytes is the lowest it is supposed to accept. However, it is supposed to add a fragment header, so that the DF flag in the resulting IPv4 packet is cleared, so the IPv4 router can actually fragment it. This, I found out, is a bit buggy in Linux at the moment, so, for our test systems, this doesn't work completely well with IPv4 clients behind very small MTU paths yet. However, we don't really see that very often anyway, or actually haven't seen it at all, but it's a theoretical problem that you should be aware of.

The other one is the actual config required on the Cisco ASR to do it. I am not going to go through it, but if you want to try it and you have a Cisco ASR, you can download the slides and look through it. However, I just want to point out that this is the entire config, and the blue lines are not related to translation at all; that's just basic IPv4 and IPv6 connectivity. So, the only config you need to set this up, or I needed to set this up, and this is running in front of our own corporate website today, is 7 lines of config. So it's really, really simple, and if you get the static mapping, then you would get another line of config for each mapping. So...

That's basically it. So thanks for listening. I hope you found it interesting, and, like I said, I am running this now in front of my own corporate web page, or our corporate website, and we have had no reported issues as a result of it. It's not a very high traffic website, but I can assure you that I would have heard from our marketing staff if there were some problems. They would have gone bananas, but so far so good, and it's been running for several months now, so no problems. So that's it. Thank you.

(Applause)

SANDER STEFFAN: I see Geoff walking to the microphone.

GEOFF HUSTON: I actually have a really quick question about this last bit about MTU, and you said you are running it now. Our experience at APNIC has been that we have deliberately knocked the MTU down on our dual-stack servers to 1280 to try and avoid most of these issues. Your slides suggested that if you have your data centre running at 1520, you'll also avoid it. So my question is: you are running it now on your corporate web server; what's its MTU?

TORE ANDERSON: It's 1,500.

GEOFF HUSTON: So you are running right at 1,500?

TORE ANDERSON: That we did on purpose; we deliberately ran it at 1,500 in order to actually, you know, bring any problems related to MTU out into the open so we can actually tackle them instead of sweeping them under the rug, so to speak. So when I say you can increase the MTU in your data centre, I'm talking about all the links going from the translator to the server. I would not increase the MTU on the server. It should receive those over-sized packets, but it should not necessarily send them, for the translator not to run into problems with translating a big IPv4 packet into a too-big IPv6 packet.

GEOFF HUSTON: Thank you.

JAN ZORZ: Can you go back to the network picture where you have IPv6 with a small NAT thing? This one. Okay. We have had this discussion many times. I think you were not very clear about where you actually save the public IPv4 addresses, because you need to map them on the IPv4-only side to the end site. So you must give every server one public address, is that correct? So where exactly do you use fewer IPv4 addresses?

TORE ANDERSON: In this drawing, this hypothetical customer has five devices: two web servers, a file server, a database server and a load balancer. Add some space for expansion and that would be a /28 on this LAN. However, the customer just has a website, www.whatever, which is living on the load balancer device here. So if all of these servers communicate internally using v6 only, the only IPv4 address we are going to be using for external communication is the one that would have been assigned to this interface right here on the load balancer. So, for this customer, we would likely go from using 16 IPv4 addresses to using only one. And, of course, with a large customer, the larger they are, the more servers there are. However, the number of servers doesn't necessarily have anything to do with the number of A records published for the site. So if you go and check Google, they have, I think, three A records, or maybe ten, or something like that, but I can assure you that they have a lot more servers. And it would be the same for us as well. So our largest customer so far, which has around 300 servers and a /23, which is 512 addresses, would probably get by with maybe 10 public IPv4 addresses if they could remove them from all the servers.

BENEDIKT STOCKEBRAND: Two questions. The first one is: how does the setup using SIIT compare to running an application-level proxy, like taking a standard PC and whatever runs on it, if we are talking HTTP? Especially because doing things in hardware tends to be slightly more expensive, especially if you buy from certain vendors.

TORE ANDERSON: I would agree if you were buying a router specifically for this purpose. However, if you are doing Squid, or a TCP proxy or an HTTP proxy, then you are keeping state. And we have a large number of individual customers, and what we certainly don't want is one proxy or one stateful device sitting in front of several customers, so that if one customer gets hit by a SYN flood it knocks out access for all of the other customers as well. So we really don't want shared state in the network. And also, I find, or I believe, that the most logical location for the translation service is on the border routers, so that when the traffic comes in from the IPv4 Internet to your network, it is immediately translated to IPv6 and passed on. And a router would likely be able to do this at line rate, because it's per packet, which means that you are not adding any traffic trampolines or anything like that, and you have the same capacity available as you would need to have anyway for your regular routing. So...

AUDIENCE SPEAKER: Basically the idea is, if I pay, like, whatever, 100,000 for one or two of those translation machines and for the same money could get, what, two dozen standard PC-based machines to do the job in software, then doing things in software tends to be slightly more flexible and tends to be cheaper in a number of ways. Not necessarily in your scenario, but it's an option in some other cases as well. That's why I have been asking.

But I have got another question. Have you ever tried what happens if you flood an SIIT gateway with serious ICMP traffic, basically pings or whatever? Because if they implement the regular packet translation in hardware but do the ICMP translation in software, you might be pretty vulnerable to a denial-of-service attack just using flood pinging, for example.

TORE ANDERSON: I have not tried that; that would be implementation-specific, as you pointed out. The routers we use in our network, which I hope will actually get support for doing this, cannot do anything like that in software. They don't touch the transit traffic in software at all. So it all needs to happen in hardware, just like IP routing, or not at all. It's the same, you know, as with the earlier routers that could do v4 in hardware but v6 in software: you just don't want to go there, and if it's like that, you don't deploy.

AUDIENCE SPEAKER: That's what I thought, that this might be a problem you could have run into and had some information about. Thank you.

CHRIS BUCKRIDGE: We have got a Jabber comment. It's from [] Lassa Ligard from IBM, and he says: we have been running the MTU MSS adjust and clear DF trick for about ten years between DC islands over GRE tunnels. We have had no issues on this account.
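
For readers who haven't seen it, the "MSS adjust" part of the trick the commenter mentions is just clamping the TCP MSS so segments fit the tunnel path without relying on fragmentation or PMTU discovery; here is a rough sketch of the arithmetic, assuming a plain IPv4 GRE tunnel with 24 bytes of overhead (an assumption for illustration, not something stated in the comment).

```python
# Rough sketch of TCP MSS clamping: the router rewrites the MSS option in SYN
# packets so TCP segments fit through the tunnel in a single packet.
IPV4_HEADER = 20
TCP_HEADER = 20
GRE_OVERHEAD = 24          # assumed: 20-byte outer IPv4 header + 4-byte GRE header

def clamped_mss(link_mtu: int, tunnel_overhead: int = GRE_OVERHEAD) -> int:
    """Largest TCP payload that fits in one packet through the tunnel."""
    return link_mtu - tunnel_overhead - IPV4_HEADER - TCP_HEADER

print(clamped_mss(1500))   # -> 1436, the value the SYN's MSS option would be clamped to
```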

SANDER STEFFAN: So, basically, in this setup, you would prefer to have as much of the traffic as possible be native v6, obviously?

TORE ANDERSON: Obviously. Compared to the traffic you would translate from v4, the expectation is just that the traffic across the translation device will diminish over time, and, at some point, when maybe we are almost done with the IPv6 transition, we might decide that just keeping this running is not worth the cost, and then we can turn it off completely. But to begin with, we would actually have most of the traffic going through the translation, yes.

SANDER STEFFAN: So if we combine this with Geoff's presentation, then actually the clients will select which protocol they will use based on the round-trip time and the latency.

GEOFF HUSTON: If you are running Mac OS, which we all are.

SANDER STEFFAN: Depending on operating system, browser, etc.
So, what's the latency difference between v4 and v6 in this setup?

TORE ANDERSON: Well, that depends on where the translation service is located. If it happens in the same chips that are forwarding the untranslated flows as well, then I would expect there to be no distinguishable difference. However, if we have a translation device in Oslo only, for instance, if our Swedish one is down and the traffic comes in from a Swedish v4 client, goes to Oslo to be translated, goes back to Sweden to reach a web server in Sweden, and the same way going back, then... But, as I said, I would actually want to have it on all my border routers, all my egress and ingress points into my network, because then I can ensure that there are no traffic trampolines and the latency should be the same.

SANDER STEFFAN: But that does mean that the level of v4 traffic stays higher than it has to be, basically, for a longer time, as long as implementations like Mac OS use this metric.

TORE ANDERSON: I could, of course, stick the translator behind some DSL line if I wanted to increase the latency of v4, but I don't think my customers would agree with that, so you probably want to be providing as good a service as possible on v4 and v6 at the same time.

CHAIR: So we could add a latency configuration option into the NAT so you can tweak it. So, thank you. If there are no other questions...

Tore, I think this is really a good experiment, and I hear you are running your web server over this setup already, right? If you plan to add more, higher-traffic generators to this, can you please document what happens with it, because I think this would be really, really good output for the community to decide if this is the way of doing it in a production environment.

TORE ANDERSON: There are already some, I think, Chinese research networks doing something like this; they are calling it IVI for some reason, but it's the same approach, and they are saying that they have some websites with several hundred megabits of traffic going to them across the translators, and they are saying no problem, it's fine. And it should be fine, because it's stateless, so we are basically back to forwarding and translation, almost like regular IP routing; if it's done in hardware in the same fast path as IP routing, it should really be just as fast.

CHAIR: It's the word "should" that bothers me a bit. So, thank you.

(Applause)

And we have reached the coffee break, so we declare this session closed. Thank you for attending and welcome back in half an hour.

(Coffee break)