Archives

These are unedited transcripts and may contain errors.

EIX Working Group session on Wednesday, 18 April, 2012, 11 a.m. to 12:30 p.m.:

FEARGHAS McKAY: Good morning, everybody. I'd like to welcome you all to EIX at RIPE 64 in Ljubljana. We have got quite a big number of talks today, mostly on the theme of interconnection, so we will get going. First of all some housekeeping; we have got scribes from the NCC, Suzanne and Amanda, and if you are watching online, Suzanne is our Jabber scribe and will relay questions.

Does anybody have anything they would like to add to the agenda? No. OK. A couple of housekeeping things: If you are going to the RIPE NCC general meeting this afternoon, you need to register now, because there won't be time after the NCC services. So, please make sure you are registered even if you have to go out now.

And there is the possibility of a tour to the local Internet Exchange. If people are interested in that, there will be more details when they do their presentation at the end of the meeting; we are thinking of probably going after the sessions on Thursday evening, so about 5:30, and it's about 20 minutes away by bus.

On to Remco, he is going to talk about the cost of interconnecting in different places and ways.

REMCO VAN MOOK: Good morning again. I think this is a new record for me doing three presentations in a row, but there you go. So this morning, I am going to be talking about interconnection markets to you. Just briefly, this was initially a half-hour presentation, I have got 15 minutes, so I am going to put it on Kurtis mode if necessary. What works and what doesn't? A quick introduction: what is Equinix, the company that pays my pay cheques. This is not supposed to be the marketing slide but I am making it one: five continents, 12 countries, 99 data centres and a whole load of Internet Exchanges. About me: I ended up at university once, where I built a network for university students, then I ended up doing intranet stuff for music festivals, then I started up an Internet Exchange, a hosting company, I started coming to these meetings here, then Equinix bought my company and they liked my logo so much that they rebranded theirs, and I have done a couple of IETFs, a couple of RFCs, and I am on the RIPE NCC Executive Board.

Right. I thought it was fun to start with the price of bandwidth in bulk, per meg; I am sure you are aware of that sort of entity, as people in this room. These are prices from early to mid last year, I couldn't be arsed to do new ones but they haven't really changed much. How does that start? Well, a cross-connect in the side of a data centre, that is about one cent per meg. Internet Exchange traffic, the reason why we are all here this morning, is about 25 cents, keeping in mind that both sides of that connection pay, so it's actually 50. Backbone traffic in continental Europe is about 50 cents, transatlantic traffic sells at about a dollar, Internet transit wholesale, two, although there are people from whom you can buy it for 90 cents, is that right, Andy?

ANDY DAVIDSON: Yes.

REMCO VAN MOOK: There you go. Internet transit in retail, there are some unlucky sods still paying 15. Then as a broadband consumer, if you read your acceptable use policy you will find that is about 50 dollars. National Ethernet services, about 180. 3G mobile data, anyone want to have a guess? 11,400. And it gets funnier after this. GSM voice calls? Just under half a million. It gets better. 3G mobile data roaming in other countries that your carrier has nice agreements with, just over 800,000. And when we proceed to other roaming, and this is all based on European carriers so if you are an American you might be even unluckier than this, just over 3 million. It gets better still: what you will find out is that GSM voice calls while you are roaming are even more expensive, so it always pays off to use Skype instead of roaming voice calls. I like this one. SMS text messages, that you pay for by the 140-byte increment, on your own domestic network, those are a cool 210 million. And anyone want to have a guess what this is? Here we go, it's almost 1.2 billion dollars per meg per month. And what is important to realise is that in order to be able to sell this, what you need to buy is some of that and maybe some of this and that and one or two of these, maybe. And you don't need to be a financial genius to see there is probably some money to be made there.
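
For reference, a minimal sketch in Python of the price points Remco rattles off and the markup they imply; the figures are the rounded numbers quoted in the talk, and the markup line is just the arithmetic, not anything from the slides.

    # Approximate prices quoted above, in USD per megabit per second per month.
    price_per_mbps_month = {
        "cross-connect": 0.01,
        "IXP traffic (one side)": 0.25,
        "European backbone": 0.50,
        "transatlantic": 1.00,
        "wholesale transit": 2.00,
        "retail transit": 15.00,
        "consumer broadband (AUP rate)": 50.00,
        "national Ethernet": 180.00,
        "3G mobile data": 11_400.00,
        "domestic SMS": 210_000_000.00,
    }

    # The "money to be made" point: what you sell mobile data for versus what
    # the underlying wholesale transit costs.
    markup = price_per_mbps_month["3G mobile data"] / price_per_mbps_month["wholesale transit"]
    print(f"3G mobile data resells wholesale transit at roughly {markup:,.0f}x")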

By the way, the carrier pigeon, after careful consideration with Fearghas, ends up at about 2,500 dollars per meg per month; these tend to be expensive and their lifetime is quite limited, with a low number of round trips.

So I am talking about successes of interconnection markets. Let the buyer beware, because one man's success is another man's failure; if you set out to conquer the world and you only conquer one country, that is a sort of success but not what you set out to do. It's better to be relevant somewhere than nowhere. Interconnection markets, and building them from scratch, is hard; it takes a lot of time and resources and a lot of cooperation between a lot of parties.

So, what do I think is a healthy interconnection market? A healthy interconnection market has sufficient critical mass to be self perpetuating; in other words, the number of parties in that market and the volume of traffic exchanged in that market help perpetuate that market, so it will draw new parties and keep traffic going up and to the right. So you have at least regional significance and it minimises the duplication of effort, which means that everybody involved doesn't have to build extra network resources that they normally wouldn't need. So why do networks do this kind of thing and expand their footprint? These are the three basic reasons I could come up with; if you have a fourth or fifth reason I would be very keen to hear it. Either you want to increase revenue, or reduce cost, or improve quality. And the interesting thing is that within larger companies these three decisions are made by different people for different reasons, and they are in different parts of the organisation; sometimes they are not even in the same country. So, where do we have working interconnection markets on a global scale:

Singapore, Silicon Valley, Washington, New York, Amsterdam, Moscow even, surprising for some, and on a more limited scale Stockholm, Prague, Budapest, Zurich, Madrid; when I was doing this presentation in Dublin before, I included Dublin out of courtesy. But really, if you look at where it is successful, this is a pie chart that is a breakdown of the Internet traffic of EURO-IX members, and the aggregate of traffic was something like 8 terabits, and the breakdown of that is quite interesting. So, of all of that traffic, this is DE-CIX, AMS-IX, LINX, Moscow, and then you go to Stockholm, Warsaw, Prague, it goes along, and the other 120 make up this part. So really, where are they successful and how do you quantify that?

So take Frankfurt, for example: is Frankfurt a key interconnection market and where does that apply? So here is Frankfurt. I put a couple of pins in places where there are lots of networks and lots of data centres. So where is that interconnection density? It's not everywhere in Frankfurt; actually it's only in two places. Amsterdam. Amsterdam has a pile of data centres and a pile of locations, and if you look at the interconnection, it's only in two locations.

So why is it only there and why did it come about that way? It's a combination of density and cost, which is closely related to density. You need to have a regulatory framework, you need to have availability and scalability of your options. And you are talking about a consolidated effort; in other words, people have to cooperate with each other. So in that sense, you are talking about a perfect storm.

So density, well, I am going to skip through this for the sake of time. You can probably understand why density is relevant.

Cost. This is an important one; also for non-financial geniuses, it has to make sense. So, I mean, ask yourself the question: why do more people buy connectivity from a large city to a small one than the other way around? If you are in a large city there is not much sense in going to a small one. Total cost of ownership wins all the time.

Regulatory framework, what does that mean? Do you regulate telecoms? There are some countries out there that don't have that yet, quite a few. And that is not just about laws; that is also about arbitrary rules that you set for interconnecting your networks. I am sure you recognise a few of these. There were some Internet Exchanges at some point in time that had a rule that only ISPs were allowed to connect to them; I am not going to name any names. There are other Internet Exchanges that say, well, you can connect to our Internet Exchange, but as a condition we require you to sort of interconnect with everybody connected. Or, more blatantly, some rather sizable exchange in Germany had the rule for a while of, we are just doing this to screw up some big telco, and again I am not going to name any names.

So, scalability. It's no good if it doesn't work; it needs to be predictable and sustainable, so you need to be able to keep that base in your network in place for a while. You need to be able to grow it, and if you think that you can skimp on any of these things, retrofitting any of this later is very expensive and always more risky. So you need space, power, local connectivity, eyeball providers, national and international carriers, and all that together comes to a single interconnection zone. This is not about technology much, I am sorry, and it's also not even about drinking beer much, which is quite disappointing; there is no magic bullet and it's just a pile of hard work. With that, I thank you for your time.

(Applause)

FEARGHAS McKAY: Thank you, Remco. Any questions? Feedback? Requests for more pricing points? No. Thanks again, Remco. Next up is Arnold. Just a reminder for those who came in late: if you haven't registered for the GM this afternoon, you should do it soon, because if you do it after NCC Services there won't be time. Arnold is going to talk about interconnecting IXPs.

ARNOLD NIPPER: Thank you very much, Remco, for this really interesting presentation, and I guess you will see a lot of what Remco pointed out in my presentation as well.

So, my title is interconnecting IXPs, and that is what Remco was talking about: where all the networks meet, the pros and cons. And my agenda is, first I do a little bit of the motivation and then define what an IXP is, and I give examples of interconnection; I also touch on reseller programmes for Internet Exchange points and sum up the presentation.

Where did this come from? Reseller programmes have, in the last ten or twelve months or so, become a hype at Internet Exchange points. Internet Exchange points are approached by their current customers and by other parties to do these things, and then we had a panel discussion at the last EURO-IX forum in October in Lyon, where we were discussing interconnecting IXPs, and in general it's just to get a discussion started on better understanding what an IXP is and what an IXP should not be.

And before talking about interconnecting IXPs, I guess we really have to know what an IXP is, especially in contrast to a carrier or to an ISP. So from a technical point of view, the definition of an IXP is just a layer 2 or a distributed layer 2 infrastructure under single administrative and technical control. And the purpose is to facilitate settlement-free peering between ISPs, where ISP really means an organisation in the sense of one which is providing Internet services. So years ago it was quite clear what an Internet service provider is; today, it's not really that clear. For example, at DE-CIX we have a simple rule: a customer has to be able to run BGP, and that is all.

And last but not least, the intention of an IXP was really to increase quality and to save costs. Just to recap: until, I guess, the early '90s, the Internet was hierarchical and packets between two networks had to travel across the NSF backbone, and only with BGP was it possible to have any topology.

And next, when we started to build IXPs, it was a simple switch. It was only one switch. And then the next thing you think is, OK, what happens if the switch breaks, then all interconnection between my customers is broken, and then you start to build in resilience, and then you put another switch in another building nearby and then you start to spread out. And the really interesting question is the diameter of an IXP: how far in space might an IXP grow? And I guess the more natural definition might be if you just look at the surroundings or at the city where the IXP is, and then you come up with, OK, an IXP may be bound to where the city is, and then you come up with diameters which are, of course, bound to where the IXP is; for example, in London it might be 45 kilometres, whereas in Moscow it is much larger, 50, and in Frankfurt 20. So this is a really simple approach. There might be other approaches; I suggest also taking into account the cost of fibre. Why the cost of fibre? If the IXP spreads out too wide, it has to interconnect these switches which make up the IXP, and by that it might take away business from one of its customers, from the carrier customers, which in other cases would hook up the customers from far away. So there might be, even beside fibre costs, other metrics which make up the notion of what the diameter of an IXP is, and unless we really understand how far an IXP may grow, we are not really able to say when we are interconnecting IXPs.

The next thing is to talk about the critical mass which Remco also mentioned. One thing is, you can easily calculate the value, and I have given an example of how you do it: you say the next best alternative to peering is to buy transit, and then you just take the cost of the IXP and then you see how much do I save when doing peering, and then you compare the costs. And of course, the value-to-cost ratio has to be greater than one to save money.
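
A minimal sketch in Python of the calculation Arnold describes, comparing the avoided transit cost against the cost of the exchange port; all the numbers below are illustrative placeholders, not figures from the slides.

    # Value of peering = the transit you no longer have to buy for that traffic.
    peered_traffic_mbps = 2_000        # assumed average traffic moved to peering
    transit_usd_per_mbps = 2.0         # assumed wholesale transit price per month
    ixp_cost_per_month = 1_500         # assumed port fee + cross-connect + colo share

    value = peered_traffic_mbps * transit_usd_per_mbps
    ratio = value / ixp_cost_per_month
    print(f"value/cost ratio = {ratio:.1f} -> {'peering saves money' if ratio > 1 else 'transit is cheaper'}")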

Gravity means not only the number of customers; you also have to look at traffic, for example. And what I have also done is to put into relation the amount of traffic to the number of customers, and then you also get some figures like these here, which give an indication of, I will not say importance, but difference between some IXPs.

Of course, each IXP tries to increase its critical mass to attract more customers, and as such, there is not a single metric to measure how attractive an IXP is. It's not just the number of customers, not only the amount of traffic, but what also comes in is which networks I am able to meet, and that also does not necessarily mean the number of prefixes; some networks might be interested in getting prefixes from, let's say, Eastern Europe, others from Asia and so on. So it's not sufficient just to talk about the number of prefixes; also which type of prefixes, and perhaps also which resilience, how often you see a prefix over there, might be important.

But we might say the better the gravity, the more likely it is that you attract more customers, and I always like the example of comparing an IXP with a party. If you have the option to go to a small party or to a big party, I guess most of you will perhaps go to the big party, because you say, hey, others are going there, then it must be good, I am also going there. And this definitely holds true: the chance to meet interesting people is, of course, much higher if you go to a big party, you see.

So, this is defining the diameter of an IXP, the gravity of an IXP, how an IXP wants to be attractive. Next, we come to how to technically interconnect IXPs. So, the simplest approach is: if I have an IXP over here with a switch fabric and another IXP over there, then I simply run a connection in between, connect these two switch fabrics, and then it's just one bigger IXP. So then customers on IXP 1 are able to peer with all customers at IXP 2. And if these IXPs are under different administrative control, then of course you need a clear demarcation of who is responsible for what, and this is, I guess, really the most critical and most important part. We have really seen this in the past, because an IXP forms a big local area network and if something goes wrong, it does not only impact the party who has done something wrong but it might affect everyone.

So, if you look closer into it, there are a couple of other things you can do, not only interconnecting the switch fabrics. If you want to do it in a more sophisticated way, you can use, for example, VPLS, and that gives you the possibility to hook up, let's say, a ten or 100 gigabit port and to have multiple customers on this single port, and the big advantage is you may run in a big pipe and you do not have to single out each customer on a different port.

Another solution is more classical: you also come in with a big pipe, but what you do is single out each VLAN, which means you loop it once through your switch fabric and use two ports to really have your customer hooked up on your switch fabric.

That does not need any fancy MPLS, VPLS stuff; you can do it with any kind of switch.

So, now we have seen the diameter of an IXP and also the technical possibilities of interconnecting IXPs, and now I would like to show you examples of interconnections. The first interconnection between IXPs is between LyonIX and TopIX, which is based in Turin in Italy. Both are, compared to, for example, AMS-IX or LINX, really small Internet Exchanges, and they are both also in, don't get me wrong, less developed regions, telecommunication-wise, and therefore they decided to interconnect these two IXPs to make it more attractive for the customers on both sides to connect. And what they simply have done is run a gigabit, or meanwhile 10 gigabit, line in between, and each customer in TopIX is able to connect to customers in LyonIX as well. And the fourth bullet: you do not have to pay extra money if you want to peer with the other parties at the other exchange.

But by doing this, both IXPs take the cost of running a gigabit interconnection in between, and then everyone might also peer with the other parties on the other exchange.

The next one is InterLAN and BalkanIX; InterLAN is in Bucharest and BalkanIX in Sofia. Between TopIX and LyonIX it was more or less an equal situation; here, it is that BalkanIX is paying for the 10 gigabit connection to Bucharest and it is also putting a switch at the InterLAN site in Bucharest.

In turn, as BalkanIX is taking all the cost for the switch and for the interconnect, the BalkanIX customers do not have to pay for InterLAN services. So if you look at traffic at InterLAN and at BalkanIX, InterLAN is by far bigger than BalkanIX, but BalkanIX of course makes its Internet Exchange more attractive by hooking up to InterLAN as well.

Next is also an interesting example; this is an interconnection between France-IX and SFINX, which are both French exchanges based in Paris, and they have the lucky situation that they are in the same co-location in Paris. What they agreed is that customers are allowed to connect to the other exchange free of charge, but only up to certain bandwidths, up to, I do not know, let's say 50 megabit peak, or 100 megabit, I do not know. And as soon as a customer needs more bandwidth on the other exchange, they have to become a regular customer on the other exchange.

This also gives value for both sets of customers, for SFINX customers as well as France-IX customers, because you say, OK, I am more related to France-IX, for example, but I also want to reach SFINX customers which are not yet on France-IX, and by having this interconnect I am able to do so. And if you really have more traffic to some customer at the other exchange, you certainly are also willing to pay more.

Now, back to reseller programmes, so what do they have to do with interconnections? Reseller programmes are more or less about trying to build an outreach for your IXP, and if you want to do so, the first target, perhaps, might be other IXPs, because at other IXPs you already find potential customers who know what peering is, they already have peering services, and therefore it might be easy just to put in a reseller programme, talk to another IXP and say, OK, let's work together and see how we can sell services or cross-sell services for each other.

The question is, if, for example, a smaller IXP wants to run this customer or reseller programme and connects to a bigger exchange, whether this is really a win-win situation. I am not really sure. Of course, the customers from the small IXP might be interested in connecting to the bigger IXP, but not the other way around, so I guess Remco also pointed out why we do not see so much connection from a big Internet Exchange or a big interconnection point to less dense interconnection points.

To sum up, what you really shouldn't do is leave your home turf. I have talked about the diameter of an IXP, and this is about what happens as soon as you really leave your diameter: for example, DE-CIX does not only run an Internet Exchange in Frankfurt, we also have Internet Exchanges in Hamburg and Dusseldorf and Munich, and what we could try to do is make up a network and interconnect all these exchange points. But by doing that we would no longer be an IXP but would turn into a carrier, you see? And just to sum up: a reseller partner programme does not really mean interconnecting IXPs but using synergies; interconnecting smaller IXPs, on the other hand, can make sense to gain more critical mass. And just to recap and to make sure: the P in IXP stands for point, and that does not necessarily mean that you run a single switch, but that you are really limited to a certain region and do not spread over the country or try to sell services worldwide. That is it.

FEARGHAS McKAY: Thank you, Arnold. Any questions?

HARALD MICHL: I have a question, and I like the last slide, because you mentioned the P, which was one thing I wanted to mention. You mentioned that you want to stay a point and you don't want to interconnect, for example, Hamburg and Frankfurt, because you would become a carrier, but what is the difference between connecting Hamburg and Frankfurt compared to other Internet Exchange points? I mean, it's two exchange points, whichever two you connect.

ARNOLD NIPPER: So, it's a good question. The difference is that you typically have someone, a third party, involved which takes care of the interconnection, you see? And perhaps you may also have the choice of different carriers which take traffic from one Internet Exchange to the other. The most important thing, when interconnecting smaller IXPs to bigger IXPs, is that you have a reselling partner relationship; for example, that the smaller exchange is responsible for all the billing, for hooking up, for supporting and so on. But what you typically do is leave the choice open: you might take carrier one, carrier two, carrier three for interconnecting from the smaller to the bigger one.

AUDIENCE SPEAKER: Suzanne has a Jabber question.

AUDIENCE SPEAKER: I have a question from Maxim on chat. He says: I have a question about page 11. The presenter mentioned that the interconnect can be made via VPLS or VLAN technologies. What RFC or standard describes VPLS interconnections?

ARNOLD NIPPER: There is no RFC.

AUDIENCE SPEAKER: David Freedman. I have been trying to respond to this. What is actually being asked is, there seems to be some confusion about the use of the term VPLS. I didn't think, or I didn't assume, that you meant in this presentation that people would do any kind of VPLS signalling between their networks. I assumed you meant that there are some VPLS instances that are interconnected, and that the end users of the exchange would just see a VLAN or a port as they normally would. Is that correct?

ARNOLD NIPPER: Yes.

DAVID FREEDMAN: I think that is the point.

KURT LINDQVIST: Just to explain, the VPLS is used for VLAN tag rewrites for the reseller; it's not the signalling on the wire towards the customer. Some exchanges use VPLS internally in the network but not towards the customer. Towards the customer it is only used for VLAN rewrites.

FEARGHAS McKAY: Thanks, Kurtis. Any more questions? Thank you very much, Arnold.

(Applause)

FEARGHAS McKAY: Next up is Kurtis from Netnod; he is going to talk to us about lessons that we have learned over the last 20-odd years.

KURTIS LINDQVIST: I work for Netnod. And I am going to skip through the first sort of ten slides very quickly, partly because Remco tried to emulate me and you wanted to get the true feeling of me presenting. Those slides are more there for background, and if you saw the history of peering in Europe I presented at RIPE in Dubai, it's actually the same story but a very condensed version.

So, the history of peering in Europe is divided into three phases: the academic years, where we first started seeing the Internet emerging in Europe and saw the first interconnects. That is less relevant for what I am going to talk about. The more interesting part is number two: what was it that drove the interconnects in Europe and the establishment of all these exchange points? And again, in the first academic years there wasn't very much structure in why the peering emerged; it basically happened as necessity arose. But in the early commercial days, what happened was you got the first peering policies being created, the first commercial service offerings, and we saw peering being used as a service differentiator in the commercial marketplace between the operators who provided the first services.

And the reason we started seeing a lot of this was that back in that day, most of the traffic being handled in Europe was actually being exchanged through the use of carriers, who most of the time were US based, excuse the phrase, and who were basically taking a lot of transit revenue out of this market. If you saw Remco's slide on the cost of bandwidth pricing, if that had been back in 1995 I think you would have added at least 10- or maybe 20-fold to those prices. And what happened was that a lot of the carriers, or the smaller emerging ISPs that came with the deregulation of the telecommunication market, didn't want to pay the old incumbents or the carriers for exchanging the traffic, and we started seeing exchange of traffic locally to avoid those charges and the lines that you had to pay for to go from one country to the other in Europe if you wanted to go to other exchange points.

We were still seeing fairly low bandwidth, so delay wasn't so much of an issue; peering was more a way of saving cost and getting that revenue to invest into your own network and services, and that drove a lot of the establishment of the IXPs in Europe, and peering in general.

When I started working in this industry, most of the traffic we generated in Europe went to the US, because that is where all the content was. But over time, with these denser interconnects, we saw a lot more content also being developed in Europe, and that made the traffic flows gradually shift to being no longer directed towards the US but to staying in country, or in the language and cultural regions where the content became available.

And you could argue that the availability of these interconnects also enabled more of the content to be useful and used in the regions.

If you look at it in a bit more graphical way, it went from this: 80% of all traffic in Europe was sent to the US. And this is data, and I know, before you laugh, yes, I left the UK and Ireland out, because we didn't have statistics for them. This is a very long time ago, ten years ago, when we made this study of where traffic went, and of the two numbers, the first number represents how much traffic stayed on the ring structures we had, we had one Scandinavian ring and one central European ring and a southern ring, so that was sort of in region, and the second was how much of the traffic stayed in that country. And you can see this is in 2001 when we made this study. This is changing from 70 to 80% to the US. And we went to the situation we have today, where we have a lot of traffic, if not most traffic, staying in Europe, and in very few high density points where we see a lot of concentration of networks and of traffic.

And the wisdom of this, as Arnold said before, is that the more parties you have that peer, the more people will join, because there is some value in being big and some value globally in that this is good.

If you go back to this, what happened in 2000, why build out all these exchange points in the late '90s? Well, we wanted to keep the traffic local because it helped the content develop and it helped us not to send the traffic over those expensive circuits and rings all around Europe. And while it's kind of hard to prove, I think that original strategy we used of not sending traffic around Europe and back to the US, I think there was something to that strategy; I think it actually helped. Transit prices today are very low, they are dirt cheap, and there is always a discussion, quite often at a lot of conferences you go to: will the cost of transit kill peering? It hasn't so far, and this discussion has been going on for, I think, as long as I have been in this industry.

So, if you keep all this in mind, I am going to look into my crystal ball here and make a prediction: I think we are about to repeat history. I am going to take a random example country, and I could take any country in Europe; I just happened to pick this one because I happen to know how to find statistics for it, and statistics are always good to prove a point.

So the Swedish regulator, who is actually here somewhere, they do a great job every six months: they force all the licensed operators, including us, to give them statistics. Now, why they force us to do it, I don't quite understand, but anyway. They have this extremely good wealth of data you can look at. And we can see here the number of subscribers for various speeds. We can see it for fixed line. And we can see quite a bit of growth in the 100 megabit arena here, and we have, well, quite a few thousand subscribers at quite high speeds. And this is downstream capacity for end users, and you can see the high speed downstream is growing quite quickly.

Sweden has 7.4 million Internet subscribers if you include mobile broadband, and I am going to do that and I will come back to why. If we assume we have 7.4 million subscribers and we look at what happens if each of those users has an average constant load of traffic of either 100 k, 250, 500 and so on, we get the following graph of peak load in gigabits. And I went and talked to some of the eyeball providers, and they claim that today we are somewhere here, somewhere between 500 and 750 kilobits per second is what they see as peak load per subscriber; so half a meg is the 0.5 line here. And that is what they see as a general principle. The interesting thing is that this seems to be fairly even, no matter what the downstream access bandwidth is. You could make the theory that that is because of bottlenecks elsewhere in the network; that might be, OK.
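
A minimal sketch in Python of the back-of-the-envelope aggregate Kurtis describes: subscribers multiplied by an assumed constant per-subscriber load; the subscriber count is from the talk, and the per-subscriber figures are the illustrative values he mentions, not measured data.

    SUBSCRIBERS = 7_400_000   # Swedish Internet subscribers, mobile broadband included

    for avg_mbps in (0.1, 0.25, 0.5, 0.75):   # assumed constant load per subscriber, Mbit/s
        total_gbps = SUBSCRIBERS * avg_mbps / 1_000
        print(f"{avg_mbps:.2f} Mbit/s per subscriber -> ~{total_gbps:,.0f} Gbit/s nationwide peak")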

The other thing that the Swedish regulator does is publish the market share data for these numbers. So you can actually take the previous graph and break it down to the percentage of market share for these ISPs; if you go to the data source you can read who they are, but that is not important. And you can see they all have this much total bandwidth generated by the eyeball users for the different speeds in this segment. This is going to be a very hypothetical example, but it still holds true, though, and it's a very bad example for the point I am trying to prove, but we will come to that. Arbor in 2010 said that Google is between eight and 12 percent of the Internet traffic. Now, if we assume that that holds true for the total traffic and is evenly distributed, and we know it's not, because Google does exactly what I am going to try and argue for, they have local caches and peering etc., it's still a good number: a big peer, roughly 12% of all traffic. If we break that down, we see around these numbers that the largest providers are probably sending around 250, 300 gigabit of traffic to a single peer; that is quite a bit of traffic for a single peer. These six ISPs, keep in mind they are the dominant players, probably send as much traffic between each other; the fact is they have some of these very large peers with a lot of traffic.
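
Continuing that sketch: break the aggregate down by market share and apply the Arbor 8-12% figure for Google. The market shares below are purely hypothetical placeholders, not the regulator's data; the higher-end assumptions land near the 250-300 gigabit per peer mentioned above.

    SUBSCRIBERS = 7_400_000

    for avg_mbps, market_share in ((0.5, 0.30), (0.75, 0.40)):   # hypothetical scenarios
        aggregate_gbps = SUBSCRIBERS * avg_mbps / 1_000
        for google_share in (0.08, 0.12):
            to_one_peer = aggregate_gbps * market_share * google_share
            print(f"load {avg_mbps} Mbit/s, share {market_share:.0%}, Google {google_share:.0%}:"
                  f" ~{to_one_peer:.0f} Gbit/s to a single peer")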

And so, is there a problem with this, having this much traffic per peer? Well, the new argument is that we have got 100 gig coming, so if you ask Greg, just go buy it. We have so many peering points, we have at least three, four, five big ones, so it doesn't really matter, or all these people have so much transit so it's not an issue. The counter-argument is yes, there is 100 gig, but do you really want to have your entire Google traffic on a single port? No, probably not. When you start buying them, they are quite expensive, and again, you don't want to have too much shared fate in your network, do you? The other problem is, while you can peer this traffic away, you are going to have to aggregate it and backhaul it to where you are peering, and if this is the five or six points in Europe that Arnold was showing on his list there, it's still quite a bit of traffic you have to aggregate and backhaul. Do you want to send 250 gig? It still adds up. And also, if you try to backhaul this all around Europe, your end users will kill you for the latency. So do I believe there is something else in my crystal ball that might be the way of the future? I think so.

Let's take another random example. And this is not meant to be the Netnod marketing slide. When Netnod was started in '97, it was already decided from the beginning to use multiple peering points in country to distribute the traffic and make sure we kept traffic local, both for resilience, so if you lose one of these exchange points we can still handle the traffic in country without too much effect, but also to avoid having to build a big expensive backhaul throughout the country.

These are the real numbers of what we see in the country. Do you think there is a problem with this picture? Now, this is not the entire truth, because of course all the Swedish providers also do a lot of private peering, which I have no insight into, but I have a suspicion that private peering looks very much like this picture. You can criticise me afterwards, but I have a vague suspicion this is true.

The reason for this is, of course, that it's basically only eyeball providers peering in the country in a distributed way, and all the content is just located in Stockholm. And we also have many more foreign operators located in Stockholm, which helps to aggravate this imbalance, with foreign providers exchanging traffic with foreign providers. Is history repeating itself? Maybe. I think the CDNs and content delivery providers have actually already learned their lesson from the late '90s; they are going out to all the exchange points in Europe, putting out all this peering and keeping the traffic local. And I believe that they might just be a little bit ahead of the curve, and the rest of you might come to see the same problem if you have a very large network, but that is still on a very national or spread out scale. It says nothing about what you do in country, because Europe does have a few large countries, and we still have the most densely peered region in the world. If you look at the graph we can see that we actually have a lot of IXPs, but the point still is that there are very few countries where you see multiple exchange points, and even there you still have the problem of getting the content pushed out to all the local exchange points so that you can keep the traffic local. My personal belief is that to make the network scale and handle the redundancy in the future, we will have to make this a lot more distributed and a lot more locally peered, not just the content but also eyeball to eyeball and the other exchanges, and I really hope we will see a lot more local peering done in Europe. I know there is a lot of effort going on around Europe to start local exchanges and more exchanges, which I think is a good thing to do, and I hope that all the operators are following on this, and the content providers, to build out on this. The last thing: I said there were 7.4 million Internet subscribers in Sweden, and I said that was including mobile broadband. And that might, of course, be what saves you all, because mobile broadband has less bandwidth available than the 100 meg that you saw growing in the fixed line, but at least in Sweden, it has already surpassed the number of subscribers for fixed line. The interesting thing is they are using a lot less average bandwidth on mobile data than they are doing on a fixed line, and I think Remco gave the answer why: it's pure cost, and that might be what is going to save the network, I don't know. It might break the radio spectrum but it might save the backhaul. That was all I had. Any questions, or do you all agree, you are all going to peer locally? That is good then, thank you.

(Applause)

FEARGHAS McKAY: Thank you, Kurtis. We have got some extra space for updates if people have updates at the end they want to give. Next up is Martin.

MARTIN LEVY: Good morning. This is the go big or go home presentation; the topic of discussion is those nasty horrible 1,500 byte packets that live on the Internet, and they live on exchange points as well. So, what I am going to talk about today is a BCP that is in the IETF queue; GROW seemed like the best place to put it, and it turned out the only place it could fit. What we were trying to do was to put together, in a document, what some people are already doing today on Internet Exchanges, and that is moving more than 1,500 bytes around. So let's have a look at the problem and see if it really is something to focus on.

Here is a classic tracepath. So forget traceroute for a second; swap it out with tracepath. If everybody has got a Mac in front of them and types tracepath, it will fail because it isn't on it. It gives you the ability to look at the MTU size of a packet running around a network. In this case you are seeing California through the UK and to a website in the UK; in this case, the Royal Family's website in the UK. There is nothing wrong with this, it goes through an Internet Exchange as it should between one service provider and another, and it provides a perfectly adequate path and no one has ever complained about it. But the reality is, it doesn't have to be like that, because there is a class of customer, a class of traffic, that actually could benefit from a larger path, and we know that larger path exists inside backbones but not so much inside Internet Exchanges. So this is the premise; I am going to show you some examples and I will hit some of the specifics. Basically, we have an end-to-end 1,500 byte world at the moment. Now, for broadband or mobile users, that actually is as good as it gets. But we are starting to see traffic change; we are starting to see large amounts of data centre to data centre traffic, traffic that is going through storage systems, through to either Web 2.0 type apps, or talk about cloud and cloud storage in that, and there is no reason why those services need to sit on only one backbone; they quite possibly sit on multiple backbones, both for diversity of provider, multi-homed reasons and the like. That means an Internet Exchange, a PNI or something is involved. So I am focusing purely on this one area.

Let's hit some of the issues. First of all, there is math. Or maths, plural, if you are British, and I am both. Forget this complicated formula; let's go do something nice and simple. From a simplistic point of view, there is still math but there is less of it. What we are going to do is throw up the simplified formula, and that really has very simple components. It has a constant. Now, that constant is built out of stuff, but for the purpose of this discussion, and to keep things nice and short, it's just a constant. We have an MSS, and this is the key point, the maximum segment size that a TCP connection has between one end and the other, which is a function of the maximum transmission unit, the MTU; that is where that physical interface size comes into play. Now, we have to subtract a little bit of header, and in this case I am only going to talk about the IP and TCP header size of things, not anything else; for v4 it's a certain size and for v6 it's another size. And for those of you keeping count, that will be the only time I use the word v6 in this presentation.

Then we have a round trip time, and it's a divisor, and that round trip time is the killer. The further away we are, for the number of bits we have in flight, the lower the throughput we end up with. It's not about bandwidth; you can put gigs upon gigs upon gigs of bandwidth between two sites, but if you have a higher and higher round trip time, without changing anything else in the formula, you will end up with lower throughput. Now, you also have packet loss. We are in an EIX Working Group and we know there is no packet loss whatsoever inside Internet Exchanges, and therefore we can ignore that part of the formula.

Technically this is one minus packet loss, but I simplified the formula a little too much. Why do we have these big paths? We have these because we now sit in a global environment where there is nothing strange about grabbing a whole bunch of data out of one place in the world and processing it somewhere else. It just is the norm. We didn't think about this; in research networks this has been well understood, but it's happening in the commercial world. These example paths and latencies here are all based upon a sort of basic real world mindset: Chicago to Frankfurt, you are talking about the trading and financial world. San Francisco to Stockholm, that is some of the examples I am going to show. Hong Kong to Amsterdam via a fast path through Russia and China. Singapore to wherever, because it seems to be commonplace that you are seeing cloud providers and social networks dumping their stuff there, and we are putting more and more data that needs to be in these multiple locations. In the CDN world this would be completely littered with dots.

So, when we look at a simple calculation, again to show the divisor effect: if we change the MSS on a Hong Kong to Amsterdam path and do not adjust anything else, the throughput can double or triple in size. So it turns out this packet size is important.
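
A minimal sketch in Python of the simplified throughput formula described above, assuming it is the usual Mathis et al. approximation (throughput is roughly MSS over RTT, times a constant over the square root of the loss rate); the RTT and loss values below are illustrative, not figures from the slides.

    from math import sqrt

    def tcp_throughput_mbps(mss_bytes, rtt_s, loss, c=1.22):
        """Approximate achievable single-flow TCP throughput in Mbit/s."""
        return (mss_bytes * 8 / 1_000_000) * c / (rtt_s * sqrt(loss))

    rtt = 0.250      # assumed Hong Kong - Amsterdam round trip time, seconds
    loss = 0.0001    # assumed residual loss rate on the path

    for mtu in (1500, 4470, 9000):
        mss = mtu - 40   # subtract IPv4 + TCP headers, no options
        print(f"MTU {mtu}: ~{tcp_throughput_mbps(mss, rtt, loss):.0f} Mbit/s per flow")

In this simplified form the throughput scales linearly with the MSS, which is exactly the proportionality Gert Doering pushes back on in the questions afterwards.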

So I wrote this BCP, stuffed it in and managed to get the GROW group to accept it, and it's ready for the next set of edits. What it says is this: focus on the Internet Exchanges, turn around and provide a mechanism to increase the MTU of the interconnect between backbones. It turns out it's not that easy; it ended up being 23 pages. For those people that want to read it, and I would love it if people did, provide me feedback; it's about to get the next edit in. We all know what Internet Exchange circuits look like: what we are dealing with here is Internet Exchanges sitting right in the middle, that is the only focus. It's that path that is 1,500 bytes, so it does not matter that you may be sitting there thinking, I run a backbone and we are already running 9,000, 9128 or 4470, not 1,500 bytes, on those multi-10 gig links around the world or 100 gig links; we are actually already doing this. So the backbones in most cases have already been doing their bit. The focus is only on Internet Exchanges. So here is an example of an Internet Exchange that implements Jumbo Frames, the ability to move large packets. This is Netnod in Stockholm; they run a dual-VLANned network, one VLAN running standard 1,500 bytes and one running, for historical reasons, 4470. Let's look at the success you get with a tracepath: start in California, scream around the world, end up in Stockholm, the PMTU changes to 4470, which comes from FDDI way back when. We won't go into a history lesson, but look what happens: we go a hop or two into the network and hit the web server, and our PMTU immediately drops down to 1,500, because that is the interface, the ethernet interface, but my focus is on the exchange. So this example shows the focus but still doesn't provide any results. So let's look at a different path. Same path, screaming over to Stockholm here from California, and we go to Peter Lothberg's server. He is a little bit of a pedantic character at best; he runs 4470 and a large MTU on his servers, and probably over to his mum's house, and he gives me a perfect example, maybe not a commercial example, but a perfect example of a path that benefits: two backbones set with a large MTU of whatever size, and in the middle we end up with a large MTU capability on the Internet Exchange. So, to keep this short and sweet, let's talk about the things we already know. We know that things like Internet Exchange points are vital to the health of the Internet. We know that some of these exchanges, and the two examples I have here are Netnod in Sweden and the NASA AIX in California, in Mountain View, both run Jumbo Frames and both use the same technique. They actually both can and do move bits around, but there is no guarantee that anything goes end-to-end.

This is a known problem and people have solved it, but there was no documentation. And what I learned was, just saying "just do it" was not that easy. So, the draft is written with a lot of input from various exchanges and also from research networks, which turn out to have far more interconnect experience. So let's look at a couple of the important points. Forcing the MTU over a layer 2 fabric means that everybody has to be consistent. There is no layer 2 protocol for this; when there is inconsistency at the MTU layer, the only thing that happens is packets are silently dropped, and debugging that on a shared fabric is not easy. The draft basically proposes an MTU value, a fixed value of 9,000 bytes. It picks a number. That number is important and I will talk about it in a second. It talks about different ways of implementing it. You can implement this by having, let's say, a flag day and telling every single user on an Internet Exchange you are going to change on this date. You could do that. It talks about being able to do it as a VLAN addition. It talks about how to do it by just buying a second set of hardware; that is probably not cost-effective, but it's a proposal. But it also explains what happens if you get this wrong, and it's all those cases of getting it wrong that mean you have to focus on this. This is not as easy as you think to begin with. And it talks about how BGP has some advantages in this space. And layer 2: if you want to screw up, screw up there, because it's way harder to debug than anything else.
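
A hedged sketch in Python of the kind of check the draft's warnings imply: before trusting a jumbo VLAN, probe the path with don't-fragment pings so the silent drops described above show up as plain packet loss. It assumes Linux iputils ping; the peer address is a placeholder.

    import subprocess

    def path_supports_mtu(host: str, mtu: int = 9000) -> bool:
        # ICMP payload = MTU minus 20 bytes IPv4 header minus 8 bytes ICMP header.
        payload = mtu - 28
        result = subprocess.run(
            ["ping", "-M", "do", "-c", "3", "-s", str(payload), host],
            capture_output=True,
        )
        return result.returncode == 0

    # False here means 9,000-byte frames are being silently dropped somewhere.
    print(path_supports_mtu("peer.example.net"))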

So why 9,000? It turns out the research community learned this one a long time ago and found lots of different hardware had lots of different numbers; the numbers, I don't remember, 9170, 9174, 9192, 9216, all these numbers exist from various vendors. There is no standard within the IEEE and there does not need to be one; it is silent on what this large number could be. If you are a hardware vendor and want to build something with 10,000 or 16,000 or whatever, go for it, feel free, but in the ethernet world there is no standard. So, the research community started with these numbers and researched all their hardware and said, OK, let's pretend we have tagging, or maybe have Q-in-Q and/or MPLS or multiple layers, and they started doing the math and started trying to get the right number, and they also went up from the bottom: IP, TCP, and what is the biggest payload, we are going to move data as 8K because computers like nice round powers of two, so they worked upwards. And it turned out that whatever the number was, no one could remember it and no one could even agree on it. And halfway between the two was this magic round number, 9,000. So you can read all these great sort of papers and documents about this, but people turned around and said let's pick a number we can remember, because we know it's big enough to move every packet we need and it will fit on about every piece of hardware out there, even if you end up with eight or nine layers of MPLS tunneling between two spots, and do not do that many layers. The point is you end up with 9,000; the references are in the draft and it handles about everything that you need.
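
A small arithmetic sketch in Python of the reasoning above: start from an 8 KiB payload, add headers and some tag headroom, and 9,000 is a memorable number that clears it while staying under the various vendor maxima. The header and tag sizes are standard; the number of tags is illustrative.

    payload = 8192              # the "nice round power of two" applications like
    ip_tcp_headers = 20 + 20    # IPv4 + TCP, no options
    tag_headroom = 3 * 4        # e.g. a few stacked 802.1Q/MPLS labels, 4 bytes each

    needed = payload + ip_tcp_headers + tag_headroom
    print(f"~{needed} bytes needed; 9,000 clears it and fits common vendor maxima")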

Second part about why 9,000. There are data centre to data centre applications that truly will benefit: data replication over CDNs, data replication in cloud storage, cloud storage recovery from one data centre to another, or from an enterprise, should they get a large MTU connection, but Internet Exchanges could potentially be in the way. Focus on one issue, and one issue only, and that is why this got written.

Now, the one last part about this: it turns out BGP sessions work a little bit more efficiently when you have a large MTU between source and destination. Why? Because they are TCP based, and it turns out you can fit more inside a single packet and reduce interrupt and processing time. I did a study on Netnod because it was the one that had the most sessions, and sure enough we saw a bunch of sessions that truly did come up with a large MTU, 4430, 4420, 4410, sort of variants on setup. Some didn't; there is a lot of debugging still to do on that. But the point is you can see cause and effect.
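
A tiny sketch in Python of the BGP point: a BGP message is at most 4,096 bytes (RFC 4271), so on a 1,500-byte path a full-size UPDATE spans several TCP segments, while a 9,000-byte path carries whole UPDATEs per segment.

    BGP_MAX_MSG = 4096   # maximum BGP message size per RFC 4271

    for mtu in (1500, 9000):
        mss = mtu - 40                                # IPv4 + TCP headers
        segments_per_update = -(-BGP_MAX_MSG // mss)  # ceiling division
        updates_per_segment = mss // BGP_MAX_MSG
        print(f"MTU {mtu}: {segments_per_update} segment(s) per max-size UPDATE,"
              f" {updates_per_segment} whole UPDATE(s) per segment")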

To finish up: basically, we are looking at trying to reduce and optimise traffic between backbones going over IXs, and we know that Internet Exchanges are key to this. We are not trying to solve an end-to-end broadband type issue, but this has value, and value in the IX community. That is about as quick as I can talk. Thank you very much.

(Applause)

FEARGHAS McKAY: Thank you, Martin. We have some questions. Dave.

DAVID FREEDMAN: I would like to adjust this microphone. Martin, I just wanted to illustrate that jumbo MTU isn't the magic bullet that it's made out to be. As you pack more and more things into larger TCP segments and larger IP packets, when there is loss you suffer a lot more. You don't just suffer from the lost data but also the CPU overhead of having to regenerate and rebuffer that data, and in BGP that could be quite catastrophic. If you are relying on large NLRI-packed updates between BGP peers and you lose a heavily packed segment, you are going to suffer a lot more trying to reconverge that. It's not suitable for all cases. The second point that I want to raise quickly is that I am not quite understanding, perhaps I didn't understand from your presentation, what the implications are if a member on the exchange doesn't support anything greater than 1,500 bytes, and what happens to them: are they isolated from the exchange, what should we do?

MARTIN LEVY: Perfect, two questions. The first one is about bigger packets and the potential of losing more data; we have been there. Go look at the issues of moving from ATM to frame relay and to POS, POS to where we are at, and now ten and now 100 gig. We have absolutely, over time, increased the underlying frame size of our communication mechanism. We have also changed dramatically the processing, whether it be at the ASIC level or CRC level. Now, that is a problem for TCP and there are two references, I think, in the draft about checks on validity over larger and larger frames. 9,000 is sort of OK. 64 kilobytes actually breaks the CRC mindset completely. So this is covered. There are some references, and feel free to go try and find them and read them. The whole packet loss issue is covered quite heavily in that.

Even today, the same argument could be made for a packet of, what was the magic number, 372, 576, up to 1,500. You have the same multiplier effect. We have something that says bigger packets will give us the potential to lose more on a single action. The reality is that we are going this way, we are going faster; we can look at the growth of networking. The response is simply that it's inevitable it's going to happen, but let me hit the number two point because it's far more important. Again, in the draft, it talks about various ways of doing it. One of the ways it says does not work is just to tell your users one day, hey, you should go and do 9,000, give it a try, because that will absolutely fail. If certain vendor equipment and certain members on an exchange are still configured at 1,500 and you ship them a 9,000 byte packet, it will be silently dropped, a counter incremented somewhere inside, and a session will just disappear, end of story. So you can't just turn around and say everybody should go to 9,000, because somebody won't read that and won't do it. Secondly, not everybody can go to 9,000, because you cannot assume that everybody has jumbo frame capable hardware. So if you read through the proposal, you will see that that is why it suggests multiple VLANs or multiple physical ports, whatever the various techniques are, so that you separate them, and yes, you end up with potentially twice as many BGP sessions, this is how Netnod operates, twice as many BGP sessions, one over the 1,500 byte fabric or VLAN and one over the jumbo one, whatever the number is; we know that operationally works. This has been proven at NASA, it has been proven at Netnod, and maybe at other places with more inside private networks, so we know it works, and it covers how you do BGP optimisation to make sure you use the right next hop. You cannot have disparate MTUs on an exchange and say next Tuesday we are going to change everything. You can't do that. Your feedback would be more than valuable.

WOLFGANG: We do have a separate VLAN with a size of 9,000 bytes. We announced it to our technical mailing list, but obviously none of our customers wants to connect. You are very welcome to be the first one.

MARTIN LEVY: I will be the first one, but it takes two to tango. Your mailing list will now be filled. Thank you for that information.

AUDIENCE SPEAKER: From the remote participant, Maxim, again, and he asks: who must migrate on a flag day, the IX or the IX and its customers?

MARTIN LEVY: If you do a flag day, both have to migrate, and anyone who has done, in the Internet Exchange world, a renumbering migration or subnet mask migration will know you never, ever want to do that again. If you are an Internet Exchange operator and you are thinking about doing something on a flag day, please don't. Flag days don't work. Flag days don't work. And they are only one of the options to solve this problem.

GERT DOERING: I like Jumbo Frames, for different reasons than you do. I have customers that encapsulate stuff and stuff and stuff, so the end user MTU tends to be small if it has to go through a 1,500 byte thing in between. What I would like to complain about is your math simplifying things too far; you are sort of claiming throughput is proportional to packet size, which it isn't. There is a TCP window in between and so on. So throughput goes up with packet size, but not two times the packet size means two times the throughput. And that is what the slide says right now and I cannot leave it at that. Sorry.

MARTIN LEVY: Yes, so the caveat here is that I completely oversimplified the formula in order to actually get through a two-minute version of the math; the paper that it comes from is referenced, and it is 10, 20 pages of good old fine print. Anyone who wants to read it, feel free; it's good bedtime reading, you will be asleep without the use of any additional chemicals.

DAVE: He is absolutely right, it's quite complicated; there is the latency that plays a major part as well, but there are the variations in latency, and as you chunk more and more data across you risk adding serialisation delay, and it becomes massively important. I would really like to see some results, some kind of data on some real world performance impact, basically.

MARTIN LEVY: Okay. Read the draft, there are 20 references. I will make this statement: there is nothing new in what I am doing except for documenting this and finding all the references. The jumbo frame work that I started goes back to when Alteon did jumbo frame hardware; '99 I think was their first piece of hardware capable of doing this, and we needed it for NSF reasons. But go look at the references, the work has been done. It's a little dated but it scales, the maths is there, and it deals exactly with the question that you have asked, but to present all that stuff I have to simplify it, just to get to the end in the time slot.

FEARGHAS McKAY: Thanks again, Martin.

(Applause)

FEARGHAS McKAY: Next up is Harald who is going to give us a quick update on the switching wish list and then we have got a quick update from EURO-IX and then we are into IXP updates.

HARALD MICHL: I am from the Vienna Internet Exchange and I will give you a short update about the Internet Exchange point switching wish list. This is a document that is published on the EIX Working Group web space, and it describes the requirements IXPs have for the equipment that is necessary to run an IXP.

We recognised by the end of last year that we would need some new equipment for our Internet Exchange point because we had run out of 10 gig ports, and this document had helped us at our last call for tender, so the idea was to take the current version of this list and use it again as a reference for the call for tender. Then it turned out that the current version is still the one we used at the last call for tender, so it's really quite old, nearly seven years old, and this, combined with the call for volunteers at the last RIPE meeting, triggered something in my brain: if we have to adapt and add things to this document anyway, we could do it in the public document and republish a new version, so the community benefits from it.

Then I started at the beginning of this year to ask around who would help and participate, and I got two additional volunteers: one of them is Martin Pels from AMS-IX, and the other is the original responsible person for this list, Mike Hughes, who used to work for LINX, and we started to work on this document together.

So, what is new? Well, we adapted the content and added the things we think are necessary to run an exchange point at the current point in time. I am very happy that Martin was able to add the things with regard to VPLS, which I did not know about so far and have no experience with, but within the team we had experience from all different kinds of exchange points, so I think it's a new document and I am really sure it's better than the current version.

So, what did I do, or what did we do? We built up this new document, and I sent a draft version of it to the EIX mailing list around two weeks ago and asked for comments and whether there are mistakes or something. The intention is to wait until Friday for comments and then make a final version 4, so if you think there is something wrong in the document, please send remarks to the EIX Working Group mailing list, and also, if you think something is missing or should be changed or added, please write to the EIX mailing list and we will consider adding it in a new version. So we now want to freeze the content for version 4.0.0, publish that version, and then of course continue adapting the document according to the current needs of an exchange point, so there will never be a final version of this document, I guess, because there are always new things required by exchange points. One of those things is even the name of the document, because it has been the switch wish list, and currently there exist some exchange points that don't use switches to run their service, but routers.

If you don't know what to do after the RIPE dinner on Thursday, feel free to send a mail to the EIX Working Group mailing list and tell me about your ideas. So far, since the publishing of this draft version, there have been no comments on the mailing list. The EIX mailing list is a rather quiet mailing list anyway, but I was interested: do people actually read these documents, or is it just uninteresting for the community? There is a snapshot from this Monday: there have been more than 50 downloads of the document from unique IP addresses, and more than 20% of them were over IPv6, so I think this is a success. And if there are no further comments, we intend to publish the current version 4.0.0 on the EIX web space and continue to progress the document.

Any comments now?

FEARGHAS McKAY: No questions. OK. Thank you, Harald, for that. Please do send any comments in for him.

(Applause)

FEARGHAS McKAY: Martin swallowed up our spare ten minutes so we are running a bit late, so if the people who want to talk in the IXP section could all come to the front so we can swap you over quickly, that would be really helpful.

BIJAL SANGHANI: A really quick update on what has been happening with EURO-IX. I don't have a graph, but you will know what it looks like: there is continued steady growth in the IXPs in Europe. On the IX federation, I want to quickly mention that there is APIX in Asia and LAC-IX in Latin America and the Caribbean, and we are one step closer to the IX federation coming together. We sent the MoU out to all the boards and the members in January, all the members have agreed, and we will hopefully be signing the MoU to form the IX federation in the next month or so.

Something interesting and fun that we are doing with the RIPE NCC: we have got two major sporting events this year, the Euro football championship or something, and we have also got the Olympics in the UK, so we are working with the RIPE NCC, a number of Internet Exchanges have volunteered, and we are going to be looking at the traffic from the Internet Exchanges during particular events. So for example, some of you may remember this: this is going back to 2010, when LONAP had this really big increase in traffic during the World Cup match between England and Slovenia. So this is the kind of thing, and we are going to be blogging this on RIPE Labs, so it should be quite interesting to look at trends and stuff like that.

So, the other thing that I wanted to say was that the EURO-IX website is not just for IXPs. ISPs can use it; if you are a peering coordinator you will find a wealth of information on there. I am going to do a quick demo of the website and I am going to leave it to you guys to explore and investigate it more, but I just wanted to show you; so this is the EURO-IX website, euro-ix.net. We have a number of tools here on the right-hand side, and we also have this section here called 'for ISPs'. So, what we have here is an IXP matrix; as a peering coordinator or ISP you can look at this and see what the different Internet Exchanges are doing, you can get their AS number and find out whether they are doing IPv6, we have got the Jumbo Frames column here as well, and what ports they offer, so there is a lot of information in here that can be useful. We also have a list of IXPs, and when you go into a particular IXP you can get information about that IXP as well. And what we have added recently is the actual peering LANs, so the different IXPs have different LANs, and if you are not sure or whatever, you can have a look on the website and get details of the Internet Exchange peering LAN.

We also have, over here, an ASN filter, and this is also something that is really useful for peering coordinators: you can look at which peers are where, you can look at which peers are at a particular Internet Exchange and not at another Internet Exchange, and that can be useful when deciding which Internet Exchanges you want to actually join. And lastly, we have the ASN database; over here the database summary is on the right-hand side, but what you can do is put in an AS number and you can see where it peers.

FEARGHAS McKAY: Can we let people look at this later themselves?

SPEAKER: You can see where that peers, that is all useful information there. And that is it.

FEARGHAS McKAY: OK.

(Applause)

FEARGHAS McKAY: We have got, as far as I am aware we have got four IXPs wanting to present, is that correct? Hands up. I make it four. Amsterdam, LINX, JPNAP and then our local hosts.

ERIC NGHIA NGUYEN-DUY: There is no slide. I am working for AMS-IX, and what I want to do is give some update on the route server testing. At the last EURO-IX meeting we reported on route server testing at AMS-IX: we replicated what we have in production and we tested three implementations, BIRD, OpenBGPD and Cisco, which support the route server function at the moment. My colleague presented the detail of the test results there, so I won't repeat it again. The result is that BIRD was the fastest, Cisco second, and OpenBGPD is painfully slow, it takes 45 minutes to converge. When we tested BIRD we observed that it uses a lot of memory, in the gigabyte region. So we thought, OK, well, three gigabytes is not that bad, we can put more memory into the server to cover that, and the thing would be OK. But after that we did some tests with only 500 peers and around 128 thousand prefixes, and the memory shot up to 17 gigabytes, as you can see over here. This is not scalable. We wanted to analyse what was really going on: the configuration scheme that we use is a multiple table setup, so, you know, when you use multiple tables, each peer has its own copy of the RIB, it multiplies everything and uses up a lot of memory. We tried it out with a single master table and then the memory dropped down and the convergence time was faster, so unless you have a really good reason, do not use multiple tables with BIRD.
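
A rough back-of-envelope illustration, in Python, of why per-peer RIB copies blow up route server memory. The 250 bytes per route is an assumed figure for illustration only, not an AMS-IX measurement, but it shows how 500 peers and 128 thousand prefixes land in the same ballpark as the 17 gigabytes seen in the test:

    # Assumed numbers, for illustration only.
    PEERS = 500
    PREFIXES = 128_000           # prefixes carried on the route server
    BYTES_PER_ROUTE = 250        # assumed in-memory cost per route entry

    # One table per client: every client gets its own filtered copy of all routes.
    per_peer_tables = PEERS * PREFIXES * BYTES_PER_ROUTE
    # Single master table: one copy of the routes, plus a loose 2x allowance
    # for per-peer state and overhead.
    single_table = PREFIXES * BYTES_PER_ROUTE * 2

    print(f"per-peer tables: ~{per_peer_tables / 2**30:.1f} GiB")   # ~14.9 GiB
    print(f"single table:    ~{single_table / 2**30:.3f} GiB")      # ~0.060 GiB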

Just to move on. Well, after I heard about the 9,000 byte MTU at the LINX meeting last November, I wondered whether it would help the route server, because basically we can improve CPU usage a little bit with 9,000 byte packets. I tried it out and it worked; the convergence time is faster.

So, based on our tests, we are going to phase out our OpenBGPD because it's painfully slow, and we are going to have new route servers in the coming months: one Cisco running IOS XE 3.5 and one server running BIRD.

I am going to do a little bit of promotion here for my friend Tomaz's tool. This is a really great tool to check out BGP; it's really lightweight and easy to test configurations with. If you want to test your route reflector or route server, you should check it out.

After the last presentation at EURO-IX, there were some community members interested in working with the test-bed again, so we formed a small working group. Our objective is to share common knowledge in deploying route servers and to do some route server testing again. That is it for now.

FEARGHAS McKAY: We don't have time for questions really on these. So next up is Jen.

Jen: I will keep mine to 30 seconds. Jennifer from LINX, if anybody doesn't know me. As already mentioned, we do have a small event coming up in the summer called the Olympics. A heads up to members here: we will be operating a change freeze up to and including the Olympic and Paralympic Games, from the 14th of July to the 19th of September, so if you have got any upgrades or port orders to do before that, get your orders to us before mid-June and we will get them through in time. During the freeze, no software or hardware changes. Our engineers are working around the clock to get some changes in before the freeze. If you are members, you will see e-mails about some of these changes. There is a new Juniper PTX switch in Telehouse North to replace our core at that site, so you might see some maintenances in and around that over the next couple of weeks. Our next membership meeting is on the 21st and 22nd of May in London. If you are interested in becoming a member of LINX, come and speak to me or my colleague Cheryl. Thanks.

(Applause)

FEARGHAS McKAY: We should have a slide for this.

SPEAKER: Thank you. My name is Tomoya Yoshida from JPNAP, and I will introduce what we are seeing on our route server in Tokyo. As you may already know, we have three exchange points in Japan, two in Tokyo and one in Osaka, and all three networks are independent. Recently we set up a route server in Osaka in addition to Tokyo. Also, there is currently a significant increase of mobile traffic between the mobile carriers and the content providers, because in Japan many people are shifting to mobile phones and smartphones and that kind of thing, and the traffic has now reached over 300 gig. The next one: this is our customers' BGP, which we are seeing on the JPNAP network, and you can see that for 90% of sessions the BGP hold time is 90 seconds. This is after the negotiation; the right side is based on the BGP OPEN messages we see. So 27% set a hold time of 180 seconds towards us, and our Quagga is set to 90 seconds, so after the negotiation the BGP session hold time is 90 seconds for 90% of sessions, even though many people set 180, and you can also see 240 seconds, which is, I believe, a default hold timer in the BGP OPEN message of some implementations.
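
The 90-second outcome follows from how hold time negotiation works in RFC 4271: each side proposes a hold time in its OPEN message and the session uses the smaller of the two values. A minimal sketch in Python (the 90/180 values match the talk; the function itself is just an illustration of the RFC rule):

    # Hold time negotiation per RFC 4271: the session uses the smaller of the
    # two values announced in the OPEN messages; non-zero values below 3 s are
    # not acceptable. Keepalives are commonly sent at one third of the hold time.

    def negotiated_hold_time(local: int, remote: int) -> int:
        hold = min(local, remote)
        if hold != 0 and hold < 3:
            raise ValueError("unacceptable hold time")
        return hold

    hold = negotiated_hold_time(local=90, remote=180)
    keepalive = hold // 3
    print(hold, keepalive)    # 90 30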

And this is the IPv6 case; it's similar to IPv4, but we don't see as much variety in the BGP hold times compared with IPv4. And lastly, this is about the BGP capabilities. Unexpectedly, we see only around 60 percent for one of the capabilities, and graceful restart is at less than 50%. As you can see, many of the BGP parameters are not used. So that is it. Thank you very much.

(Applause)



FEARGHAS McKAY: Any quick questions? And finally, our local host. Are other people interested in visiting the exchange tomorrow afternoon? Yes? No? If you would like to, come down at the end of the meeting and we will see if we can organise something then.

SPEAKER: I represent the local exchange here in Ljubljana, Slovenia, the Slovenian Internet Exchange. It's quite a small exchange, only 20 members. What is new: we just celebrated 18 years of the exchange, so it came of age. That was one month ago. We are also introducing a new service, route servers; this service is now in the pilot phase. What we did last year was a renumbering of the whole exchange, and we used the flag day approach. Fortunately, because we are quite a small exchange and we did good preparation, with a special meeting to prepare everything with all the members, everything went smoothly, so that approach can be used provided it's in a controlled environment and a small environment. That is all. Thank you.

FEARGHAS McKAY: Thank you.

(Applause)

FEARGHAS McKAY: Any questions? OK. Well, I'd like to thank you all for coming, a big thank you to all our speakers today, enjoy your lunch and we will see you in Amsterdam in September. Thanks.