Archives

These are unedited transcripts and may contain errors.


Plenary session on 16 April 2012, at 2 p.m.:

Nick: Good afternoon everyone. Please take your seats. We have a special guest arriving here momentarily from the ministry, Dr. Turk. Protocol suggests that we are all seated, and once we are all seated he can enter the room. And once he is seated, we can begin the proceedings. So thank you for your patience.

(Slovenian protocol)

ROB BLOKZIJL: Good afternoon, friends and colleagues. This is the opening session of the 64th meeting of RIPE, and it's with great pleasure that I welcome the Minister of Telecommunications of the Republic of Slovenia as our first keynote speaker. Let me introduce minister Ziga Turk who is responsible for the information society in Slovenia, he is one of the pioneers in introducing the Internet in Slovenia and also he played a crucial role in developing the Internet and modern technologies at the international level. May I invite you, Mr. Minister.

(Applause)

DR. ZIGA TURK: Ladies and gentlemen, it is indeed a great pleasure for me to be able to welcome you here in Ljubljana, in Slovenia. It is encouraging to see the RIPE community and important actors of the Internet industry gathered together in Ljubljana. As the minister responsible for the information society, I am very pleased to be able to address possibly the most qualified experts for information and communication technology in the region. Because of the dramatic change we are facing in Internet technologies, this meeting is important for RIPE and it is also important for Slovenia.

First of all, I would like to thank the RIPE NCC and the RIPE community for their good stewardship of Internet resources in our region. We are proud that you chose Slovenia for this very important meeting. It is a recognition, I believe, also of the local expertise and our contribution to the global community.

I would, therefore, also like to recognise the contribution of the local hosts, the Go6 Institute and Arnes, the academic and research network of Slovenia, for their help in organising this event.

They say that one should start a speech with a joke. I tried to Google a few about IPv6; you have probably heard them before, but they are illustrative. The good thing about IPv6 jokes is that there are so many of them, many more than IPv4 jokes.

(Applause)

But the bad thing is that nobody is using IPv6 jokes.

So, Internet development was funded by the US Department of Defence, and when there was a debate about whether the address block should be 32 or 128 bits long, the army said that not even the US military would need as many as 4.3 billion devices. But things have changed since then; in just 20 years the Internet became a vital tool for the development of knowledge-based societies and economies, a driver for growth and innovation. It has revolutionised practically every aspect of human life and it is changing all aspects of society. Society, in fact, is glued together by information and communication technology. The Internet is changing how we communicate: within a family, in the office, in society, in the media, and also in politics.
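As a rough illustration of the scale difference the minister is joking about, the two address-space sizes can be computed directly (a back-of-the-envelope sketch, not a figure from the talk):

```python
# Number of distinct addresses for 32-bit (IPv4) and 128-bit (IPv6) address fields.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,}")    # 4,294,967,296 (about 4.3 billion)
print(f"IPv6: {ipv6_addresses:.2e}")  # roughly 3.40e+38
```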

Some elements of society have changed already; for some, I believe, including politics and democracy, the big change is yet to come.

Since its origin, openness and transparency have contributed to the Internet's development. After the initial kick-off by the taxpayer, by the defence dollars, the Internet is a success of civil society, of the NGOs and of a vibrant economic environment. Let us keep it that way.

The whole world, and especially Europe, is confronted with numerous challenges with regard to its competitiveness, productivity and overall prosperity. We all agree that the next-generation Internet, infrastructure, digital services and the digital single market will boost our job growth and also our global competitiveness.

That is why we have to maintain funding levels in ICT, particularly private ones. Governments should create a favourable business environment that encourages investment and innovation and creates open but fair competition in the market.

Stable macroeconomic conditions with bearable deficits are a vital part of the framework for any economy to flourish. While the EU member states are implementing austerity measures, it is important to keep investing in the future, with all the benefits this will bring to society and the next generations.

However, this investment must be much better evaluated for its costs and its returns. Nowhere but in IT, which is still hooked to Moore's Law, does the timing of the introduction of a new technology make such a huge financial difference. Investment must be smarter than business as usual; in the past, let's face it, no cost was too high for this great thing of the future called the Internet. To make smart decisions we will need all the expertise of you people in this room.

The Digital Agenda for Europe 2020 targets, minimum broadband access for all at 30 megabits per second with at least 50% of households subscribing to speeds above 100 megabits per second, provide a clear objective for the mass usage of the Internet. However, we must also continue to push the state of the art of very high speed Internet and related services. There is widespread consensus that IT plays a crucial role in Europe's and in Slovenia's future. We are committed to investing in fast and very fast broadband networks and pan-European digital services in the next period. We will highlight the following in particular:

Guaranteed quality broadband access for all citizens; introduction of the IPv6 protocol; the role of IT in supporting sustainable development, directed above all at increasing energy efficiency; linking research, education and industrial partners in the national priority IT fields; encouraging the development of e-skills; combined and integrated services across content fields and levels of administration; implementing cross-border commerce; setting up user-friendly solutions in various fields; and, finally, digitalisation of the cultural heritage and public libraries.

We are eager to achieve and to exceed the objectives of the digital agenda for Europe, to avert economic downturn and to ensure our future.

Ladies and gentlemen, as this is the last RIPE meeting before World IPv6 Launch in June, and possibly the last meeting before the IPv4 address space is exhausted in the RIPE region, I would like to provide a government perspective on IPv6.

Just as we are running out of natural resources, just as we are running out of money in our budgets, we are running out of IPv4 addresses. We cannot print money, we cannot discover another continent with fertile land and oil or gas resources. But we can create an IPv6 address space. It is virtual, but it opens up opportunities: faster development of the information society and higher quality of services for end users.

We are establishing a stimulating environment for the introduction of IPv6 into the economy, public administration, academia and elsewhere.

Slovenian Internet providers, the research and educational community and content providers have done great work in the past few years. They put Slovenia at the very top of the most IPv6-developed countries. The new protocol does not resolve only the problem of address space; it also inherently addresses the transmission of real-time traffic and the needs that appeared with mobile communications.

Last year, Slovenian mobile operators were the first to enable IPv6 on mobile networks as a regular commercial service. All important Internet providers have IPv6 enabled in their networks and every single research and educational organisation has access to IPv6 connectivity. Currently, nearly all of the biggest local content providers are also on IPv6, so we are looking forward to June the 6th, when the last big Slovenian content provider will turn on IPv6, and with this action we are going to conclude the initial IPv6 introduction phase.

The success of IPv6 in Slovenia came through cooperation between all key service and content providers. Special thanks go to the Go6 Institute, which has managed to achieve this cooperation, and I have particular pleasure in acknowledging the presence of Jan Zorz, who is the initiator and leader of this cooperation.

In the coming period, we will continue to work on the overall direction of IPv6 implementation and on encouraging its adoption in public administration.

Our position is that the best option for the development of the Internet-based economy is a complete transition to IPv6. We think IPv4 should, in time, be deprecated and given historical status. That does not mean we stop using it, but that we put no further effort into researching, developing and building new solutions and innovations based on this legacy protocol.

The position is not easy. A few days ago my ministry published a revised version of the study on the transition to IPv6. The study has been translated into English and is intended for all those preparing for the transition to IPv6 in other countries. It is published under a Creative Commons licence and is freely available to the whole Internet community.

Ladies and gentlemen, on behalf of the government, I would like to conclude by once again expressing our sincere gratitude to the entire Internet community, to whom we owe the Internet and who are working on its development and its next generation. The Internet grew from the seeds of government intervention, but it has grown and is being developed by non-governmental actors, civil society and business. We want it to remain that way, and in this you will have our full support. I wish you a pleasant stay and rewarding hours of work here in Slovenia. Thank you.

(Applause)

ROB BLOKZIJL: Thank you for...

JAN ZORZ: Hello, my name is Jan Zorz; we are the local host here and I would really like to welcome you to Slovenia. I am really happy that we made it here, and thanks to minister Dr. Ziga Turk for coming. Also, for the first time in my life, I have a helper note for this speech. We also have the director of our post and telecom regulator, who will speak later, and I am really happy that the government is starting to come to these meetings.

So, a bit of advertising. The Go6 platform, which I presented in Prague, has now evolved and got bigger. We are getting international requests for membership from big European telecom groups, and we have an exhibition desk on this side, where you can learn who our members are, what we are doing, and how we put the cooperation together to make all this IPv6 stuff work. So, you are all invited to our exhibition area, and we also welcome new members from the international space.

We are also running an IPv6 experiment over 3G. As you probably know, we have IPv6 deployed at three mobile operators here in Slovenia, and one of them, Si.mobil, offered 100 SIM cards with IPv6 pre-provisioned. Some of you managed to get the cards; they ran out in a few milliseconds, so those of you who don't need the cards any more, please find somebody who needs a card and will use it to test IPv6. V6 traffic is free for this period, and I would like to thank Si.mobil for this great experiment.

What else? Thank you again for coming here, and I would like to say the title, or the motto, of this meeting: IPv6 is in the air. Thank you.

(Applause)

SPEAKER: Thank you, Jan. This being the 20th anniversary, we thought we would play a little trick on you: this is what Rob looked like 20 years ago. I am going to take over from this point. Thanks again, Jan. Our next guest is Marko Bonac of Arnes.

MARKO BONAC: Welcome in the name of the academic and research network of Slovenia, Arnes. We are pleased to host the RIPE community for the first time here in Ljubljana. The plenary session this afternoon is devoted to celebrating 20 years of the RIPE NCC; in fact, Arnes also celebrates 20 years of serving its customers this year. In addition to being the NREN, we run the Slovenian Internet Exchange and manage the top-level domain .si, so we are deeply involved in Internet coordination, development and security here in Slovenia.

One of our activities, together with Go6 and LTFE, during the last year was the promotion of IP version 6, through meetings, seminars and articles, and we are proud to see Slovenia very high in international comparisons of IPv6 adoption.

The weather is not the best, and the prediction for this week is not much better either. So, I would recommend that you take part in the social event on Tuesday evening, visiting the cave. It is magnificent, independent of the weather conditions on the surface. I hope that everything will go well this week; our engineers are working hard to ensure the best connectivity. I wish you a successful and interesting week. Thanks.

(Applause)

Serge: Thank you, Marko. Getting these meetings together takes a lot of effort, not only from the side of the RIPE NCC but also from the hosts, and on this occasion that is why we have three hosts. Let me welcome our third host, from LTFE, Andrej Kos.

ANDREJ KOS: Distinguished guests, friends, colleagues, ladies and gentlemen, on behalf of the Laboratory for Telecommunications and the Faculty of Electrical Engineering at the University of Ljubljana, I would like to extend a warm welcome to all of you.

It is a great honour and pleasure to be able to greet you at this RIPE meeting, co-hosted by the Laboratory for Telecommunications here in Ljubljana. As hosts of this meeting, we are very happy that Ljubljana becomes the Internet capital for the next five days. It is probably no coincidence that we have gathered in Slovenia today. Slovenian companies and institutions have over many years proven to be an important link in the chain of technology development in the field of ICT. We have established an array of test centres, comprehensive training programmes and networks connecting companies and universities, scientific institutions and government bodies. Slovenia belongs among the few countries in the world that facilitate generic telecommunication development and are capable of designing, producing and exporting telecommunication systems, services and solutions. It has a strategic geographic location at the crossroads of four macro-regions and two major European transport corridors, and a highly demanding media market. This makes Slovenia an ideal place for nationwide test beds of systems and services, such as IPTV in the past and IP version 6 lately. However, all of our strengths and potential can only succeed when we are integrated into cooperation with international initiatives and organisations, as well as with excellent foreign researchers and scientists, some of whom have gathered in Ljubljana today.

We are looking forward to new challenges and tasks we will solve together. Let me conclude by thanking all of you who have gathered to this meeting, the presenters, the organisers and the participants. We wish you all a successful event and a pleasurable stay in Slovenia. Thank you.

(Applause)



Serge: Now, we have one more speaker in our opening plenary. Our next keynote speaker is Franc Dolenc. Franc currently holds the position of director of the Slovenian regulatory authority, APEK, the Post and Electronic Communications Agency of the Republic of Slovenia. Mr. Dolenc started his career as a research engineer in 1979 at Iskratel, the main Slovenian provider of telecommunications equipment for fixed and mobile telephony, next-generation networks and network management. Later he was chief executive for systems planning, then deputy director; from 1993 until 2008 he was technical director for research and development, and in 2008 he founded his own research company, whose main business is interactive solutions. He is also a member of the Information Society Technologies Advisory Group, which advises the European Commission on the overall strategy for ICT.

FRANC DOLENC: Thank you very much for this opportunity to be with you here. After such good speeches as we have heard in the last minutes, it is not easy to say much that is new, but I would like to share with you a few thoughts from my work in ISTAG and a few thoughts from what I have learned in the last six months at the regulatory agency of Slovenia.

The messages which we developed last year in our advisory group, ISTAG, were that we must take special care in Europe to bring ICT into the mainstream of development, not just of ICT but of all other domains; that we must work hard to find partners in areas like medicine, education and the environment, to help them envision the new generation of services which would simplify and strengthen the development of society; in short, to move from technological development to balanced societal development. So we are all looking forward to making sure that ICT people become much more involved in other domains.

And I think this is a great challenge, because it dramatically changes the way technology development has gone until now. Until now, we were doing the technological push; from now on we must wait for other domains to pull something out of our work.

At the Slovenian National Regulatory Authority, we are now confronted with a similar change. We are finishing the period, maybe the last ten years, when all our focus went into preparing the national operators to talk to each other, opening up competition, developing new services, and making sure that the monopolistic environment which ruled telecommunications in the whole of Europe for the last 50 years was opened up for new initiatives, business ideas and new service ideas.

Now, this period is developing fine, but we are confronted, and not just in Slovenia but all around Europe, with a lack of new investment opportunities. There is a real pressure coming from the question: who is going to pay for the next-generation networks? We are talking about quite a lot of money. The European estimate is €270 billion; for Slovenia it would be roughly €1 billion to bring next-generation access to every household. It is a completely new network: except for some copper and fibre and maybe some underground ducts, almost nothing of the existing networks would remain in a network that fulfils such high bandwidth and quality-of-service requirements.

On the other side, bringing Long Term Evolution-based mobile Internet, which is equally important as the fixed one, is maybe not one billion but a few hundred million as well. So how do we approach the next-generation build-up? We are facing completely new challenges. We have to find new coalitions of investment and governmental interest, and we have to find interest on the end-user side, so it is really the beginning of the highway era of information society development. Only with such development of the highways, which for sure will be based on IPv6, can we start talking about the development of the next generation of services.

When we talk about the next generation of services, I would like to bring your attention to three simple words which we have been using in ISTAG about the kind of services and solutions we need: smart, sustainable and inclusive. Services which would really fulfil the needs of societal change, reducing cost and increasing satisfaction of life for the whole community of Europe, with special care about inclusion: we must make sure that our services will be equally well accepted by the most talented but also the most underprivileged. I am absolutely sure the foundation which your expert team is putting into the next-generation architecture is the best possible guarantee that the future services which we will hopefully build up in the next eight to ten years will really be something as dependable as the society of Europe needs. I wish you very good work, beautiful Slovenian weather, hopefully a little bit of sun, and I hope to see you soon at the next opportunity.

(Applause)

SPEAKER: Thank you very much. I think that concludes the opening part of this plenary session. Do I have any slides? Anyone know? I do. While they are loading up, I should add that the last time I looked at the attendee list there were about 434 attendees, so again the numbers are doing well and will probably rise through the week. We also had 56 Slovenians registered, which I think is a really good turnout considering the population of the country. In fact, from the numbers I heard, more Slovenians turned up at this meeting than attended the last 16 RIPE meetings combined. So well done, guys, and thanks for proving it's worth taking these meetings to different regions. It's up to 440; it goes up every five minutes. We also have a lot of first-time attendees; I guess a lot of those Slovenians make up those numbers, but there are quite a few others. If you see people walking around with a little green sticker on their badge, it means they are a first-timer, so you may want to introduce yourselves and help them out a little bit.

Verilan are helping us out with the wi-fi this time around; we are seeing how they operate a system and we are interested to hear what you think of the wi-fi this week. I think most of you look like you are already on the network, so I don't need to go into this too much.

That is it.

We do have a few sponsors; I guess we had those going up in the background a couple of minutes ago. There they are. Of course, we ask you for a registration fee, but that only covers part of the costs, and the social events are not paid for with NCC money or those registration fees. They are paid for by the sponsors, so you can thank all those sponsors for contributing to all the socialising that we will be doing throughout the week. So there they are, the RIPE 64 sponsors.

(Applause)

So I am quite new at this. Rob, have I left anything out? I guess a couple of other things before I get off the stage. One is, we do appreciate interactivity. We do want you to ask questions, and you will notice that there are microphones spread around the room. For the benefit of the other people in the room and for those watching on the webcast, please clearly state your name and any affiliation that you'd like to be known by; I know a few of you work for different affiliations and so on, so please make that clear.

I am just about to hand over to Todd. As I think Rob may have explained earlier, since the last couple of meetings we now have a dedicated plenary Programme Committee, and these guys have been working really hard on getting the presentations into the plenary sessions. That is aside from the Working Group presentations, so it's the presentations you are going to see for the rest of today, tomorrow and some of the presentations on Friday. What they asked me to do, at the meeting we had in Vienna, was to get some more feedback from you, to understand what you want to see and what interests you here, and also the people watching on the webcast. So what we decided to do was get a rating system going, just for the plenary. To make this work, you need to log into the RIPE NCC website via your RIPE NCC Access account. All the information on how to do that is on the website; it's pretty straightforward. When I am logged in and I go to the programme plenary sessions, I end up at this page and I now have the ability to rate the presenters. Greg, I already gave you a ten, but I am going to have to take that back down to zero until I see the presentation, and then I will give an honest rating. You can only rate once per attendee, and you do need to be logged in.

It's only for the plenary presentations, and the results are only going to go to the PC; we are not going to publish these results in any way, who got the highest or lowest, but the PC will look at these ratings and it will help them gauge what you are interested in. It's a trial initiative, so we will see how it goes this time around and probably work on it for the next meeting. Do get involved and take the time to log in if you have a RIPE NCC Access account; otherwise you can create one. This information is all on the website. Another feature that we added, again to help you meet each other, is the ability to add a photo and/or e-mail address if you so wish. Again, this is all optional. If you want to carry on the way you have been for the last 63 meetings, that is also fine; you don't have to get involved in any of this. If you want to, the option is there, and as you can see quite a few people have already uploaded photographs and are willing to give out e-mail addresses. You do need to be logged in to see this information, and if you want to hide it you can click here. Again, it's another step for us in trying to help you meet each other, especially the newcomers who are here. We will consider doing other things in the future. I would be interested in hearing your feedback about this, and I am sure some of the NCC staff would like to hear your feedback too.

As part of the RIPE NCC's 20th anniversary celebrations, we are running a quiz throughout the week. I am sure a lot of you have already picked up the piece of paper that is available at the registration desk. There are 20 questions covering the 20 years; you have to put a year to each question. You have about four days to complete it; entries need to be in by, I believe, 2:00 on Thursday. It's on the sheet. The winner will be announced at the RIPE dinner on Thursday night and will receive an iPad. So again, if you go to the registration or the info hub, they will have the information for you and some sheets that you can fill in.

I think that is all I wanted to say. Did I forget anything? OK. So with that done, Todd, could I please invite you to the stage. Todd Underwood is the current Chair of the plenary PC (are you the Chair? I don't know if they have one), so, Todd Underwood of Google.

Todd Underwood: Thanks for coming. I have just a couple of introductory comments about the Programme Committee, and then we will get started with the presentations for this plenary session. RIPE has not always had a programme committee; it has just started, as Serge pointed out, and our purpose here is to make the process of submitting content to the plenary open, transparent and welcoming, and hopefully to create a process that produces high-quality content. Failing that, a process that fills the space between the coffee breaks. High quality; if we can't achieve that, then fill the space. Sort of in that order. We have done somewhere in between the two this time; we will have to look for your feedback on that.

The various members of the programme committee are all here, and in the future we should probably make them wear special hats to be more distinctive, so you can find them and give them feedback on which presentations you liked and did not like. The programme committee members are sitting in a bunch of places: Filiz is over there, Sander is over there; I don't know where Brian is; Andrei; Daniella? And Jan Zorz is our local representative; we usually have a local representative.

A couple of points I want to make about submitting content. Please do it, and please do it as early as possible for the next time. One point that was not clear in the past: we are only going to vote on presentations that have draft slides. We found that presentations without slides are impossible to rate because we have no idea what you are proposing, so please submit draft slides. And the last point is that the programme committee will have one slot open for election at almost every meeting for the indefinite future. So there is going to be a programme committee election; this being RIPE, I believe we will hold that on Friday morning. We will send out a mail announcing the process for nominating yourself for the programme committee, but we have one slot: Daniel is leaving us because his term is up, so if you would like to join the programme committee and do some of that work, we would appreciate it. With that said, are there any questions about all of this bureaucratic nonsense? Excellent.

So, we have Greg Hankins presenting now. He will be telling us about pushing the limits: a perspective on router architectures.

GREG HANKINS: We tend to talk a lot about protocols and software issues at network conferences, and we rarely talk about hardware and some of the underlying design work that is going on. So I appreciate the opportunity to bring you a perspective more from hardware architecture than from software and protocols, and I had a lot of fun working with our hardware engineering team, so I really appreciate being able to bring some of the things they are working on to you.

The key challenge we are really facing, from a lookup and forwarding perspective: I like to remember the old rule that solutions are good, fast or cheap; pick any two. That is the situation we have. We are trying to design enormous, dense and complex tables that handle complex multi-field lookups; we have the v4 deaggregation uncertainty and the IPv6 adoption curve that make it hard to predict how big we want the tables to be. We don't know how big is appropriate.

On the other hand, we are trying to design very high density line cards. Although 100 gig has come out, we are still in the generation of 10-gig-based architectures, and we are going to see 100-gig architectures in the next couple of years that provide high-density 100 gig. I hope we will have eight-port and maybe even higher density cards in the next couple of years.

100 gig is an interesting architectural challenge, because just from a packet processing perspective we have to look at a packet about every 6.72 nanoseconds, and that really changes the scale of lookup times we have to deal with as the packet goes through the hardware. So we have to make a lot of choices between system design, the available component technology, the forwarding architectures and port densities, and it gets really hard if we throw in the kitchen sink: making things too complicated reduces the port density of the routers we are making.
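The 6.72-nanosecond figure can be reproduced with simple arithmetic; the sketch below assumes worst-case minimum-size Ethernet frames (64 bytes) plus the 8-byte preamble and 12-byte inter-frame gap that also occupy the wire:

```python
LINE_RATE_BPS = 100e9        # 100 Gb/s line rate
MIN_FRAME_BYTES = 64         # minimum Ethernet frame
OVERHEAD_BYTES = 8 + 12      # preamble + inter-frame gap

bits_per_packet = (MIN_FRAME_BYTES + OVERHEAD_BYTES) * 8   # 672 bits on the wire
packet_time_ns = bits_per_packet / LINE_RATE_BPS * 1e9     # arrival interval
pps_millions = LINE_RATE_BPS / bits_per_packet / 1e6       # worst-case packet rate

print(f"{packet_time_ns:.2f} ns per packet")   # 6.72 ns
print(f"{pps_millions:.1f} Mpps")              # about 148.8 Mpps
```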

If you look at just a basic forwarding architecture, every router looks more or less like this. We have system RAM that typically lives on the management card and line card. The system RAM is scaling pretty nicely; it scales like your PC RAM scales, so we are not really worried about any scaling issues there. With the packet lookup memory we are trying to do at least n times 150 million lookups per second for 100 gig, and for the buffer memory on a 100-gig port we need at least 200 gigabits per second of transmit and receive bandwidth into the memory.

Some memory technology first and then some very cool ASIC technology coming in the next couple of years.

The real challenge with memory is that packet rates have greatly exceeded memory random read rates, that is tRC, the row cycle time. What we want is one-nanosecond random read rates, and also fast read and write rates for buffer memory. But there are constraints inherent in RAM. RAM is random access memory, that's correct, but that doesn't mean you can continuously read and write the same location. We have something called bank blocking, which means you have to wait a little bit of time before reading or writing a particular location again; so you can't actually, as I said, read the same place over and over. What this does is add forwarding latency: we have to buffer packets as they go through the box. I hope you can read the chart at the very bottom, but the solutions we have today are around 50 nanoseconds for commodity DRAM; we can use something called RLDRAM to get about 15 to 20 nanoseconds read latency. What we want is one nanosecond, and as you can see, the 6.72-nanosecond 100-gig packet rate is a real challenge, because we need single-digit nanosecond rates from memory.
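One common way around bank blocking is to interleave accesses across independent memory banks so that no single bank is revisited within its row cycle time. A rough sizing sketch, under assumed numbers (a ~50 ns commodity DRAM tRC, not figures from the talk):

```python
import math

TRC_NS = 50.0      # assumed commodity DRAM row cycle time
PACKET_NS = 6.72   # packet arrival interval at 100 Gb/s, minimum-size frames

# Each bank can only be touched once per tRC, so spreading consecutive
# accesses round-robin across ceil(tRC / packet time) banks sustains line rate.
banks_needed = math.ceil(TRC_NS / PACKET_NS)
print(banks_needed)   # 8
```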

From a requirements standpoint we want something that is fast and big. We have to store a lot of things in the packet lookup memory and we also have to process packets very quickly. 10 gig was only 15 million packets per second; I say only because it was really easy for us to do a packet every 67 nanoseconds, and if you remember, the DRAM access times were about 48 nanoseconds, so that really falls within the feasible clock cycles. Put multiple packet processors on a card and we have to do multiple lookups per packet, so a layer 2 and a layer 3 field and maybe some access lists or QoS policies or something like that. Not only are we dealing with a packet every 6.7 nanoseconds but also multiple lookups. In terms of the size requirements, the tables are getting bigger and we are putting more things in the hardware that have to be stored in the tables: MAC addresses, the FIBs, unicast routes, whatever you can think of, we have to put in the packet lookup memory, and then we also have to deal with what we buffer. So, from a buffering perspective, a gig of RAM is only 80 milliseconds at 100 gig line rates and that is not very much. What we want is something like an elephant cheetah that is big and fast, but that doesn't actually exist, so we have to build things out of the available technology that we have.
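
The "a gig of RAM is only 80 milliseconds" claim is easy to verify: buffer duration is just buffer size divided by the line rate draining it. A minimal check:

```python
# How long a buffer of a given size lasts at a given line rate.
def buffer_ms(buffer_bytes: float, line_rate_bps: float) -> float:
    # bytes -> bits, seconds -> milliseconds
    return buffer_bytes * 8 * 1000 / line_rate_bps

print(buffer_ms(1e9, 100e9))  # 80.0 ms: one gigabyte at 100 gig
print(buffer_ms(1e9, 10e9))   # 800.0 ms: the same gig at 10 gig
```

This is why buffer memory bandwidth, not just capacity, becomes the bottleneck as line rates go up.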

If you look at a basic overview of memory technology, there are two architectures: TCAM and RAM based solutions. TCAM's primary function is to do a search. It has very reasonable access speeds, but the problem is that it uses a lot more power, costs a lot more, and the sizes are a lot less compared to, say, commodity DRAM. Each of these has advantages and disadvantages in terms of size, cost per bit, power consumption and the road map that we have. We have quite a good road map in terms of memory technologies that will give us a lot of options.

There are some very interesting memory technologies that are very specialised. For example, graphics memory has fantastic bandwidth and throughput into the memory because graphics is very intensive, but the production cycle on that component is very short. As an example, go buy a graphics card this year and then try to buy the same one six months or a year from now: you can't do it, because the graphics industry is moving so quickly that those cards are obsolete almost by the time you buy them and walk out of the store. So we can't do those kinds of things on cards that we put in production for five or seven years; we can't use components that move that quickly.

Longest prefix matching mechanisms: basically there are two options, TCAM based and some sort of RAM based combination solution. TCAMs are very simple to deal with because you ask the CAM, is this thing in the CAM, and it tells you yes or no. If it says yes, it gives you an index into the next-hop table. Search latency is very fixed, very easy; you don't have to worry about any fancy programming, look it up in the CAM and it's very easy. However, it does have a lower capacity and, as I mentioned, power consumption and cost are much higher compared to an SRAM based solution, where you have to basically implement a tree and then also a special ASIC to do the tree operations: insertion, deletion and lookup. The real problem is that it all depends on how the tree turns out, so you could have a routing table that is very evenly distributed or lumpy and creates a tree that is lopsided, so you have to worry about that, and a lot of search algorithms are proprietary and patented, so we can't go and build a tree without worrying about the patents, and that creates another challenge. What you have to do is implement everything in an ASIC to insert into and search the routing table, so it does scale a lot better, but there are trade-offs.
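
To make the lookup semantics concrete, here is a minimal longest-prefix-match in software; the FIB entries and next-hop names are made up for illustration, and real routers of course do this in TCAM or a hardware trie rather than a linear scan:

```python
import ipaddress

# Hypothetical FIB: prefix -> next hop. A TCAM returns the most
# specific (longest) matching prefix; we emulate that here.
fib = {
    ipaddress.ip_network("0.0.0.0/0"): "next-hop-A",    # default route
    ipaddress.ip_network("10.0.0.0/8"): "next-hop-B",
    ipaddress.ip_network("10.1.0.0/16"): "next-hop-C",
}

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    # Among all covering prefixes, pick the longest one.
    best = max((n for n in fib if addr in n), key=lambda n: n.prefixlen)
    return fib[best]

print(lookup("10.1.2.3"))   # next-hop-C (matched by /16, /8 and /0)
print(lookup("10.9.9.9"))   # next-hop-B
print(lookup("192.0.2.1"))  # next-hop-A (default only)
```

The SRAM/tree approach in the talk replaces this scan with a trie walk, which is where the lopsided-tree and patent concerns come in.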

The architecture I think we will see in the future is a divide and conquer architecture: use a bunch of parallel RAM or DRAM combinations so we can do the searching in a reasonable time. What we are really going to want to do in the future is to actually integrate the lookup memory into the packet processing ASICs; the technology that we have today doesn't allow you to do that, but it will in the future. We have to have these components available to us for at least five years. It's a huge issue when we have a component change on a line card, where we have to integrate a new component on an existing card; it creates a lot of issues. We want to avoid that.

For buffering, we do have what is called DDR4, which will be a faster memory technology coming, but again, the same as with lookup, I think eventually we will see custom buffering chips that integrate all the proprietary buffer management techniques and the actual RAM into one chip. It's feasible to do that now, and I think in the future we will see manufacturers moving away from just using commodity DRAM and building custom buffer chips.

About ASIC technology. The challenge is that it's really a multi-protocol world; we have a lot of different protocols and packets and things we are dealing with. Think of all the combinations: layer 2, v6, unicast, PBB, now TRILL and MPLS, and who knows what is coming. We have to implement lookup functions and forwarding functions for all those protocols in hardware. The other thing that is really complex is link aggregation and load balancing on all those links. We have to look deep into the packet. In terms of MPLS, for example, it doesn't make a lot of sense to load balance on the label because there is not a lot of diversity there, so you actually want to look past the layer 2, past the IP, past the MPLS and into the layer 2 or 3 VPN payload to get some diversity, to get even distribution, and that takes a lot of processing time. We are doing more things in hardware with accounting, and especially with some of the Ethernet OAM protocols with hardware time stamps, and you want to have a ton of counters. So for everything we want to count we have to allocate some place in the hardware to store that counter, and we also have to be able to increment that counter.
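
The load balancing point can be illustrated with a toy hash function; the flow tuples and field choices below are invented for the example, but the effect is the same one the talk describes: hashing only on the outer label sends everything down one link, while hashing on the inner 5-tuple spreads flows out.

```python
import zlib

# Toy link selector: hash some packet fields, pick one of n links.
def pick_link(fields: tuple, n_links: int) -> int:
    key = "|".join(str(f) for f in fields).encode()
    return zlib.crc32(key) % n_links

# 100 hypothetical flows inside one LSP: same src, varying dst/port.
flows = [("10.0.0.1", "10.0.1.%d" % i, 6, 1024 + i, 80) for i in range(100)]

# Hashing only the outer MPLS label: every packet looks identical.
outer_links = {pick_link(("mpls-label", 100), 4) for _ in flows}
# Hashing the inner IP 5-tuple: flows diverge across links.
inner_links = {pick_link(f, 4) for f in flows}

print(len(outer_links))  # 1: a single link carries all the traffic
print(len(inner_links))  # several links are actually used
```

That diversity is exactly why the hardware has to parse past the label stack into the payload, at real processing cost.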

Integration. That just means putting more stuff into an ASIC; the more stuff we can put into a chip, the more scalable we make the system. The process geometry and the packaging limit what we can do today. The more chips we use, the higher the power consumption, and more components mean a lower overall mean time to failure, so it creates challenges for us. What we are doing today uses 45 and 32 nanometre technology, we see moving to 22 nanometres as feasible, and we have higher speed ASIC interconnects: right now it's between 3.125 and maybe 10 gigs, but we have 28 gig ASIC interconnects coming which will allow us to do a lot more. Our 100 gig card actually uses over 100 integrated circuits, so 100 components on one card, and that is becoming unmanageable for us from a design perspective.

If you look at Moore's Law, it turns out it's still holding true: the number of transistors doubles about every two years, and if you look at the process geometry that has been available, it's held mostly true, plus or minus a couple of years. A lot of the design aspects of ASICs are proprietary to a vendor, so you never want to ask a vendor certain things about their ASICs, but I can give you an idea. Based on the number of transistors and the frequency, you can see in the past ten years we have gone from 130 nanometre technology with 0.52 billion transistors to now designing technology with 4.8 billion transistors, so that is quite a jump. We are increasing the frequency and the amount of memory that is on the ASIC as well.

This is the real cool part that I find interesting to talk about. Right now, we have what is called a 2D single chip process: you have a chip, there is one ASIC in the package and it does the one function. What we are moving towards is called 3D packaging, where we have the ability to stack multiple ASICs in the same package. What that means is that we can have a packet processing ASIC and a memory ASIC in the same chip. And if you think about the interconnects, with a 2D single chip package you have chips placed beside each other and copper traces that go between them; with a 3D architecture the traces are in silicon and they can be just a couple of microns thick, so from a latency perspective that gives us a lot faster signalling between the chips, which allows us to do a lot more things. Obviously, there are some problems, because every solution is a trade-off, and the heat density in particular is going to be a challenge: when you have 2D ASICs they are spread over the board and easier to cool; when you have 3D, because there are more things stacked together, they generate more heat in one place that we have to dissipate. I think we can do it. There is a picture at the bottom of an IBM 3D chip; it looks like a regular one except there are multiple ASICs in that package.

So the current process technology, and this is why there is a delay between the process geometry and what we can use. As I mentioned, we are moving to 45 and 32 nanometre technology, but you can see from this table that 22 nanometre technology actually came out last year. The problem is that when a new process geometry is developed, the component vendor is focused on transistor density. Some of the Intel processors are using 22 nanometre technology for their CPUs, but we can't use that in our ASICs because we are missing what is called the logic library, the chip to chip signalling, memories and all these things that are developed after the general processors are developed. We will see 22 nanometre probably in the next couple of years as we are able to integrate those into our parts.

From an integration perspective, this picture shows you the challenge we have as ASICs become more complex: as we add complexity we have to decrease density. We have tried to keep that density line increasing, but it's getting challenging because the component density is so overwhelming. The top card is an oversubscribed layer 2/3 switch card; it's relatively easy to build those kinds of boards. If you look at the bottom one, it's incredibly dense in terms of the ASICs that we have to put on there, and we had to use a motherboard and a daughterboard that goes on top that also has ASICs; it's incredibly difficult to integrate. We have signal integrity and routing, which is how the components talk to each other, and there are a lot of design issues in trying to make sure that the signals are able to talk to each other and that there is no corruption, and we have power and heat dissipation issues. So it's a very challenging puzzle to solve.

So there are some alternatives to FIB scaling. As I mentioned before, Moore's Law is holding true, and the industry predicts it will for several years using different mechanisms. But there are smarter things that we can do. There are a couple of papers and a couple of ideas that are circulating. The one paper from Arizona is basically a very academic paper on really common stuff; it's written by PhDs, overcomplicated, but it basically talks about simple things that we can do to aggregate the FIB in software so we put less stuff in the hardware. The other one is an IETF draft on simple aggregation, and that says if you have a default route and you have a prefix that comes in with the same next hop, just suppress it from the FIB, so it's really simple. There is a whole other industry called OpenFlow and software defined networking that could totally change how forwarding tables are populated; I think there is a lot of hype right now, but it's doing something. And there are protocols like LISP which abstract the forwarding.
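
The simple aggregation idea can be sketched in a few lines. The routes and next-hop names here are invented for illustration; the point is only the rule itself: a prefix whose next hop matches its most specific covering route adds no forwarding information, so it can be left out of the hardware FIB.

```python
import ipaddress

# Hypothetical RIB: prefix -> next hop.
rib = {
    "0.0.0.0/0": "peer-1",
    "10.0.0.0/8": "peer-1",   # same next hop as the default: suppressible
    "10.1.0.0/16": "peer-2",  # different next hop: must stay in the FIB
}

def compress(rib: dict) -> dict:
    """Drop prefixes whose next hop equals that of their covering route."""
    nets = {ipaddress.ip_network(p): nh for p, nh in rib.items()}
    fib = {}
    for net, nh in nets.items():
        covers = [c for c in nets if c != net and net.subnet_of(c)]
        if covers:
            parent = max(covers, key=lambda c: c.prefixlen)
            if nets[parent] == nh:
                continue  # covered with the same next hop: suppress
        fib[str(net)] = nh
    return fib

print(sorted(compress(rib)))  # ['0.0.0.0/0', '10.1.0.0/16']
```

Forwarding is unchanged: a packet to 10.2.0.1 now falls through to the default and still exits via peer-1.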

I say five to seven years, and I think that is a good minimum to build high density 100 gig cards for. We are trying to figure out how big to make the hardware tables; they have to scale for the global routing tables. The requirements we are dealing with, trading off complexity and cost, the technology we have today and what is coming on the road map, all have to be chosen very carefully, and the challenge with hardware is that once you put something in an ASIC you can't change it. The software is really sort of easy because you can go and rewrite it, but once something is put into an ASIC, it's there for eternity. With the components that are coming I think we will be able to scale router density to meet the requirements that you have. Multi-100 gig packet processors, custom DRAM packet buffering, DRAM based lookup memory, and the 3D technology with ASICs and memory in the same package will give us a lot of flexibility and options, and that is my story.

Questions?

AUDIENCE SPEAKER: Geoff Huston. We have currently seen the v4 routing table grow at around 15 to 20 percent per year, which within the bounds of Moore's Law seems comfortable. As a vendor, what growth rate would you consider to be frightening?

GREG HANKINS: I would say a growth rate, well, we have to think about how long it takes for a vendor to make a board; it takes us probably 18 months to come out with a board. So really what I am scared of is a growth rate that grows so rapidly that we don't have a chance to come out with new hardware first; that is when we are in a bad position. So we are trying to stay ahead of the curve, all vendors are. Just by design, routers have multi-million entry FIBs; one million, two million seems to be a good number, but we are just taking what we can do with how big we think things should be, and we are guessing.

GEOFF HUSTON: So 90 percent would be a worry?

GREG HANKINS: Yes. I think if we saw anything greater than 40 or 50% I would be really worried.

GEOFF HUSTON: We got to 400,000 entries in the routing table a couple of weeks ago so you guys are all professionals out there, you really are hammering this problem. Thank you.

(Applause)

SPEAKER: You guys are very easy on the presenters. This is good. I encourage future presenters to come up and pepper their presentations and speeches with false facts. There were seven errors in Greg's presentation that were carefully inserted on purpose and none of you caught them. Or were they just honest mistakes? Next is Richard, who is going to talk about MPLS Auto-Bandwidth; his slide format is carefully borrowed and it's my fault.

RICHARD A. STEENBERGEN: So, I am going to talk about MPLS RSVP Auto-Bandwidth.

So let's start with a quick recap on what MPLS traffic engineering is and why you care about it. Classically, the way a network has worked is you use an IGP to figure out the best path using link costs: you have a circuit from here to there, you assign one a link cost of 10 and another of 20, and a shortest path first algorithm runs and finds the lowest cost path based on those metrics. Traffic engineering takes this concept and adds an additional constraint: find the lowest cost path that also has available bandwidth. To do that, what you need to do is measure the bandwidth that is being used between points on the network, track which circuits that bandwidth is riding over using a reservation system, and then deny additional reservations when the remaining bandwidth is insufficient and hopefully find a different path. You might end up on a higher cost link; in order to find sufficient bandwidth you might have to go up 5 milliseconds, but that is better than congesting a link.

The way this is done with MPLS is that the bandwidth is measured per MPLS label switched path. A label switched path is a one-way virtual circuit from point A to point B, and each is configured with the amount of bandwidth that will be travelling across it. The RSVP protocol reserves the bandwidth for that LSP on each individual circuit and removes that amount of bandwidth from the total pool. That's a quick recap of how it works.

So, you need to understand how you configure the bandwidth on a particular LSP: how do you go about determining what that is, given that IP networks are dynamic and packet switched? There are two main ways: the first is off-line calculation and the second is Auto-Bandwidth. Off-line calculation is something that happens off-line, outside the router, usually by some third party tool or script. This is how RSVP was rolled out originally and is still the way it's commonly used by a lot of carriers today; a lot of early adopters and large networks still do this. Auto-Bandwidth is a system of doing this modelling on the router itself: the router does a calculation, measures how much traffic is going across, and uses that to populate the data. Really what we are talking about here is Auto-Bandwidth: how to make it work and what the issues are with it.

Some more analysis of off-line calculation versus Auto-Bandwidth. The advantage of off-line calculation is you can implement any kind of algorithm; there is very expensive third party software that you can buy from a wide variety of vendors that will do all this modelling and planning based on what it thinks your bandwidth will be, calculations on disaster planning, what happens if I lose this link, what will the network look like, and it will do the calculation and tell you. The problem with that is you have to either write the software yourself, and it's fairly complicated, or buy it, and it's fairly expensive. The advantage of Auto-Bandwidth is it can respond to changes a lot more rapidly. Off-line calculations expect very stable traffic patterns; they say if this circuit peaked at 1.5 gigs yesterday we expect it to do so again today, following a very simple, predictable curve. Any kind of unusual traffic spike tends to screw that up, so that is a severe disadvantage. Auto-Bandwidth is also a lot easier to implement because it's there: turn on the knob on your router and it's free, but you are constrained to the algorithms that your vendor provides to do the calculation.

So here is kind of an example of different intervals: a 24-hour measurement cycle versus a 1.5 hour one. Obviously, the more granular you can get, the more efficient you are: you are not wasting bandwidth reserving for something that doesn't exist or for bandwidth that you might spike to, and you are able to adapt to changing conditions a lot more quickly.

So, how does Auto-Bandwidth work? Well, it's a very router specific behaviour. There is no protocol involved; the protocols are involved in signalling this bandwidth, but the actual Auto-Bandwidth process itself runs entirely on the router, and it's router specific how it wants to do that calculation. It just so happens that Cisco and Juniper implement it in much the same way, written by the same people.

So Auto-Bandwidth performs the following basic steps. You have what is called a statistics interval, which is an amount of time that you measure bandwidth over. So for example, you might configure a statistics interval of 60 seconds. That is the raw sample rate at which the router looks at the LSP: over 60 seconds it says, between this counter reading and that one, how much has my counter increased; that is my average rate across the 60 seconds. Then you have an adjust interval, which takes multiple sample intervals and asks what was the largest sample seen during that time. So for example, you might have an adjust interval of five minutes, which is five different 60-second samples, and the largest value seen during that is going to be, at my adjust point, the bandwidth for the next interval. So, if the change is larger than a user configurable amount, an adjust threshold, which might be 5 percent or whatever you choose to configure, then you say, now we are going to resignal this LSP via RSVP. That means you are going to tear down the old LSP and build a new one; make-before-break tries to do this very smoothly, signal a new one and shift traffic onto the new one so you don't have a disruption to the network. But every time you resignal bandwidth, because MPLS has no concept of changing the bandwidth of an existing LSP, you are tearing down the LSP and building a completely new one dynamically.
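
The steps above can be sketched as a toy model; the interval lengths, threshold and traffic samples are example values for illustration, not vendor defaults:

```python
# One adjust interval of the Auto-Bandwidth machinery described above:
# take the max of the statistics-interval samples, and resignal the
# LSP only if the change exceeds the configured adjust threshold.
def adjust(samples, current_reservation, threshold_pct=5):
    candidate = max(samples)  # largest sample seen in this interval
    change = abs(candidate - current_reservation)
    if current_reservation and change / current_reservation * 100 < threshold_pct:
        return current_reservation, False   # below threshold: keep as-is
    return candidate, True                  # tear down and resignal

# Five 60-second samples (Mbps) within one 5-minute adjust interval:
res, resignalled = adjust([900, 1200, 1100, 950, 1000], 1000)
print(res, resignalled)   # 1200 True: a 20% change forces a resignal

res, resignalled = adjust([990, 1010, 1005, 998, 1020], 1000)
print(res, resignalled)   # 1000 False: only a 2% change, no resignal
```

The second case is the point of the threshold: tiny fluctuations don't churn the LSP, at the cost of the reservation lagging reality slightly.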

So here is an example of when Auto-Bandwidth works really well. Here you see a bunch of different statistics intervals, and each bracket is an adjust interval. So in the first one it says, this is the highest point that we measured, so the next bandwidth reservation is going to be this; it sets that bandwidth reservation and traffic stays below it. That is pretty much what you are going for. Here is an example of where it doesn't work so well: if your adjust intervals are so wide that your bandwidth is changing within them. For example, here we measured a very high spike and made a high reservation, and then traffic drops off. The next time we measure based on what the highest point was in that cycle, and then traffic actually increases; that is what you don't want to see. You don't want traffic changing, due to whatever traffic is naturally going to do, quicker than your adjust interval can take care of it.

There is another feature to work around this, called overflow and underflow. In that third scenario, when traffic starts going above what the bandwidth reservation is, overflow detects a measurement higher than our reservation and artificially triggers the adjust interval sooner than it normally would: if the difference between the signalled and the measured bandwidth of the LSP is far enough off, it forces an adjust event. Overflow detects an increasing amount; underflow detects a decreasing amount, and you need both of these, because if your traffic is increasing you need to add more reservation, but if your traffic has decreased because it's moved elsewhere you need to pull that off your reservation, so you really want both.

The goal is to allow the operator to configure a long adjust interval and still react to changing conditions. A lot of vendors don't support underflow; Junos just added it as of 11.4, but any older code does not support underflow, only overflow.

Now we get into where it goes wrong and the problems you are going to see in this area. So RSVP is good at adapting to conditions such as circuit failures, anything where your network topology changes, but it's very bad at adapting to changing traffic destinations. If your traffic was previously destined for router A and is now destined for router B, that is something it's not going to handle very well; it's got to detect the change, because you are not moving the LSP, you are moving which LSP is being used, and that goes into the whole adjust interval cycle. For an IP network, if an EBGP path changes and all the traffic moves to router B, RSVP is going to have to go through an entire cycle to detect this. Or a circuit flaps in your backbone; for example, a US example, say your router A was Los Angeles and your router B was San Jose, and you lose a backbone circuit in the middle, which now makes it topologically preferable to go to the other destination. Even if you didn't lose an external peer, because your IGP changed, it affects your BGP calculation and that changes your traffic. Every time this happens, the old LSP has got to be sized down and the new one sized up, and what you are really trying to avoid is congestion on your network, but you have got this adjust interval delay working against you.

It turns out that overflow and underflow cause a lot of problems. Like I said before, if you are detecting overflow but not underflow, you are not reclaiming the bandwidth on your backbone, so you are now maybe creating inefficient routing, forcing something to go all the way around the world when it doesn't need to; the bandwidth is available, just the destination has changed. From my practical experience, it often doesn't create any real optimisation at all: it turns out that if you weren't going to exceed the minimum adjust threshold, you weren't going to resignal anyway, so it doesn't accomplish anything. You are not resignalling any less than you would have; you could have set your adjust interval lower to begin with instead of doing it this way.

Another big problem is that LSPs don't create themselves. Like a lot of other protocols, MPLS is not entirely automatic. There is no protocol that goes out and discovers where you need LSPs; you are supposed to build a full mesh between every device. So typically what happens is you either buy some commercial software or write something yourself. Or, in some cases, there are vendors that offer automesh, where you feed it a very simple list of routers and IPs and it will build a mesh from me to everyone that is not me out of that list. For example, in Cisco IOS you do that with an access list: it finds itself in there and builds the full mesh. But that also leaves you with no way to control specific LSP configuration; in the IOS case, if you want to remove a node you have to take down the entire ACL and every dynamic LSP that it generated. There are a lot of gotchas that come along with using those types of scripts to do it.

Another problem: you can't fit a large LSP down a small pipe. An LSP can only be moved as an atomic unit; it can't be split across two circuits. If you want to split the traffic you need two LSPs, and if you have a relatively large LSP, relative to the size of your circuits, you have got a packing problem. If you have three 6 gig LSPs going across two 10 gig circuits, that is 18 gigs going down 20 gigs of pipe, so you should be able to fit it, but it turns out that if you are looking at these as two individual circuits you can't. What will happen is six gigs goes onto one and six onto the other, and the third LSP won't have a home; it will have to find some other path or may not be fitted at all, and then you have got a serious problem: you can't map this LSP anywhere.
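
This is a classic bin-packing situation, and the failure mode from the example is easy to reproduce with a first-fit placement sketch:

```python
# Place LSPs (atomic bandwidth demands) onto circuits, first-fit.
def place(lsps, circuits):
    free = list(circuits)          # remaining capacity per circuit
    placed, homeless = [], []
    for bw in lsps:
        for i, cap in enumerate(free):
            if bw <= cap:
                free[i] -= bw
                placed.append((bw, i))
                break
        else:
            homeless.append(bw)    # no circuit can take this LSP whole
    return placed, homeless

# Three 6 gig LSPs across two 10 gig circuits: 18G into 20G, yet...
_, homeless = place([6, 6, 6], [10, 10])
print(homeless)                    # [6]: the third LSP has no home

# Split into nine 2 gig LSPs and the same demand packs cleanly:
_, homeless = place([2] * 9, [10, 10])
print(homeless)                    # []
```

The second call is exactly the parallel-LSP work-around described next: smaller atomic units pack far more efficiently.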

Another example: say you have a backbone with a mix of 10 gig and 2.5 gig circuits. A 3 gig LSP will never be able to fit down the 2.5 gig circuit, so you must make your LSPs smaller than the smallest circuit that you are going to be routing traffic across. So, the classic work-around is to create multiple parallel LSPs and load balance your traffic across them: instead of having three 6 gig LSPs, you could have nine 2 gig LSPs, and now you are able to more efficiently pack these 2 gig LSPs onto those circuits. Another problem is that no vendor's automesh supports any of this, and you can also run into problems such as the maximum number of ECMP paths: depending on your vendor, it can be as low as 4 or as high as 16, so if you split into large numbers you may find you are not able to have ECMP take advantage of them just because the router code doesn't support handling that many.

You have also got to look at how Auto-Bandwidth behaves under stress: what happens when RSVP can't find any bandwidth in your network. Just consider the previous example where that third LSP needs a home; say you don't have anywhere on your network to put it. It's not always clear what is going to happen, and it depends on the vendor, and in some cases on different code versions within a vendor. Sometimes what will happen is the LSP will simply not be updated even though traffic has increased: say, for example, a 2 gig LSP that has grown to 3 gigs, it will say, I can't update, I am going to leave it at 2. Even though your RSVP is only reserving two and you are using three, you are potentially congesting a link, and this is a bad situation you want to avoid. Under other conditions the LSP will get torn down. So think about what happens in a parallel LSP scenario when this occurs: say you have 8 parallel LSPs and one of them can't reserve the bandwidth that it wants, so that LSP goes down, all its traffic shifts over to the other 7 and their traffic levels increase. What is likely to happen is one of those won't be able to find a reservation either, and so sometimes you will see this pathological scenario: a collapse all the way down to one giant LSP that can't fit anywhere and can't resignal on its own, and you have to come in manually and clear it.

So, some of the techniques that people toss out for working around this. What if you have some kind of intelligent automesh script, on or off the router, that looks at the traffic and says, I detect my traffic level has increased a certain amount, just automatically fork the LSP. So you might set a threshold of 3 gigs where, if you reach it, you fork into two 1.5 gig LSPs. That could be done with a Juniper event script, and I have done some testing of this, but there are a lot of gotchas around this, too. One problem is that on a newly created LSP the Auto-Bandwidth value is zero even though the router will immediately put traffic onto it. It's the same scenario as before: you've got unaccounted-for traffic on your network, you might now have a 10 gig circuit that is full and you are dropping packets. That is a scenario you really want to avoid, and it can potentially lead to the pathological LSP collapse I was talking about earlier.

Another big issue is what happens when Auto-Bandwidth faces congestion on the network. The problem is Auto-Bandwidth doesn't know anything about congestion. So, imagine for some reason, such as any of these examples, that a link becomes congested and RSVP doesn't know about it. With packet loss on your network, your IP traffic is going to go down, and Auto-Bandwidth can very easily learn this new, lower rate, adapt, and think everything is fine, while you have a circuit that is full and dropping packets. If there is any kind of measurement issue you might have a 10 gig circuit where the traffic is trying to do 12 gigs and can't, so it only measures 9.5, and RSVP thinks, no problem, it's all fitting. There are routers that can't see layer 2 overhead; Juniper routers, for example, don't see it for the purposes of accounting. A 28 byte UDP packet in IP consumes 84 bytes on the wire by the time you add in your headers and inter-frame gap and padding and all the fun stuff, and the router doesn't know this. If you get a denial of service attack with a lot of small packets, you might have a circuit that the router thinks is doing five gigs but is completely full and congested, and RSVP is not going to know this and may keep putting additional traffic onto a link that is already full.
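
The 28-byte-to-84-byte inflation can be checked directly; this sketch assumes standard Ethernet framing (14-byte header, 4-byte FCS, 64-byte minimum frame, 8-byte preamble, 12-byte inter-frame gap):

```python
# On-wire footprint of an IP packet vs. what an IP-level counter sees.
def wire_bytes(ip_bytes: int) -> int:
    eth_overhead = 14 + 4                       # Ethernet header + FCS
    frame = max(ip_bytes + eth_overhead, 64)    # pad to minimum frame
    return frame + 8 + 12                       # preamble + inter-frame gap

minimal_udp = 20 + 8   # IP header + UDP header, no payload = 28 bytes
print(wire_bytes(minimal_udp))               # 84 bytes on the wire
print(wire_bytes(minimal_udp) / minimal_udp) # 3.0x inflation, invisible
print(wire_bytes(1500))                      # 1538 for a full-size packet
```

For full-size packets the overhead is only a few percent, which is why a small-packet flood fools the accounting so badly.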

So, some practical suggestions on how to use all of this and get something done. Most of the time RSVP-TE Auto-Bandwidth actually works pretty well, but there are a lot of different conditions out there where you need human intervention to kick it out of a pathological state. Don't bother using overflow, especially if you don't have underflow; if you do, you are running the risk of not reclaiming your bandwidth and being very inefficient in your bandwidth use. Be aware that there are a lot of bugs, and a large number of them for certain vendors who have recently started tweaking their Auto-Bandwidth code after not touching it for many years. There is always some calculation issue, or an LSP that thinks it's measuring 5 terabits, so you can't just stay completely hands off. For the most part it will run itself, but you still need careful monitoring and humans available to kick it and fix the issue when it happens.

So, my suggestion to everyone is to nag your vendors for the following. Make sure you have underflow support as well as overflow if you really want to use it; Juniper has already added that as of 11.4, but classic IOS, as far as I know, doesn't support it, and probably a lot of other vendors doing MPLS implementations don't either. What would really be smart is an adjust threshold minimum in bytes as well as percent. For example, you don't necessarily want to resignal every LSP that changes from 256 to 512, so you can set a minimum of 1 meg, but then if you have got a large number of LSPs you have got all these phantom minimum reservations floating around. You might not want to do a full resignal when the traffic changed 10 percent but only 50 megs, or something along those lines, if you have got a large network; so if you had an adjust threshold minimum in bytes you would be able to reduce a lot of unnecessary signalling events. There need to be better built-in automesh capabilities: the ability to selectively change the configuration of one particular automesh node doesn't exist for a lot of existing automesh implementations. Really, in order to make this scale, you need a feature to fork an LSP when it reaches a certain size, but even better, you need a way to kind of hide that. Just as an example, what you might want to do if you had a 20 gig LSP is have a system that dynamically forked new LSPs behind it, so instead of trying to signal that 20 gigs it would take it and break it into as many chunks as needed dynamically, but not show that to you in show route and not make it part of the ECMP calculations, with hierarchies and different architectures, so you would get the advantage of all these forked LSPs without having your entire screen scroll by with 50 entries just to have one route entry in a show route.
There needs to be the ability to set an initial bandwidth reservation for a new LSP, especially if you have any kind of script that is going to come along and fork a new LSP, and you need an operational mode command to manually adjust the bandwidth of an LSP. So if you see an LSP that is quite far off, you could do a request: this LSP, resignal with X bandwidth; change the bandwidth value manually, and have scripts and tools come along and do this auto-forking for you and still get the bandwidth values correct.

And that is basically it. Any questions?

SPEAKER: Any questions? You got off easy, thanks.

(Applause)

SPEAKER: Once you have concluded your reading of the entire text of each of the slides and you do have questions, please do send those in. We are now done with this session and we are at a break; we will return in just over half an hour for the next session, so thanks very much.