These are unedited transcripts and may contain errors.
MAT WORKING GROUP SESSION

CHAIR: Okay. Good afternoon. Welcome to the MAT Working Group session. We've got quite a packed agenda. We're probably going to have a bit of a discussion at the end. In a second we're going to have Amund talking about mobile broadband, but there are a couple of administrative things first. I'd like to welcome you on behalf of the chairs. We're one down because Richard decided he'd rather go on honeymoon, but we all congratulate him on getting married. I'm sure he's watching the webcast. I want to thank in advance the scribe, Jabber monitor and stenographer. We're probably going to have a bit of a discussion at the end, but for any Q&A, can you state your name and affiliation. We did put the minutes up online sometime in January. I don't know if anybody has had any comments; it seems silence is consensus. If anyone has anything to say, I'm happy to take it now. Okay, thank you.
We made one change from the agenda that was published online, which was that we moved Daniel's talk about the RIPE NCC measurement strategy for the coming years to the end. We're going to be followed by a BoF. It may be that the discussion runs into that a little bit. Hopefully we can leave a lot of time for the BoF. I've been told there is beer at the BoF. Hopefully the discussion won't get too heated when we add alcohol. I think now it's time for Amund to tell us about measuring the robustness of mobile broadband in Norway.
AMUND KVALBEIN: Thank you for the introduction. I'm Amund Kvalbein. I come from Simula Research Laboratory, which is in Norway. And there I run a project called Resilient Networks. We do network resilience. So that's where I'm coming from. What I'm going to talk about today is... oops. This slide is empty. It's not supposed to be. I'll tell you what's on it. The point is that networks are everywhere and they're important for society as a whole. In the old days, different critical infrastructures in society operated more or less independently of each other. If one failed, the others were still operational. Today, most of our critical infrastructures in society depend on networks in one way or another. So over the last few years, only in Norway, we've had examples of network failures that take out the water supply. Network failures that shut down hospitals more or less. We've had network failures that take down the phone network, failures that put planes on the ground and stop all the trains. As a society we're dependent on networks. So network resilience is a most important issue. This does not only go for fixed networks. This picture is from the 10th of June last year in Norway. This was a special day in many respects. This was a big vacation weekend in Norway, traffic was high. There was a major flood happening that day. Another thing that happened that day was that a bug in an SCTP implementation from a major equipment vendor led to a signalling storm in the mobile network of Telenor, which is the largest one in Norway. It took down the entire mobile network. Mobile voice telephony was down for 12 hours during this crisis. It was kind of a scandal in Norway. Telenor received a lot of bad attention, the government had crisis meetings and all that. This illustrates one thing, and that is that mobile networks are one of the things that we just depend on being there. And as a mobile operator, you can say, of course, but hey, this is not an emergency network, you cannot really depend on the mobile network always being there. But then again, customers say, well, we do. So give us a better network.
So if these networks are so important for us and for society, what do we know about their robustness? What do we know about how they fail and how frequently, and whether we can rely on them? Well, we don't really know that much. So, of course, the operators know very much about their own network, but as a customer, if I'm a customer who runs a service that is dependent on this network being up, what can I know about how much I can trust the network? Or if I'm a regulator trying to set up the rules for how this should work, what do I know about how this works? Well, I get reports from the different operators, but those are quite limited. And also if I'm an emergency organization or something that really needs network access, what can I know? And the answer is, well, you can't really know that much. You can go on experience, you can talk to the operators, but no one has, so far, tried to measure the reliability of these networks. And that is what we are trying to do. This is just a side note that in doing measurements of not only resilience but other properties of these networks, we are, of course, not the only ones doing this sort of thing. This session will be full of different measurement initiatives, M-Lab, RIPE Atlas; several regulators are organising these types of measurements for fixed broadband connections, and there are many other projects in academia and elsewhere.
Let me tell you a little bit more about our measurements then. So this started off a couple of years back, because last year there was a trial in Norway with electronic voting during the regional elections, and one part of this system, I won't go into details, but one part of this system is online, in the sense that when you come to your voting location, if the voting location is not online, able to speak with a server, then you cannot cast your vote and you have to fall back to manual voting. So the people in charge of this e-voting came to us and said, how sure can we be that our voting locations will be online on election day? And of course we had to say, we don't know, but we'd love to measure it. That's how we started the measurement project. There were ten municipalities participating in this trial, each of them has a number of voting locations, and in each voting location we placed a measurement node, which was back then just a normal Dell laptop connected to whatever fixed network connection was there, and in addition to three different mobile broadband operators in Norway. And we did active measurements, so ping through all of these connections, every five seconds, over months. And then we characterized how frequently ping packets do not come through. So over the next few slides I'm going to show you some results from those measurements. The first thing I'm going to show you is an illustration of another big event that we had in Norway last year. This is May 23rd, when there were two cable cuts in the backbone of the Telenor network, and this figure shows, here's Telenor, here's Netcom. Here are our measurement nodes. The green ones are operating nicely. The yellow ones have some packet loss. And the red ones are completely unusable. The white ones are not active. At quarter past three, there are two fibre cuts in the Telenor network within five minutes of each other. See what happens to the nodes. Basically, the entire Netcom network went down and it was down for hours, whereas the Telenor network had problems, you can see packet loss increasing and you can see some nodes going down, but they were more or less able to maintain some kind of service through it. So this was basically just an illustration of how this measurement can be used to see the extent of an event like this.
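To make the method concrete, here is a minimal sketch of the kind of measurement loop described: one ping per connection every five seconds, logging whether a reply came back. The target address, interval and log format are illustrative assumptions, not the project's actual code.

    import subprocess
    import time
    from datetime import datetime, timezone

    TARGET = "192.0.2.1"  # hypothetical measurement server
    INTERVAL = 5          # seconds between probes, as in the talk

    def probe_once(target: str, timeout_s: int = 3) -> bool:
        # One ICMP echo; True if a reply arrived within the timeout.
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), target],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    with open("pinglog.csv", "a") as log:
        while True:
            ok = probe_once(TARGET)
            log.write(f"{datetime.now(timezone.utc).isoformat()},{int(ok)}\n")
            log.flush()
            time.sleep(INTERVAL)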
Okay, so if we look at some more figures: This shows a CDF of the downtime. So on the Y axis here are all the different nodes, and the X axis is downtime. Here we can see, for instance, that 60% of all nodes in this particular network have a downtime that is less than 0.4%, right? So it's difficult to read this, but if we draw a line here, we can say approximately one in three connections has a downtime of more than ten minutes per day. I would say this is more than it should be. Then you can say, okay, these months were special, you had this large flooding that I told you about, this dual fibre cut that I told you about, so you can't read too much into these numbers. What happens if we take the same three months, May to July last year, and we take out those days with those particular events? This is how the graph looks. It improves somewhat, but if we go back and forth, the picture is more or less the same. And that tells us one important thing: the main pain that you feel as a regular user out there behind your 3G modem is the many small events that happen every day. You lose your PPP connection, something goes down and you just unplug it and put it in again and it works. It's those small things, those are the ones that contribute most of the downtime.
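A small sketch of how such a downtime CDF can be read out of the ping logs; the input file and its two-column format (node, success flag) are assumed for illustration:

    import csv
    from collections import defaultdict

    losses = defaultdict(int)
    totals = defaultdict(int)
    with open("all_nodes.csv") as f:  # hypothetical combined log: node,ok
        for node, ok in csv.reader(f):
            totals[node] += 1
            losses[node] += (ok == "0")

    downtime = sorted(100.0 * losses[n] / totals[n] for n in totals)
    threshold = 0.4  # read the CDF at 0.4% downtime, as in the talk
    frac = sum(d <= threshold for d in downtime) / len(downtime)
    print(f"{frac:.0%} of nodes have downtime <= {threshold}%")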
So let's dig a bit deeper into this. Let's classify these nodes into three classes, the good, the bad and the ugly. We'll say the 50% best nodes are good, the next 25% are medium, and the worst ones are bad. Let's have a look at how these are distributed. The first thing I'll have a look at is, over time, how much time does a node spend in each of these three categories? Some days you're good, some days you're bad, some days you're medium. So except for a few nodes, and these are two different networks, some of them are always bad, but most nodes spend some time in the good category, some time in the medium and some in the bad. Who are the good and bad monitors? It varies over time. These are sorted, of course. The best ones are to the right. Let's look at it in a slightly different way. These are the same monitors again, but this time the top network area is still sorted in the same way, the best ones are to the right. And here we have the same sorting, so that one monitor ID here is the same monitor in the other network. And what we see is there is a very weak correlation between these. So the fact that you are good in one network, say Telenor, does not necessarily mean that you are good in Ice. And that is, in one sense, good news, because it means that if you can connect to both of them, there's a good chance you'll get a better service. This is another way of looking at it: Here is the same, but this time we look at this over time. So again, two different networks. And here are the weeks that we measured, and we take all the nodes here and we say, okay, the best 50% in that week are green, and then yellow and red. And then we say, how does this change over time? Each node will keep its color from the first week and we'll see how this develops over time. So we can see for this provider here, Ice, we can see that the ranking that you have is somewhat stable. I mean the red ones are typically still the red ones in the later weeks. But for the other operator here, Telenor, the picture is very different. After a couple of weeks, it's impossible to say who were the good nodes in the first week. So basically sometimes you're good, sometimes you're bad, you're all over the place. And again the same point that I want to make: if you are able to connect to both these networks, to multihome to them, there is large room for improved robustness, because they have such different failure patterns, they fail at different times, they fail at different places.
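A sketch of the good/medium/bad classification and the week-over-week stability check described above, with made-up downtime numbers; the 50/25/25 split follows the talk:

    def classify(week_downtime: dict[str, float]) -> dict[str, str]:
        # Best 50% of nodes (lowest downtime) are "good", next 25% "medium",
        # the rest "bad", mirroring the split used in the talk.
        ranked = sorted(week_downtime, key=week_downtime.get)
        n = len(ranked)
        labels = {}
        for i, node in enumerate(ranked):
            if i < n * 0.50:
                labels[node] = "good"
            elif i < n * 0.75:
                labels[node] = "medium"
            else:
                labels[node] = "bad"
        return labels

    week1 = classify({"n1": 0.1, "n2": 0.3, "n3": 0.9, "n4": 2.5})
    week2 = classify({"n1": 1.2, "n2": 0.2, "n3": 0.4, "n4": 3.0})
    # Ranking stability: how many nodes kept their week-1 class in week 2?
    stable = sum(week1[n] == week2[n] for n in week1) / len(week1)
    print(f"{stable:.0%} of nodes kept their class")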
So that was a glimpse into some of the measurements we did. This measurement infrastructure I presented is not there anymore. The elections are over and our infrastructure went down with them. But we are now building the next version of this, which is going to be a more permanent infrastructure in Norway. So last fall we invited all mobile broadband operators in Norway to join us in this project. Some of them said yes, some of them said no. Two of them will join us in this project and give us subscriptions and some financial support to do this. Our goal is to place 500 of those nodes across Norway. And initially all of them will be stationary, they will be at a fixed location. But over time we hope to place some of them on vehicles, basically.
So we are not going to use laptops this time, we're going for something cheaper. We're testing different types of hardware. Right now it seems we'll be using this, it's called a BeagleBone, and it's running standard Linux. We'll connect it to a USB hub with modems from all the different operators. That's going to be our node.
So what is our roadmap, our future plans? We're not going to ride into the sunset as this slide might indicate. This spring we are basically testing different types of these nodes, making this work. We're also implementing systems for gathering the data and visualizing it. Our goal is to have a semi-realtime map of this up and running, so that you can follow the status of all these different networks in close to realtime, with a lag of some minutes.
And then in the fall we plan to deploy this. And then once it's up and running, we're going to start working on mobile nodes, meaning putting these measurement nodes on things that move. That's our plan for the next year, basically. So my last slide: how do you really measure robustness? What are the good metrics? So far we focused only on connectivity, do you have end-to-end connectivity over this 3G connection? That's simplistic. As an end user there are many other things you are also concerned about. How stable are you? What kind of quality do you get? What is the delay of this connection? Are you able to sustain some application over time, like a VoIP call? Are you able to maintain a service as you move? These are all different aspects of robustness that are interesting. And we're very interested in hearing your opinions on what are the most relevant metrics to measure, and what are the best ways to do this? There are many challenges in measuring these types of networks. There are only so many things you can see from the IP layer. A lot of the mobile network itself is hidden, it's just tunnelled through. So if you have views on this, I'd be glad to hear them here, or send me an email. And that's it. Thank you.
CHAIR: Thank you very much, Amund.
(Applause.)
CHAIR: Does anybody have any questions or comments to make on this? Martin?
AUDIENCE: Martin Levy, Hurricane Electric. This is a great test because it's done as an end-to-end user, it's the real thing. But without going to the telecom companies, is there any way to measure whether it's the local 3G radio signal or the backhaul, or the infrastructure wherever the backhaul goes to? There are multiple components. And you are testing what an end user experiences, which is fantastic. Is there any way to even guess at that?
AMUND KVALBEIN: Right. It is difficult. We believe we can make guesses, as you say. To confirm this, we need to work with the operators, which we hope to do in the next phase. You can look at things like correlated failures between nodes in the same area; that's one way you can see, all these three went down at the same time, they're connected to the same base station, so it's probably the base station. Or 50% of my nodes went down, so it's probably a central fibre or something in the...
AUDIENCE: Something that correlates the two.
AMUND KVALBEIN: And there are things like, I have a delay that builds up like this and then suddenly drops; you can say something about what causes this, a queue somewhere. You can do these kinds of things with limited certainty without getting data from the operators.
AUDIENCE: Two follow-ups: Your data is public, or was made public when you did it?
AMUND KVALBEIN: We are hoping to make it public very soon. For the next round, our goal is definitely to make it public. But there are a couple of operators involved in this, and we will have to agree with them basically.
AUDIENCE: I know Geoff is at the other mike. The graph you showed with the sorted green, yellow and red data. Don't go back to it. It will be quick. Have people used that in other areas? Does a real estate agent use that for property values? That's rhetorical, think about it. The sunset picture, is that really Norway?
AMUND KVALBEIN: No.
AUDIENCE: Okay. Thank you.
AMUND KVALBEIN: Land of the midnight sun, but no.
GEOFF HUSTON: You said, how should you measure robustness? Is it the methodology or what are you trying to measure? And you've got a very interesting set of data and it really is quite fascinating, but you're not the telephone operator. In actual fact, trying to say 'I see these things, therefore it means the operator's done X' is irrelevant. What you're trying to measure is the quality of the delivered product, and irrespective of how they deliver it, it's not your problem to fix mistakes. Interestingly, what users do is drive traffic across the network. If you're saying robustness and service is a quality and attribute of these delivered networks, maybe you should be using those probes as more than ping sources, but as delivering a service. One thing we found with trying to measure v6 is that there are two parts to a connection, there's the client and there's the other end. If you become the other end and get these folk to open a TCP connection with you, then the client doesn't necessarily need to measure the quality; the server can, because it's the same connection. And you can get an awful lot of data if all you're doing is getting these clients to send you TCP sessions once every X minutes, because the server is going to give you RTT, the server is going to give you stability of RTT and jitter. What you're really after is, how well do data applications perform? And then you get into, in actual fact, that's TCP, and how well does TCP perform? It performs better when you have networks that are stable, when the RTT is stable, when the jitter is low, etc. So I would actually suggest there's just this other bit: if you could get these nodes and servers and start instrumenting the server, you've actually got something that is useful to your regulators, your consumers and all the rest of us to understand what you're doing. This is cool.
AMUND KVALBEIN: Very good point. I was perhaps not clear about this, but those servers that we do the pings to are our servers, we have both ends. And we want to do other things than just pinging.
GEOFF HUSTON: That's the point.
AMUND KVALBEIN: We want to do TCP; it's only fantasy that limits us.
GEOFF HUSTON: Do what users do, download. The TCP state gives you so much more information than just a ping.
AMUND KVALBEIN: Very good input, thank you. We will try to do this. And part of the reason why we've only stayed with ping for this first set of measurements is that the subscriptions we had were limited. We could only send limited amounts of data, because they were given to us at a very good price with this caveat.
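For reference, a minimal sketch of the server-side instrumentation Geoff suggests: accept ordinary TCP connections and read the kernel's own RTT estimates via TCP_INFO. This is Linux-only, and the struct offsets assume the classic struct tcp_info layout, which can vary between kernels; treat it as an illustration, not the project's code.

    import socket
    import struct

    def kernel_rtt_ms(conn: socket.socket) -> tuple[float, float]:
        # Ask the kernel for its per-connection stats; 104 bytes covers the
        # classic struct tcp_info (assumption: Linux layout, may vary).
        info = conn.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
        words = struct.unpack("8B24I", info)  # 8 byte-wide fields, then u32s
        # tcpi_rtt and tcpi_rttvar are the 16th and 17th u32s, in microseconds.
        return words[8 + 15] / 1000.0, words[8 + 16] / 1000.0

    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("::", 8042))  # hypothetical measurement port
    srv.listen()
    while True:
        conn, peer = srv.accept()
        conn.recv(4096)  # let the client push some data so RTT gets sampled
        rtt, rttvar = kernel_rtt_ms(conn)
        print(f"{peer[0]} rtt={rtt:.1f} ms rttvar={rttvar:.1f} ms")
        conn.close()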
CHAIR: Thank you very much.
(Applause.)
CHAIR: Next up we've got Vesna Manojlovic to give us an update and presumably advertise the BoF later on as well. As this is going to be linked into the later BoF, it might be a good idea, if you have any discussion, to hold that over to later, so anybody who's not here now who wants to hear more about the RIPE Atlas project can be involved in that.
VESNA MANOJLOVIC: I'm going to talk about the RIPE Atlas presence in Ljubljana. See, we're not afraid of dragons and we're everywhere. This is a RIPE Atlas probe as photographed by one of our hosts and volunteers before he plugged it into his network. My name is Vesna Manojlovic. I work at the RIPE NCC and I build community around our measurements. RIPE Atlas is one of our platforms and RIPE Stat is our interface to all the data we collect as the RIPE NCC. A short introduction to RIPE Atlas for the newcomers among us: Atlas is a platform for active measurements. It consists of small hardware probes, as you saw, which perform continuous measurements from 1,500 points currently around the world, and these probes are hosted by volunteers. However, the data is available not only to the hosts, but to the community in general. So anyone can access the data from RIPE Atlas. You go to our website and you see beautiful maps that we publish, and you can see very detailed information about each probe and download the data in several formats. And I'm going to cover only three new aspects today: joining RIPE NCC Access, so that the users of RIPE Atlas and the LIR Portal can merge into one single sign-on platform; user-defined measurements; and finally a pilot that we're announcing, giving access to measurements to all RIPE NCC members and not only hosts of probes.
So when we were in the beautiful caves, I got quite inspired by the idea that all those single drops of rain that keep on travelling through the earth contribute little by little to the formation of these magnificent things; in the same way, the very small contributions of Atlas probe hosts are creating this wealth of data that we can use for research later on. So hosts are contributing with their enthusiasm, with electricity and with a bit of the bandwidth of, for example, their home connectivity or wherever they plug the probe into their networks. If you want to contribute more you can become a sponsor and you can decide where to put the probes. And the users are also contributing with their interest in our data. They can download the data and analyze it on their own. Mostly we have researchers using this for now, but we are hoping to make the general public also interested in these results.
So who else is benefitting from all this? We are hoping to interest more of the RIPE NCC membership, therefore we're announcing this pilot later on.
Talking about RIPE NCC Access: in order to get the most information from RIPE Atlas, you need to log in. Until now you had to create a separate account to log into Atlas, but now, if you are an LIR and you're logged in to the LIR Portal, you can go straight to Atlas and you already have access to RIPE Atlas, too. And because of this unified access, we know who is a member of the RIPE NCC, so we can create additional member benefit services for specific categories of users. We just started working on this, and, again, I'm pre-announcing the announcement of the pilot; it's coming up.
So the user-defined measurements are going to be discussed in technical detail in the BoF, and right now I just wanted to say what you can do as a user: you can specify the type of measurement that you want to perform, you can specify the target, where you want to send pings or traceroutes, and where you want those measurements to come from. For now you can specify a country or specific probes that you would like to perform those measurements. And the next two features are that you can select an AS number or a prefix from which you would like those probes to be selected. And finally we select the probes that are actually going to do the measurements, and we schedule them in order not to overload the system, and therefore we also introduced the credits system, so all these measurements cost a certain number of credits. Depending on how many credits you have earned by hosting probes or by being a sponsor, you can perform a certain number of measurements. Again, in order not to overload the system, we have daily limits, and so sometimes it can happen that you would like to perform more measurements than are technically allowed right now. If you are very interested in making a huge number of measurements and these limits are too small for you, talk to us. Let us know and we can increase the limits individually for you. And we are already planning to increase this systematically for everyone, as long as we see that our systems can cope with the load.
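As an illustration of what such a user-defined measurement request can look like over HTTP: the endpoint and field names below follow the RIPE Atlas measurement API as currently documented and should be treated as assumptions, with a placeholder API key.

    import json
    import urllib.request

    API = "https://atlas.ripe.net/api/v2/measurements/"  # assumed endpoint
    payload = {
        "definitions": [{
            "type": "ping",                # measurement type, as in the talk
            "af": 4,
            "target": "ping.example.net",  # hypothetical target
            "description": "UDM example",
        }],
        "probes": [{
            # Source selection: by country, ASN, prefix or explicit probe IDs.
            "type": "country", "value": "SI", "requested": 10,
        }],
    }
    req = urllib.request.Request(
        API + "?key=YOUR_API_KEY",  # placeholder credential
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))  # the reply lists the new measurement IDs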
And finally, the pilot for this meeting is that we want all of you to become millionaires, just as we all were in the old Yugoslavia. So if you are an LIR, there are still about 15 places left in this pilot. We want to start really small, so 20 LIRs that don't have probes yet can apply to us later in the BoF, or by email, or on RIPE Labs, or on Twitter, whatever you want, and then we're going to add you to the list of LIRs who have access to UDM even if you don't have a probe. And our goal is to allow all the LIRs, all the RIPE NCC members, to perform measurements, because they are effectively sponsoring RIPE Atlas. So you get a million credits. Now, that sounds like a lot, just like this looked like a lot of money. But actually that is how much every host of a probe earns if their probe is up for one and a half months. So also taking into consideration the current daily limit, that would enable you to max out your spending for two months. So you can perform two months of measurements with this million.
And the next announcement is the BoF. People have heard about beer already, and I have good news and bad news. So this is Atlas beer, it's really, really strong, you can buy it in some supermarkets in Amsterdam, and we have actually shipped a lot of it to Ljubljana in order to make a beer BoF out of this. This is news. Some people knew about it. Good news: there will be beer. Bad news: it won't be Atlas beer. On the other hand, we have two cans of slightly less strong Atlas beer, it's only 8.5%, the other was 12%. We are trying to encourage people who did not plug in their probes to return them, and if you are here today and you want to return your probe because you changed your plans and you don't want to plug it in anymore and you got it a year ago, we'll give you one beer in return. I only have two minutes and I have so much more to say. These are the organisations that have been sponsoring and are sponsoring RIPE Atlas until now. If you want to see your logo up here, talk to me later on. If you have any questions, save them for the BoF because now we don't have time. You can also see the picture here of one of the developers of RIPE Atlas. He took this seriously. He's holding the whole globe on his shoulders, and me too. These are the email addresses for your questions later on.
Now, RIPE Stat. Sorry, I'm still not done. I'm just starting with RIPE Stat. We'll run over, that's fine.
Statistics, status, stat. Apparently, in American English that means fast. I'm not so Americanized. What is RIPE Stat? We show you the measurements, we show you the data as it is now, or the historical data, from all the collections that we make: all the RIPE NCC address registry data, all the other RIRs, RIS, DNS blacklists, and we are planning to add more data sets in the future. The new features are, because of the single sign-on, RIPE NCC Access, now we know who you are also when you go to RIPE Stat. So you can personalize your page. You can choose which widgets you prefer, which ones should be on the top of the page and so on. A widget is one of the views. You can embed it or put it on a T-shirt, and these are the T-shirts you can win if you fill in the survey. Fill it in and at the end of the meeting, tomorrow, we'll make a draw for some of these T-shirts.
The RIPE Database now has this little button. When you query for address ranges or AS numbers you can go to 'more info from RIPE Stat' and it will take you to RIPE Stat. And we also have graphical interactive widgets for browsing the RIPE Database objects. We already had a demo on Monday. I'm not going to stop, I'm not finished yet. It is all self-explanatory and very playful, so try it out for yourself. This you saw already: who can see the historical information, who can see the personal details through RIPE Stat? Privacy controls have been discussed. Questions? And more information. And that's the end of my talk, except for this last slide. All the photos are either from CERN or from the people listed here.
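Everything the widgets show is also reachable as JSON through the RIPE Stat Data API; a small sketch, where the specific call and the returned field name are taken from the public documentation but should be checked there:

    import json
    import urllib.request

    def ripestat(call: str, resource: str) -> dict:
        # Data API pattern: https://stat.ripe.net/data/<call>/data.json?resource=...
        url = f"https://stat.ripe.net/data/{call}/data.json?resource={resource}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["data"]

    prefixes = ripestat("announced-prefixes", "AS3333")["prefixes"]
    print(len(prefixes), "prefixes announced")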
CHAIR: Any questions about RIPE Stat? No. Thank you.
(Applause.)
CHAIR: Next up, George Michaelson who's telling us about how they've measured the Internet. I don't know if he's going to tell us how much it weighs.
GEORGE MICHAELSON: When I grow up, I want to be a scientist. So Geoff gave you the content-full talk about the things we found out about the world, and I'm going to give you the content-free talk about how we got there. It's a really good question, when people say 'I'm measuring the Internet today.' It puts me on the spot. What do you really mean when you say you're measuring? I had to do a bit of thinking and get the ideas going. I thought I'd get a few more eyeballs, get some mind share and think about it. I think it can mean a lot of different things: the old host count, Bill Manning pinging every host in a week, and pinging every host in a month, and realizing he was never going to be able to ping every host. So it depends. There's the route connectivity, but you can cover an enormous number of addresses with announcements. Routing doesn't necessarily tell you how big the network is. It doesn't have to be the only dimension. Then there's the application side. If you're in voice, you want to know how much of that traffic is voice. You might want to know how much of it is voice traffic that earns money. Questions are very contextually defined. It's multi-dimensional. Think about the context. If you're into routing, look at routing data. If you're doing voice, look at voice data. And that takes you to a sense that the people who have the data tend to target the interest groups. Websites are attracting their own users, that's fundamental, but some of these things are not really shareable. It's valuable data, it's not something people want to put out there.
So we looked at this in APNIC Research, the Labs, and thought, is there someone who could do an end user measurement, a view of the net about what takes place at the end user browser? What can we put into the system here? It's this question: who's doing the user measurement? Looks painful. So what are the attributes needed to be a way to measure IPv6 capability at the end user? It has to involve something that has that quality of being everywhere. We're not interested in just a narrow measure, we want the whole footprint. We know it's going to deal with customer premises equipment that can't do native v6. It has to be flexible, hop over the problem, get out of the hole in the middle. We also had a sense from initial measurements that measuring your own website is the first place we go: I'm the same as everybody else. Actually, our community isn't. We're different. We're technically literate and in the middle of the system. We don't have a lot of the problems. We didn't just want to measure ourselves, we wanted to get out there and measure the rest of the world. And there's a small community of people that have been doing measurement using JavaScript, triggering a few extra accesses that never get seen. If you do this the right way, using wildcard DNS, you can get a unique ID out of the event of them saying 'I want to get this pixel' that never gets seen. You can drive the whole thing in the browser. It's the same as the website tracking that we've been doing for quite a while, without the cookies, and looking at a different side of it.
Okay, the web. We're all professionals and know how protocols work. Let's go back to a time when things were simpler. So any protocol that anyone would have used back when we had hair involved a server listening, the client connects and you get this textual '200 OK', at which point you type a command, it might have been an SMTP HELO, but something gets sent in text, etc., etc. Then you're done. Let's exclude FTP. That's pretty much what all of them were. Let's fast-forward in time, and the web has become a sophisticated, complex place, a rich, complex environment. Things are looking a bit different now. The first thing is there's a sense there is a middle part of the story. You go to a URL in your browser, maybe you've already been there, so a whole bunch of stuff gets done, and then the middle part says there are 50 assets, etc., I'll get them in parallel, why not. You don't do one fetch, you do five. And then some of the stuff comes back and it's programs and Flash and bits and pieces, so there's logic going on, deciding what to show, what not to; you might never end your dialogue on that web page. You might walk away from the whole thing. The world has got a lot more complicated. This parallelism is kind of the middle ground that the measurements are operating in. It gives you the space in between what users see and what's been fetched on the network where you can put the operations: let's do a little bit of measurement. The nice thing is that behind a lot of this asynchrony and complexity there's a programming language, and it has all the capabilities you need: sprites, timers, string and number processing.
Okay, what does a JavaScript measure look like? Let's get under the covers and see what it does. The idea is that you embed in your website some JavaScript, it might be inline in a script element or a fetch of an asset, that tells the JavaScript engine, go and run. So the engine spins a generator, comes up with an ID for the experiment and does a series of fetches. It might be told to fetch a thing under a domain name, and that name has a dual-stack binding. At this point we don't know if you're going to go on four or six. That's the experiment. Then it might say, go and get a six-only label. It's a name that is bound, under a different intermediate domain, to six only; if you're a four-only browser you're not going to fetch it. Or it might have a literal. There's no DNS in the loop. There's something that suppresses behavior if the DNS returns a AAAA, so there's the aspect of 'let's take the DNS out'. Then you can use the asynchronous part to compute the time, and at the end we do this trick with one more fetch, but nobody cares about the image; what happens is we're logging what you asked for, and the name of what you fetch tells us about the data of all the other fetches. Okay, what do we get from this? Well, we've unleashed the monster: for the DNS in question, there's only one nameserver, and it's the one we're running. We know the resolver that's serving that unique ID. It's unique. No one else knows it, because it's a wildcard binding that was made just for you. We have the resolver path and timing info. Then we get a TCP dump and we get to see all the packets you sent to the web, all the DNS queries, maybe some tunnel endpoints; we do things way down low and look at packet behavior, inter-packet arrival times; there's a lot of information in there. And we've got the web logs of all your fetches, and this gives us a huge amount of data that we can cross-correlate, and the nice thing here is, because we spin a unique number, every one of those items can be correlated. We can take the ID of your v6 with its name, and the v4, and we know it's the same experiment. We can go into the packet traces on the addresses and look at the packet behaviors, and we know it's you.
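The test itself is JavaScript in the browser, but its logic can be sketched in a few lines of Python: fetch small images under per-experiment unique labels, time each fetch, and report the timings in the name of one final fetch. All hostnames here are hypothetical.

    import time
    import urllib.request
    import uuid

    uid = uuid.uuid4().hex  # the per-experiment unique ID
    tests = {
        "ds": f"http://{uid}.ds.example.net/1x1.png",  # dual-stack label
        "v6": f"http://{uid}.v6.example.net/1x1.png",  # AAAA-only label
        "v6lit": "http://[2001:db8::1]/1x1.png",       # v6 literal, no DNS
    }
    results = {}
    for name, url in tests.items():
        start = time.monotonic()
        try:
            urllib.request.urlopen(url, timeout=10).read()
            results[name] = int((time.monotonic() - start) * 1000)  # ms
        except OSError:
            results[name] = -1  # fetch failed: capability absent
    # One final fetch whose name and query string carry the results back
    # into the server-side web logs, keyed by the same unique ID.
    report = ",".join(f"{k}={v}" for k, v in results.items())
    urllib.request.urlopen(f"http://{uid}.results.example.net/1x1.png?{report}")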
Okay, so there are some things we noticed that are kind of funny. We notice that you come back with a result and the tests come in later. That's kind of strange. And we've noticed that sometimes you never tell us a result, so as far as the client is concerned, this thing didn't run to completion, but we know every one of the tests got run. That's kind of strange. And then there are weird lags in there, like minutes later, bong, a packet pops up. Where did this happen? There's kind of a heuristic thing with where the results lie, the client's view; we have to step beyond that and say we can work around differences in the way the data is coming and get a broader measure by doing clever work on what's out there. We're thinking about the implications of synthesizing the results; this isn't all a one-way street, but it's a richer model than some of the other approaches people are taking. Doing this with JavaScript we've been getting 50 to 100,000 hits a day. Pretty good, recruiting people to embed the JavaScript on their website. But it wasn't global or covering enough of the world. It was skewed by the particular people that go to a particular website. And we wanted to get it global, we wanted to get to the bigger space, but we had put all this effort into JavaScript and didn't want to walk away from that. Can we leverage this? Okay, if people won't do it for free, how about a way to buy it off them? What out there can get you things for money? What does history tell us about ways of attracting eyeballs to what you're going to do? We came to this place: ad networks. For an investment of 20 dollars a day we could buy 50,000 views of our measurements. This is absolutely fundamental to how the web works these days. Websites that are basing their behaviors on earning revenue are motivated to display as many ads as they can to as many people, and not only motivated, but systems are built up to make this efficient. They want your money, our money. They were keen to take our ad and show it to people. And there's a nice simple model about how to pay for it, built on a trust model: if I show this a thousand times and on average I can get a click, that's one click per thousand, and if I set a price and you trust me, we're all good here. If you pay me more money I'll go to places that get more clicks. Good scheme. Except we didn't want the clicks. So the basic mechanism of doing these adverts is Flash. Flash is the ubiquitous language for putting a monkey in somebody's face so you move the cursor and hit the monkey. Flash has become the advertising vehicle. It is basically a way to get an image in front of someone. The easy way to do that is use Flash as an engine. So the economy for advertising is about writing Flash, and that was good for us, because ActionScript, the language of Flash, is very, very similar to JavaScript, so it was the clear mechanism for us to take our investment based on JavaScript and get it working quickly. Okay, everything has problems. Life has problems. And the problem we faced is that by default, constructing a Flash advert drags in all of the linked library; unlike JavaScript, where you only get the routines you call, in Flash you get the whole library that you link against, and the advertising networks don't want you to include random number generators, basically because they want to own the mechanism for identifying click trails against your ads. If you have randomness, you can use this to do a direct cookie exchange. They want to exclude the random.
But they have to give you a lot of data, which is the unique URL which you are going to be sent to when you click on an ad. Although we didn't want the clicks, they had to give us that URL, and that, without using the random call, gave us a mechanism to get around the lack of a random function. We can generate a provably unique ID for each request to embed in the fetches of the one-by-ones. There's another problem: what we're doing looks morally like cross-site scripting. We had to include the standard permission controls saying this is an acceptable cross-site fetch, and that means we get a fetch in order to get a fetch. We get twice as much traffic as we would need. In a way that's useful. The real killer problem is that Flash won't run on one of the significant platforms we'd want to measure. If I can contextualise this, since iPhones at the moment don't do v6, this is a small constraint, but it is a flaw in the methodology. So placement questions arise: are we really comfortable that we're getting fresh, unique eyes on our ad? We thought we'd do a basic count. Who was coming to our advert? This plot is showing you the two different kinds; you can see along the bottom a flat, consistent serve of unique entities and a fairly flat sloping line. At 20 dollars a day we get a consistent feed of 50,000, and that's fresh IP addresses each time. And this is true for both mechanisms. We found that very interesting. If you think about it, honesty in the advertising network demands that fresh eyeballs are placed in front of you. It's fundamental that you don't repeat the ads to the people who have already seen them. This is evidence that they're an honest broker. How do they actually know? They just ask themselves. We also collected which AS; that is a different story. First of all, you'll notice that the lower level, which is giving you the daily counts, has that kind of pattern of behavior. And you'll also notice this is true in six as well as four. So we have this behavior; I'm only showing the combined data for six. But the daily totals have some behavior. We're calling this the weekend droop thing: people go offline. What's happening here is JavaScript is being put on websites that people aren't visiting on the weekend and we're counting real traffic. Whereas Flash has a slightly flatter profile, the red line, because they're sampling from a far higher population. We're being served fresh eyes all the time. We do some work with ASes: 25,000 AS numbers have been seen in the v4 and 1,500 in the v6. We feel this is actually a very significant footprint against global coverage, half to three quarters of the global pool of origin ASes. This mechanism is a good representative of global coverage; we think it has legitimacy for the global footprint, which means we can make observations that are applicable to the global network. We're also doing some RTT stuff. Dealing with the data, we have a data reduction cycle. We do a combination against all of the web logging from our multiple heads and then a reduction phase to combine the data, so the combination of four and six pairs is a one-line entry, 800,000 experiments a day, and we add in the economy, and the prefix and origin AS from the logs.
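A sketch of that workaround: with the random call off limits, a practically unique ID can be derived by hashing inputs the ad network does supply, such as the per-impression click URL, together with a high-resolution timestamp. The URL here is a placeholder.

    import hashlib
    import time

    def impression_id(click_url: str) -> str:
        # The click URL is unique per impression; hashing it with a
        # nanosecond timestamp yields a practically unique, random-free ID.
        seed = f"{click_url}|{time.time_ns()}"
        return hashlib.sha1(seed.encode()).hexdigest()[:16]

    uid = impression_id("https://ads.example.net/click?id=abc123")  # placeholder
    print(uid)  # embedded in the wildcard DNS labels of the 1x1 fetches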
Okay, I'll skip quickly here. I promised I wouldn't talk results and would stick to methodology. No, that shouldn't be there. If you want to know what we're finding, go and look on the web, where we present breakdowns of all the classes of data. We put up the post-processed data that we're using for our presentations. It's all available there if you need it. I would also observe that we know this doesn't agree entirely with the measures that Google are presenting from their own collection, and we think that's interesting; it might be that the different techniques are exposing different aspects of the v6 behavior. So we present some graphical views, and as a tribute to our hosts I want to show that Slovenia is tracking three times higher than the world average on v6. It's the kind of commitment you get from a cohesive economy that understands how to put capital investment in the right place. This is free, the people that are doing 6rd, and although this looks like a flat-line story, it's interesting. We very quickly got to a point where we're sitting at 16% of preference, not just availability of six, preference of six. That's amazing, that's way up there; that is world best, basically. It's lumpy, we know there are variations in here; we're doing some AS work and finding out the transit relationships, and Martin Levy should be smiling at this point. v6, it's slow, the story is not as good as it should be. We think it has legs and we think it's going to take us to a good place for assessing this behavior long-term, and we like this kind of development, the outcomes; we think it was a good rate of return. A lot of collaboration went on to do this. We were very dependent on our three core partners, who gave us the mind share and technology basis, but also a basis for conversation; a big thanks to Google, ISC and RIPE. And if you want to see more, that's where it is.
CHAIR: I'd like to thank you for sticking to the charter of the working group. Any questions?
RANDY BUSH: Privacy. Speak.
GEORGE MICHAELSON: We respect privacy and do not have any knowledge of where they went.
RANDY BUSH: But you're collecting. You're no better than Google, in fact.
GEORGE MICHAELSON: The advert says thank you for helping us measure v6.
RANDY BUSH: You're collecting more than that.
GEORGE MICHAELSON: We don't know the identity,
RANDY BUSH: My home IP address doesn't change.
GEORGE MICHAELSON: I think it's a bar discussion.
RANDY BUSH: I don't drink.
GEORGE MICHAELSON: It's a fair question. It's not a rhetorical question; it has to be considered. The class of information is worth the risks and we make a strong statement.
RANDY BUSH: I'm willing to discuss the privacy, but 'the end justifies the means' isn't a good argument.
GEORGE MICHAELSON: I do feel we're within bounds.
CHAIR: Okay. Thank you very much, George.
Up next we have an update from RIPE 63 on M-Lab. We have two presenters for this, Meredith and Tiziana.
SPEAKER: Is this working? Perfect. Okay. I am Meredith Whittaker and I'm here with Tiziana Refice. We work on Measurement Lab and we're here to tell you about Measurement Lab and broadband measurement. I'm going to start, first off, with a little bit of history, just to give you some background on what we're doing and how this happened. It's been kind of a meme in the network research community for some time that a good source of solid, robust data on broadband network performance was hard to come by: difficult to collect, costly, being collected and thrown away, and this duplicative science was happening where you weren't having an ecosystem of good data building on good data. This meme, as networks became more important and access to these resources more crucial, began to be expressed by people who weren't researchers. Policy makers were looking for good data on which to found robust policy. Consumers buying network access were not quite sure what was happening when something was going wrong. And a chorus of questions started to arise in the community: how do we know? And it was in 2008 that Vint Cerf, who is very plugged in here, decided we needed to address this and gathered a number of network researchers and stakeholders for a summit in Mountain View, and just sat down and for two days said, let's figure out what to do, how do we get this data. There were arguments and there were arguments and coffee breaks and there were some jokes, and at the end of two days, people were kind of tired and had agreed on a solution, and that was Measurement Lab. This was something that was created out of the research community in the face of an issue that everyone was feeling. So how do we do this? How are we providing data? I'm going to leave you in suspense for one moment while I emphasize here that while we're both from Google and represent Google, this is not a Google project. This was supported by Google, which had funding and support to catalyse this effort, but this is something built by researchers, supported by the community, and it responds to the needs of the community. We have here circled the newest partners, which include Ragnar from Altibox, thank you for that server, and a number of others; you have regulators, research institutions, private companies, all of them understanding the importance of good, open data.
So before I turn it over for her to go into the technical details, I'll give you the high-level picture of what it is we do and how we provide this open data to different constituencies. At the physical foundation is a broadly deployed server platform across continents, specifically designed to support good broadband measurement. On these servers, researchers deploy open source measurement tools, and these are required to be open source. Wherever these servers exist, these tools are deployed identically, and consumers can access them. Again, that's another requirement: these tools do not test node to node. These are required to be consumer facing; we want to make sure we are getting this data to consumers when they need it. These are client-initiated tests. I will go, by whatever means I wish, to access one of the 11 tools, I will access it, run a test, and find out some information about my network in realtime that informs me at a level I can understand. Each time this happens, and this happens about 200,000 times a day, data are collected on the server and put into the public domain. Again, that's another condition. If you run a measurement tool on this platform you need to agree to release all the data. And we're not talking about aggregate statistics or anonymized data, we're talking about real raw data about network performance across the globe. And now I'm going to turn it over to Tiziana and she can go into the technical details of how we implement this.
SPEAKER: Thank you. As she mentioned, the first component is the server platform. It's PlanetLab-like; it's specifically designed to run accurate broadband tests. One of the points is that we deploy this platform consistently around the globe. This allows us to compare data collected in different locations. We have a number of countries which already have their own efforts to measure broadband in their own countries. The data we provide can complement that; you can use it to compare your broadband connections and performance with other countries. Another important, critical characteristic of this network is that we make sure all the servers are well provisioned, and this is important so that the server infrastructure is not the bottleneck of the measurements. And finally, back to a point that Geoff made before, we made sure to instrument the network, to instrument the servers, to collect rich information about every single test. So when you run a test, you don't just get high-level information, but full details about all the TCP connections running on the servers, in particular using a specific kernel instrumentation called Web100, and I'll show you a sample of the data we collect by using this instrumentation. This is a picture that shows the current status of the network. Back in 2009, when the project started, we started deploying in North America, then we expanded in Europe, Australia and Asia. At this point, since the last RIPE meeting, our platform has grown more than 25%, and I have to say thanks to a lot of the people that we met during RIPE 63. Our goal is to have global coverage, and in particular in 2012 we want to expand to Canada, Asia and a number of other countries, hopefully with your help. So please come talk to us if you're willing to host one of the nodes. So as I mentioned before, we have this server platform and we have a number of tools running against this platform, in particular 11 tools. Users can access these tools in different ways. Some tools run in a browser, some on a command line, some are mobile apps, and finally there is a set of tools that run in home routers, so users can take these routers and plug them into their home network and the router performs periodic tests. One of the strict requirements of running a test against the platform is that we only run client-server applications that only run active measurement tests. And this is to make sure that all the data we collect, we can publish without any privacy concern. Another requirement is that all the tools are open source. This allows, for example, third parties to take the client side and create their own customised version. We have an example: BitTorrent took the client side of one of our tools and integrated that tool in their own client. This allows BitTorrent users to automatically tune their client by running one of the tools, and this allows us to reach out to a population of users that is not just the technical user who would go to a website and run a speed test. As Meredith mentioned before, we have an average of 200,000 tests a day. M-Lab tools have two faces. There's the consumer face: you're a consumer, you run a test, you get some numbers that tell you upload speed, download speed, high-level statistics. That's what you can see on the left-hand side of the slide; I put three screenshots, three out of the 11 tools we're running on M-Lab. This is high level. That's what an average user might understand.
On the right-hand side, I'm showing the much greater detail that we actually collect whenever tests are run; in particular, a number of tests, whenever they run, collect packet traces, Web100 logs and much more data. The picture on the right-hand side of the slide shows you an aggregate of all the variables that a Web100 log contains; you might recognise TCP-related terms. In fact, a Web100 TCP log fully describes the TCP state at every single moment of the connection. And we make all this data publicly available. So this really shows the full picture of what M-Lab does, how the data flows in the M-Lab infrastructure. We have the server platform, and a number of tools running on the platform. Whenever a test is run we collect data on the servers, every hour it's collected into the repository, and we make all the data public. At this point we have more than 500 terabytes of data, which is by far the largest broadband data set out there. Everybody can use it and access it. However, we do understand that it's really quite a large amount of data. Not everybody wants to download that much data. Not everybody can compute numbers with such a large data repository. That's why we provide cloud-based tools you can use to run analysis. One such tool allows you to run queries against a portion of the M-Lab data set.
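The cloud-based query tool referred to here is Google BigQuery, where part of the M-Lab data set is published. A hedged sketch using the google-cloud-bigquery client; the table name and date are illustrative and should be taken from the M-Lab documentation:

    from google.cloud import bigquery  # pip install google-cloud-bigquery

    client = bigquery.Client()  # uses your own Google Cloud credentials
    query = """
        SELECT COUNT(*) AS tests
        FROM `measurement-lab.ndt.unified_downloads`  -- assumed table name
        WHERE date = '2012-04-16'
    """
    for row in client.query(query).result():
        print(row.tests)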
Providing such a large amount of data has, as we've seen in the last two years, produced incredible results from the research community. For example, you can see on the left-hand side of the slide a citation from a paper by Dave Clark and Steve Bauer at MIT, who looked at a portion of the data collected by one specific tool, and they found a large portion of the tests actually limited by the receiver window. This is a piece of information that we want to know: whenever a user has a problem, is it with the network or with the configuration of the browser or the system? Then on the right-hand side of the slide, I have an example of a research study from another university. They took the data from another M-Lab test and analyzed the use of it to look at things like traffic shaping and blocking, and on the bottom of the slide there's another example, from data collected by another tool, which analyzes traffic shaping techniques. Now, I'm going to hand this over to Meredith, who's going to show you more results.
SPEAKER: I'm going to bring it up a few notches and give you another example of the usefulness of these tools: Greece's telecom regulator, who built their own open source dashboard for their constituents, and this uses M-Lab's NDT and other tools in a similar way to what BitTorrent is doing, repurposing them to serve their needs. It allows Greek users to log in and run those tests, and the data is collected and shown on a mapping function. This map gives people the opportunity to compare: how's my performance against other people's performance? How are things going in my area? Would I want to put an office here? Answer questions like that. It allows the Greek regulator, if they see in this data that it looks like there's a real problem in this area, a number of users are getting results like that, to look at the M-Lab data set and say, is this happening everywhere or is it specific to our networks? Say they see a problem that seems specific to their networks; they can talk to researchers: I would love it if we could give you a grant to analyze this specific part of the data and understand not only what is going on but why, and what we can do to fix it. This is what we're talking about with the need for transparent data, and the need for regulation and consumer education based on transparent data, because all of these people are stakeholders in networks at this point. I will give you another example now of the use of open data to disseminate canonical statistics, and this is the FCC contracting with a private company, SamKnows, and they're running all their tests on M-Lab and all the data is released in the same way. And you have, in 2011, the FCC issuing the Measuring Broadband America report, which is a big report saying these are official statistics about the state of broadband, all based on open data for the first time ever. And this allowed researchers to say, these are interesting, let me see what I can see, let me check these results. If I find this interesting, let me dig deeper into the data that might not have been published. Let me communicate with operators and consumers, fleshing out the significance of this. So, again, this is communicating at many different levels, at a very, very granular level, addressing researchers who can make the recommendations to make the changes, but also addressing consumers, who may be the people who first feel an issue they may be experiencing, all done through the M-Lab ecosystem. Now, we come to the part where we address the audience personally. As I emphasized in the slide with all the logos, this is a community-run, community-based effort. It is here to serve a need that was expressed by the community, by researchers, by people who manage and run networks, and by people like my aunt, who loves Facebook games and couldn't figure out why they wouldn't load. All of these people need to be able to access good data that is tuned to their level. We want your involvement. If you have questions, critiques, if you can host a server, if you can donate a server: the more, the merrier. This needs to be informed by the involvement of the people who have the knowledge and skill, from across demographics, who can put some energy into this. If you have an idea for a new measurement tool, if you have some analysis you would like to do on this data and you want our help parsing that, you can access it for free, but we would be happy to help if you let us know. More involvement is key and we would love to take your questions and comments after the presentation. Here is our final slide.
And here you have the website address, which will take you to a lot more information, and a charming marketing video that does in two and a half minutes what Tiziana and I took 17 minutes and four seconds to do. This might be helpful if you want to communicate it to people in your organisations or to people who haven't heard about it. And then you can see screenshots from our Facebook profiles and contact information. Please reach out and let us know if you have questions; we would love to take them. Thank you.
(Applause.)
CHAIR: They invited questions or comments. Anything you want to say or do you want to communicate directly? Okay. Thank you.
That brings us to our final speaker, who is Daniel, who's giving a follow-up to the RIPE NCC measurement strategy that was announced yesterday in the RIPE NCC Services Working Group. This is going to involve a little bit of a discussion about the future of TTM and DNSMON.
DANIEL KARRENBERG: Hello, I'm Daniel Karrenberg. This is not going to be a report; this is going to be a request for guidance. As you know, the RIPE NCC in all its activities takes guidance from RIPE, and one of the tasks of this working group is to guide our measurement activities. So I'm going to ask you questions. But Ian has already said it, we're a little bit short on time, so I encourage all of you, if you have comments, or if you want to think about it, use the mailing list. There's also going to be, if I'm correct, a half-hour break scheduled before the BoF, so we can maybe also discuss afterwards, or the people who would like to stay here can have a little bit more open discussion. So this is not telling you about all the great things that we're doing; it's asking for feedback and guidance.
The first couple of slides are a quick rerun of what I told the Services Working Group: the why, and what our strategy is. We got feedback from the membership survey that our measurement activities and statistics are not easy to find, and we decided to go from quite a number of fragmented things to a small number of focused things for the measurement activities: Labs for stories and general statistics, RIPEstat for specific ISP statistics, and RIPE Atlas for the measurements. We also want to go towards some membership-only content in all these things, in order to make it more defensible to be a member of the RIPE NCC and more useful to be a member of the RIPE NCC. So here's a very high-level block diagram of how things are organized and implemented right now in the RIPE NCC. On the right-hand side we have all these diverse services; on the left-hand side we have collection infrastructure. There are the RIS route collectors that collect BGP routing information; that's the other big one. We now have the RIPE Atlas probes and the TTM boxes from the TTM project. And then we have a number of database backends, collection infrastructures, and then all these services on top of them. Where we want to be is these three things on the right-hand side, Labs, Stat and Atlas, where the shapes sort of indicate that Labs is more like a blog and general statistics thing, while Stat and Atlas are user-driven things where you look at specific things. In Labs you get the stories, and in Stat and Atlas you get the data itself. We build this internally, and have been doing so for quite a while actually, on an integrated data store which we call the INRDB, which holds everything about Internet number resources as well as our measurement results. And then we pipe stuff in there from the collection infrastructure that I had on the previous slide, so there are the RIPE RIS route collectors and the Atlas probes, and we're currently thinking about another class, RIPE Atlas anchors, which are bigger Atlas probes and targets. Geoff, in the question and answer for the first talk, has actually already done the exposé for some of the goals here, and the goals are to be a target for measurements. If you're a target for measurements, specifically TCP measurements, you can gather a lot of data at the target and the source, but in this case at the target. So that's one function of Atlas anchors. Another function is that it's a much more powerful RIPE Atlas probe. It's not like this tiny embedded thing but a real machine, and it can do many more measurements. And we would want to deploy them in a way that the uptime of the individual anchor systems is more constant and predictable than that of these thousands of other things. We also have all sorts of data about Internet number resources, not least the registry that we maintain about the address space users, and we also have external sources. So, for instance, we're using the publicly available MaxMind data for geolocation and similar things. The idea is to put this all in one store and base the products on it, so that the left-hand side gets decoupled from the right-hand side: if we have all the data, we can build products on it. We're quite a ways towards getting there already. So, just for this part, are there questions or discussion about the strategy per se? Okay, if there are none, use the mailing list.
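As an illustration of the Stat side of that architecture, here is a small sketch against the public RIPEstat Data API. The "announced-prefixes" endpoint exists, but treat the exact response fields used here as an assumption to verify against the documentation:

    # A sketch against the public RIPEstat Data API; the response
    # fields used here are an assumption to check against the docs.
    import json
    import urllib.request

    def announced_prefixes(asn):
        url = ("https://stat.ripe.net/data/announced-prefixes/data.json"
               "?resource=AS%d" % asn)
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        return [p["prefix"] for p in data["data"]["prefixes"]]

    # Example: prefixes currently announced by AS3333 (RIPE NCC).
    print(announced_prefixes(3333))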
So there are some notable omissions here; one is TTM in the new world. We want to close down TTM because it was built for a different era; we're in a totally different environment, capacity planning is not such a big issue as it was when TTM came into being, and, in fact, the whole TTM infrastructure, including the back end and front end, is aging. I hate to say it, but we didn't have a life-cycle replacement programme for this stuff, and it is becoming unmanageable. We found out that what the TTM hosts mainly value is the time servers: the TTM box has a good clock, and they're quite good as NTP servers. There is also some general interest in participating in measurements, the same thing that Tiziana and Meredith were appealing to; there are some in our community who are interested in measurements per se and are generous enough to help with that. But what we quite clearly got from the TTM users over the past month and before was that they actually want NOC-compatible measurements, and what I mean by that is measurements that NOC staff understand and that are readily acceptable in a presentation NOC staff are used to, for instance alarms. So we evaluated the options: continuing TTM would mean a reimplementation, and we wouldn't get some of the features the users wanted, so the alternative is that we shut down TTM and satisfy the user requirements with the more lightweight Atlas anchors. So we would hope the current TTM hosts see the Atlas anchors as a replacement. This is how we're doing it from the viewpoint of the current TTM hosts: we want to enable them to continue providing the time service, so basically give them access to the boxes, help them with the configuration if that's necessary, and just keep the boxes up as time servers if that's the main interest. And if they're still interested in some of the Atlas measurements, we know that some of the hosts have multiple boxes and use them for local measurements; we want to give them support to continue doing local measurements without the central infrastructure.
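For hosts who keep a box purely as a time server, a quick sanity check that it still answers NTP can be sketched like this, using the third-party Python ntplib package; the hostname is a placeholder:

    # Quick sanity check that a retired TTM box still serves NTP.
    # Requires the third-party ntplib package ("pip install ntplib");
    # the hostname is a placeholder.
    import ntplib

    client = ntplib.NTPClient()
    response = client.request("ttm-box.example.net", version=3)
    print("offset %.3f s, stratum %d" % (response.offset, response.stratum))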
And we'll provide best-effort help and hardware supply for the specific hardware parts, like the clock cards and the clocks themselves, as long as we have stock. Obviously it's better to help people keep these things up, rather than have this equipment collecting dust in Amsterdam. And we're going to develop the Atlas anchors; so far the reactions from the TTM hosts themselves are positive.
So, any questions or discussion on the TTM shutdown, the development, and so on?
AUDIENCE: Dave Wilson. Just one thing. I appreciate, as a TTM host, that TTM is not long for this world, and I get that; I couldn't keep going the way I was. My one regret, and I wish I had been thinking about this since Vienna, but I didn't realise it was so bad, is this: if there's any way to salvage some of the great work that has been done, be that hardware that's already in place and hosted, or software, or historical measurements, there may be some group out there that would be interested in taking some subset of that on. I'll keep thinking, but I don't really have contacts in that space. I'd be grateful if you're interested as well.
DANIEL KARRENBERG: I'll take it from last to first. The historical data that's there, we're interested in preserving that and making it available. As you know, we have a research data store at the NCC, and our intention is to put it there, as far as we still have it, so that it isn't lost. As for the software and hardware, the current thinking is to make the software available as is, so if people want to keep doing this stuff and they're happy with the presentation, then there will be a way. If people want to start with this immediately, it will probably be under some sort of lightweight non-disclosure thing until we really shut TTM down. But after that, my intention, never mind the lawyers, is to make this available as easily as possible under some sort of open-source licensing. And as far as the hardware is concerned, we are not going to collect the hardware back. But my analysis, when I actually looked at what's out there, is that the remaining life in most of the stuff is not great. And actually one of the things that really pushed towards cancelling it was that there is a lot of hardware out there that you cannot get replacements or spare parts for anymore, and for which you cannot even get some upgrades that you really need anymore.
Dave Wilson: That's all good news and I appreciate it. It would be great if there were a single home, if there was some group out there that is doing something similar that might want to take it on.
DANIEL KARRENBERG: Get on to me or get on to Dave or get on to the mailing list.
CHAIR: We're already over time. But you have a quick question.
GERT DÖRING: Gert Döring. Having two test boxes, I'm sort of fine with having the boxes run on until they die as NTP servers, because we use the NTP and appreciate it. If they die, we would just get something new. The thing that is giving me more headaches is that we're basing the SLA measurements that we guarantee our customers on the TTM network, because that's a neutral entity, and that sort of makes our lives much easier in documenting that our network works and it's your machine that doesn't. So I understand that this should be possible with the anchor boxes, and what I hope for is that we can get one or two anchor boxes before shutting down the TTM boxes, well before, so I can adapt my scripts to actually change to the new data sources or whatever.
DANIEL KARRENBERG: That is the intention. Or keep running the Atlas things. So we understand that there are a few TTM hosts that have your problem; you're not the only one. And we pledge that we want to help you as well as we can to have continuous SLA monitoring if the anchor boxes are suitable, or to take over the thing locally if you really want to or if the anchor boxes are not suitable. If you are interested, and I hear you are, in helping develop the anchor boxes, then, yes, by all means.
Okay, I have two or three more slides and now I've run out of time, not two minutes ago. Can I do my last two or three slides? So, of course, this is a domino effect, because DNSMON is based on the TTM infrastructure for collecting the data. So once that goes away, we will not have DNSMON in the way that it's currently collected, and DNSMON is still very much used and appreciated; we actually have some customers there, and it's also used by lots of people in the community. The good news is that RIPE Atlas can actually do the same measurements today: all those almost 1,500 probes that are out there right now can do all these measurements. And I would say a thousand Atlas probes are better than ten TTM boxes: you get better coverage and you can adapt the coverage to your needs. I've already been told by one of the TTM customers: no, that's not good, because if we have more results, there are more likely to be some bad results in there. So we will have to work on the presentation. But quite objectively, I think it's going to be better. The thing is, with DNSMON you're really interested in the vertical part of the graph, and the vertical part of the graph means there was a problem at or close to the server, the name server in question. So we're interested in these patterns, basically saying that some of those probes, where each line is a probe, each of those probes had some packet loss here. That's what you want to know and that's what you want to analyze. Whereas you're not interested in this horizontal stuff here; it doesn't really matter all that much whether all these probes are the same all the time, because you're really interested in when something happens to more than one of them. So I'm quite confident that we can base a future DNSMON on Atlas and have quite similar or even better visualizations and data representations. And I encourage all the DNSMON users to actively help us develop that. And at the same time, what we want to do is introduce some of the things that you wanted, like alarms and all that kind of stuff.
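The vertical-versus-horizontal distinction amounts to correlating loss across probes at the same instant. A minimal sketch of that idea follows; the input layout, one time-aligned loss series per probe, is assumed for illustration and is not the actual DNSMON data model:

    # Flag "vertical" events: instants where many probes see loss at
    # once, pointing at the server side rather than at any one probe.
    # The input layout (time-aligned loss fractions per probe) is an
    # assumption for illustration.
    def vertical_events(loss_by_probe, loss_threshold=0.1, probe_fraction=0.5):
        n_probes = len(loss_by_probe)
        n_steps = len(next(iter(loss_by_probe.values())))
        events = []
        for t in range(n_steps):
            lossy = sum(1 for series in loss_by_probe.values()
                        if series[t] >= loss_threshold)
            if lossy >= probe_fraction * n_probes:
                events.append(t)
        return events

    # Probes p1 and p3 both lose packets at step 2: a server-side suspect.
    series = {"p1": [0, 0, 0.5, 0], "p2": [0.2, 0, 0, 0],
              "p3": [0, 0, 0.3, 0]}
    print(vertical_events(series))  # [2]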
So I think that was the last slide. Oh, no. Oh, yeah, that's what I already said: visualization, that kind of stuff. We want to do the presentation and the access to the data inside RIPEstat, because that's where we do visualizations, and it also allows you to correlate the DNSMON results with BGP results, registration information and so on. But we will develop a concrete plan for this, and we're really looking for input, specifically from current DNSMON users, and we'll publish that and have that dialogue on the MAT Working Group mailing list. That was the last slide. Any questions specific to DNSMON?
AUDIENCE: A remote question that was posted on the mailing list. It's from... "I've got a few comments on our use of DNSMON which might be useful."
DANIEL KARRENBERG: This is a long mail. That's going to take four or five minutes.
AUDIENCE: We have been hosting a TTM box for a long time. It's the only service we've been using; it's the only way we have to monitor the quality and reachability of our DNS service, specifically as many of the name servers are in anycast clouds. It's used for everything from casual checking that everything is green down to specific debugging, even using the raw data. What matters to us is easily interpreted data and graphs with the ability to drill down, historical information in the graphs, and detailed raw data.
DANIEL KARRENBERG: Thank you. I read the mail and all that is something we want to preserve.
PETER KOCH: Peter Koch, DENIC. We are a paying DNSMON customer, and I know we have a firm end date for the TTM boxes. I have not heard a firm start date for an Atlas-based DNSMON service, and that imbalance is of concern to me.
DANIEL KARRENBERG: We have also not said that we're going to stop the DNSMON service, only the TTM service. So draw your conclusion. The conclusion...
PETER KOCH: That is not a satisfactory answer. I'm sorry.
DANIEL KARRENBERG: The conclusion is that we intend to continue the DNSMON service, maybe changed a little bit, but a continuing DNSMON service; that's the position.
PETER KOCH: I hear you say, and read, "we intend to". I would rather have that as a firm commitment, or else have the constraint really accepted and appreciated. You definitely said you're shutting down TTM, and you only intend to continue the rest.
DANIEL KARRENBERG: We have a commitment, you have a contract for DNSMON and we're going to honour that contract obviously.
PETER KOCH: Okay. Thank you.
DANIEL KARRENBERG: Of course, if you get anal and say, "absolutely, I want the same graphs," then we'll get anal about the contracts.
PETER KOCH: I didn't say that and don't intend to.
DANIEL KARRENBERG: Good. Then we're on the same page.
CHAIR: Daniel mentioned the mailing list. I think it would be useful, if you want to continue this, to take it to the mailing list, and I would encourage you to respond to the post about DNSMON.
DANIEL KARRENBERG: I respond to each and every one of them, so I hope there will be a few more.
CHAIR: Maybe you could address some of the issues in a summary of this talk, so we could start a discussion on that.
Is there any other business for the working group? If not, I'd like to thank all the speakers, the scribe, the Jabber monitor and the stenographer, and I declare this session closed.
Thank you.