Archives

These are unedited transcripts and may contain errors.

DNS Working Group:
18 April 2012, 9 a.m.–10:30 a.m.

CHAIR: Good morning everybody. This is the DNS Working Group's first session for the RIPE 64 meeting. If you want to discuss address space, you should be downstairs, you shouldn't be this room. Before we get started, just one or two obvious preliminaries just to remind everybody. This meeting is being recorded and it's also being webcast so if you do have any comments to make please go to the microphones and make them because that's the only way that your words are going to be recorded and also for the benefit of people out there on the interweb, the only way they are going to hear the discussion is if the audio is being properly picked up and when you do come to the microphones, please state your name and your affiliation.

One or two other announcements to make. If you have got any mobile phones, please put them on to silent mode or something like that, it's rather annoying if phones start ringing.

The minutes of the last meeting were circulated just last week, my apologies for the delay in getting them out to the mailing list. I would like to ask if there are any errors or corrections on those minutes. Can we accept them as being a true record of our last get together in Vienna? I guess we can consider them as being accepted. Thank you very much.

We have got a fairly crammed agenda for both sessions today so we are going to try and move things fairly quickly. I would like to start off with our first presentation, which is going to be Robert Kisteleki with an update on what's been happening with the NCC's DNS operations.

ROBERT KISTELEKI: I am from the RIPE NCC, acting as a messenger to give you the DNS update from the RIPE NCC. As you all know, we are running K-root, one of the root nameservers. Operations of that have been stable, with 18 instances at the moment, 5 global and 13 local nodes. We have enabled IPv6 recently and that's been running fine. We are planning OS and software upgrades this year, migrating to CentOS 6 and NSD 3.2.10, I believe, and we also plan to expand the network with some new instances later this year.

Two interesting graphs. One of them is the K-root TCP query rate. Ever since the roll-out of the signed root zone, we have seen a very stable 42 queries per second TCP query rate. 42 seems to be a nice number.

On IPv6, well apart from some disturbances it's just growing and growing, which is a good thing, we believe.

About DNSSEC: at the moment we are handling 117 signed zones, including forward zones, reverse zones, ENUM and everything else that we manage. One of them is still an island because the DS record is waiting to be approved in AfriNIC. We did the last successful KSK rollover in November 2011, and the next one is November this year; that's because we have updated our DPS, the DNSSEC practice statement, such that we no longer do key rollovers every six months but once every year.

DNSSEC growth in the reverse zones. This is data coming from the RIPE database, so that's where you can update your data if you would like to. There has been a steady growth as you can see, but we are still at about 1% of the total number of zones. So there is some room to expand still.

CcTLDs: we are running 77 ccTLDs. This is a slight decrease from the last time, and we moved this service to our Anycast cluster recently, in October, just before the previous RIPE Meeting. This also triggered IPv6 glue to be added to these zones, so that's a good thing. And we are running NSEC3 on these zones.

NS.ripe.net. This is for reverse zones, and it had been a small local cluster inside the NCC network for some time; it has been moved to the DNS Anycast cluster as well, just about a month ago.

This was a planned change. What we have seen is that the queries over IPv6 went down as expected, in about six hours they were essentially gone, but queries over v4 still continue, and that's because we still have one IPv4 glue record for the Cameroon zone. We are talking to ICANN about this and we are expecting that there will be an update soon, so this particular IP address will no longer be referenced and the query rate will go down.

Two graphs about that. This is how the IPv6 query rate went down when we made the change. This is how the IPv4 rate went down and if you look at the big picture, this is what we have now. So around the middle of this graph you can see that we made the change and ever since then, the query rate has been rock solid. It's very, very predictable and we assume that that's only because of that particular zone.

ENUM, you will hear more details about ENUM, so the bottom line is business as usual here.

DNSMON: this is a service, as you know, that we currently provide for monitoring DNS. It relies on the TTM network and we would like to transition it to the future, well, the existing, Atlas network. You will hear more details about this in the NCC Services Working Group as well as in the MAT Working Group; NCC Services is today, MAT is tomorrow afternoon. So if you are interested, please come and hear those talks.

And finally, this is about the 31st of March. Some people were expecting something interesting to happen. Well, we did see something interesting, and that was the query load on our DNS monitoring servers: everyone was visiting just to check if anything broke. Other than that, not much happened.

Any questions?

CHAIR: Any questions for Robert?


AUDIENCE SPEAKER: What are your criteria for running slave servers for TLDs and I am, of course, referring to the generic TLD programme of ICANN?

SPEAKER: That's a good question which I am not qualified to answer but there is someone in the room who is.

AUDIENCE SPEAKER: Daniel Karrenberg, RIPE NCC. Let me take a minute. The secondaries for TLDs are an historic thing. We started doing that as the RIPE NCC when DNS was starting, and many of these ccTLDs in emerging countries were struggling. Since we were interested, and the community was interested, in the stability of the DNS at the time, we basically said okay, the RIPE community will fund help for those emerging countries. We have since, and I think that was about five years ago, had some push-back from the RIPE NCC membership about paying for it, especially for very developed TLDs. So, I don't think that it would be appropriate, under the current policies and membership sentiment, to provide secondary services for new TLDs. Because if you can afford $200,000 and more to even make an application, I think you can buy some DNS service somewhere, which is now a commodity, and at the time when we started this it wasn't. I don't think that the RIPE NCC membership would support us subsidising those new gTLDs, if that was the question. So, you know, if you think we should be doing this, then make it clear in the Services Working Group, but I don't think you have much of a chance.

AUDIENCE SPEAKER: I had the same understanding, but I was just surprised about the number of...

DANIEL KARRENBERG: Okay, give me 20 seconds more.
These are all very small, not all, but most of them are very small ccTLDs, and they are all ccTLDs. We have, over the last decade or so, pushed out the bigger ones, including yours actually; initially we were doing .nl secondaries as well, and because we got this push-back from the membership, we went to the larger and more self-sustaining ones and said: why don't you move out? We love you, but it's not really for the RIPE NCC membership. So, of the 77, I am quite sure you can find the list somewhere, and we could provide a list to the Working Group if that is interesting, but there are not really big ones or self-sustainable ones among those any more. They are really the smaller ones.

CHAIR: Olaf?

AUDIENCE SPEAKER: Olaf Kolkman. One of the statistics that I am actually interested in, and I obtained those from the RIPE NCC a few months ago, is the amount of DNSKEY queries to the root servers. There is a request to publish those in the next reports, because they give an indication of the amount of deployment of DNSSEC-validating resolvers in the world, since most of them will query for DNSKEY records if they do validation.

Just to give a summary of what I have seen there: it's a noisy plot. It's very easy to claim growth, but it's also very easy to not claim any growth in that plot. So, it would actually be interesting to show that and see if there are any developments there.

SPEAKER: Thank you very much, we'll take that on board and we will provide these details at the next update.

CHAIR: Okay. Thank you very much Robert.
(Applause) Okay, next up we have Benjamin Zwittnig from the Slovenian ccTLD registry, who is going to give us an update on what's been happening in Slovenia, specifically with their plans for rolling out DNSSEC for the TLD.

BENJAMIN ZWITTNIG: I work at ARNES. Last year we were busy implementing DNSSEC, and I will explain what we had done before we implemented it, how we implemented DNSSEC, and what still needs to be done.

There was very low interest in DNSSEC, but we decided to implement it now, because otherwise we would not be in a position to implement it when panic emerged for one reason or another.

We started our DNSSEC implementation on our recursive nameservers. We run recursive nameservers for our community: since we are a national research network, we also provide recursive service for schools and universities and that kind of organisation. What we have learned from turning on DNSSEC validation: some domains stopped resolving, but that was expected. Nobody reported any problems with CPEs, although we expected some. There was a slight increase of traffic and a slight increase of load on the nameservers. Around 40 percent of queries have the DO bit set, and reply lengths still remain around and below 512 bytes.
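Since the DO bit comes up here, a minimal sketch of what a DNSSEC-aware query looks like on the wire may help. This is illustrative standard-library Python, not ARNES's tooling; the fixed query ID and payload size are assumptions for the example.

```python
import struct

def make_dnssec_query(name, qtype=1, udp_payload=4096):
    """Build a DNS query with an EDNS0 OPT record carrying the DO bit.

    Validating resolvers set DO=1 so that authoritative servers
    include RRSIG records in their replies.
    """
    header = struct.pack("!HHHHHH",
                         0x1234,  # query ID (fixed here for illustration)
                         0x0100,  # flags: standard query, RD=1
                         1,       # QDCOUNT
                         0, 0,    # ANCOUNT, NSCOUNT
                         1)       # ARCOUNT: the OPT pseudo-record
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00" + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN
    # OPT RR: root name, TYPE=41, CLASS carries the UDP payload size,
    # the TTL field is reused as extended flags; bit 15 is DO.
    opt = b"\x00" + struct.pack("!HHIH", 41, udp_payload, 0x00008000, 0)
    return header + question + opt

query = make_dnssec_query("ripe.net")
```

Replies to such queries carry signatures, which is why turning on validation nudges response sizes toward (and sometimes past) the classic 512-byte UDP limit.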

How we prepared for the DNSSEC signing: first we set up a test bed with two nameservers, one with the signed .si zone and the other recursive, with a trust anchor for that nameserver. We also had to write a lot of documentation and do a lot of testing: software, hardware, rollovers, different types of rollovers, not just key rollovers but also software rollovers and rollovers of... We also prepared a backup location with the same hardware and software set-up.

The most important thing, I believe, is to monitor the DNSSEC implementation, so we adapted our monitoring service to be able to monitor DNSSEC as well. We have some sanity checks before we publish the signed zone, where we sanity-check our data, and after the data is published we check the nameservers, the quality of the data on those nameservers, our chain of trust, etc. We also use RIPE's DNSMON for monitoring, and for traffic monitoring we use DSC and CACTI.

We are signing .si with OpenDNSSEC, and we use a Sun/Oracle HSM for the KSKs and SoftHSM for the zone-signing keys. There are also some other details about the keys on the slide.

On the 3rd of November we signed the .si zone, and on the 23rd of December the DS records were published in the root zone.

And after? No problems; the sun still rises. There was some increase of traffic, but that was expected, and also an increase of DS lookups, which was also expected.

What next? There is still little interest in DNSSEC in Slovenia, except for two registrars who are pioneering and are willing to sign some of their domains. There is no interest from Government institutions, and also no interest from banks and other financial institutions. At the moment we have around 20 delegations signed, mostly because we still need to adapt our registry to support DNSSEC. We would also like to sign the zones we operate as a registrar, primarily our own zones, and we would like to give our customers the opportunity to sign their zones, the zones we host for them.

We also need to raise some DNSSEC awareness among the DNS community in Slovenia, especially to get resolver operators to turn on DNSSEC validation on their nameservers. Some of them have already done this, but from some of them we got the reply that they won't implement it, since some domains won't resolve and they don't want to cope with angry users because of this.

And we would also like to encourage hosting providers to sign their zones, and registrars to adapt their registration systems to support DS records and to be able to submit DS records to the registry. We would also like to organise trainings and some workshops to encourage people to get in touch with DNSSEC, and maybe in a year or two to offer a signing service like the one Nominet is offering to their customers.

That's more or less...

CHAIR: Are there any questions?

Well, I have one, if nobody else has any. My question is about your management of the key-signing key and the key depository: do you have any kind of shared-key scheme like the one used for the root, or do you have any other mechanism for recovering the key-signing key?

BENJAMIN ZWITTNIG: The key-signing key is stored in an HSM, and we backed up the HSM during the key generation ceremony. We generated keys for five years, and at that time we also made a backup and restored it at the backup location.

CHAIR: Okay. Thank you, Benjamin. Another question?

AUDIENCE SPEAKER: Peter Koch, DENIC. You are also, I guess, a happy customer of these nice SCA 6000 cards from that company formerly known as Sun. Do you have any succession plan for this perfect piece of hardware?

BENJAMIN ZWITTNIG: Not really. We started to play with this card and found it useful, but we will probably replace this hardware in the future. We haven't had any real problems with them at the moment, but, yeah, we use them only for the KSKs.

AUDIENCE SPEAKER: Maybe there are people in the room who have the same nice piece of hardware in production and would like to get together to find some criteria for replacements and so on and so forth, because, yeah, the hardware works, but there are other parts that don't work so perfectly, I guess. Thank you.

CHAIR: Another comment?

AUDIENCE SPEAKER: Alex Mar. I was just following on from Peter's comment. We have essentially the same problem. We started with the Sun cards and found out midway that they were not just hiking the price but discontinuing the product, because, surprise, surprise, it can be used in non-Sun hardware. We are now using cards from Thales; they are a little bit more expensive, but they have a stable support chain.

CHAIR: Thank you.

AUDIENCE SPEAKER: Lars Liman from Netnod. I was curious: what's the process if a domain holder in the Slovenian top-level domain wants to have his zone signed or connected to the parent zone? What's the process for putting the DS record into the Slovenian top-level domain?

BENJAMIN ZWITTNIG: At the moment this is done manually only.

AUDIENCE SPEAKER: Okay. So he contacts you directly; they don't have to go through a registrar. Okay. Thanks.

CHAIR: No more questions? Thank you very much Ben.

The next speaker is Willem Toorop from NLnet Labs, and as we know NLnet Labs has been doing work on various pieces of DNSSEC software. Willem is going to tell us about a tool they developed called DNSSEXY.

WILLEM TOOROP: So, DNSSEXY is a software programme that prevents bogus zones from getting out into the wild. The idea actually originated from RIPE: about a year ago there had been some zones that were broken and were served anyway. In short, we looked for a way to prevent that, and that's how we came up with a verifying proxy server.

So, there are quite a few verifiers available; below is a list which I have taken from a website. But they either verify a zone file, in which case you need access to the zone file. If you can access the signed zone file before it's being served, then you are good and you don't need DNSSEXY. But if you have a proprietary signer or some other hardware provisioning system, that might not be possible. The other checkers check zones online, doing online signature checking, so they test zones after they are published, which is too late if you are trying to prevent bad zones from being published.

So, DNSSEXY is made to prevent that, and it's just another nameserver that acts as a slave to the hidden master but acts as a master for the public slaves. When it is notified that there is an update and it has transferred the zone, it does not yet notify the public slaves: first it spawns a verifier, feeds it the zone and also serves the zone to the verifier, so that both kinds of verifying software can be used, software that checks a zone file and software that does queries. And if the verifier assesses the zone successfully and says all is good, DNSSEXY notifies the public slaves, they transfer the zone, and it's out.
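The hold-then-release flow just described, transfer from the hidden master, run a verifier, and only notify the public slaves on success, can be sketched roughly as follows. This is a sketch only: the function name, the callback, and the exit-code convention are assumptions for illustration, not DNSSEXY's actual internals.

```python
import subprocess

def process_zone_update(zone_name, zone_file, verifier_cmd, notify_slaves):
    """Hold a freshly transferred zone until a verifier approves it.

    verifier_cmd:  external checker invoked with the zone file as its
                   last argument; it signals good/bad via its exit code.
    notify_slaves: callback that sends NOTIFY to the public slaves.
    Returns True if the zone was released to the public slaves.
    """
    result = subprocess.run(verifier_cmd + [zone_file])
    if result.returncode != 0:
        # Bad zone: keep serving the previous good version and
        # do NOT notify the public slaves.
        return False
    notify_slaves(zone_name)
    return True
```

The key property is that a failing verifier leaves the previously published (known-good) zone in place; the broken update simply never reaches the public slaves.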

So, all this software can be used before the zone is actually published. I noticed that, for example, the AFNIC ZoneCheck has an online service; one could utilise that from a verifier to check the zone before publishing it.

We have chosen to use NSD to implement DNSSEXY instead of building another nameserver because, well, I'll come to that later, but it's proven technology. It's very flexible in the way you can specify how notifies and transfers are done, but most of all, the way NSD works fits DNSSEXY's operation very well. I think it's nice to demonstrate how NSD works. Those shapes are daemons that NSD has forked off. First it starts and reads the database, or the zone files if it has them. It starts a transfer daemon and then it starts two children that actually serve the zone, and it does that by forking. If you fork a process, it doesn't copy the memory of the parent process, but it has that memory available, and it only copies the parts you write to (copy-on-write).

So when a notify is received by a child process, because that's the one that's listening, it informs the transfer daemon. The transfer daemon transfers the zone and appends it to a file on disk. Then the main process forks off a reload process that has this copy-on-write memory version of the database. It merges the changes into the database, then forks off new children, which have the copy-on-write memory version of the new database, kills the parent, and starts serving the new zone. Now, at the moment when the reload process has been forked and the zone has just been merged, but no children have been forked yet, it's very suitable to start a verifying process and start serving that zone so it can be assessed.

There is also a disadvantage of NSD in this setup: NSD has no support for incremental transfers, so the public slaves transfer the complete zone when notified.

This is how it is configured. In NSD you have the server section and one or more zone sections. The verifier to be spawned is specified with the verifier option in the zone section. The zone to be verified is made available to the verifier process via an environment variable. There are some options that control how verifiers should be spawned; for example, if you have multiple zones and a multi-core machine, you can verify multiple zones concurrently. Verifying should not take forever, so you can set a timeout in the server section for all verifiers, and in the zone section there is a per-zone timeout that defaults to inheriting the value from the server section but can override it.

Those are the IP addresses that are available to a verifying process for assessing the zone, for doing the online queries. There is a default port if you don't specify one; I chose the number because if you turn the digits a bit, it resembles the word "sexy". The zone can also be fed to the standard input of the verifier. Other settings are available to the verifier in the environment variables at the bottom.
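Putting the pieces just described together, a configuration might look something like the sketch below. The option names here are paraphrased from the talk, not copied from DNSSEXY's documentation, so treat every name as an assumption and check the release for the exact syntax.

```
# Sketch of a DNSSEXY/NSD-style configuration as described in the talk.
# All option names are illustrative.
server:
    # address/port on which a spawned verifier can query the pending zone
    verify-ip-address: 127.0.0.1
    verify-port: 5347
    # how many zones may be verified concurrently on a multi-core box
    verifier-count: 2
    # server-wide upper bound on how long a verifier may run (seconds)
    verifier-timeout: 60

zone:
    name: "example.nl"
    zonefile: "example.nl.zone"
    # external checker; a non-zero exit keeps the zone from the public slaves
    verifier: validns %s
    # per-zone override of the server-wide timeout
    verifier-timeout: 120
```

The zone name and the choice of validns as the checker are placeholders; any verifier that reads a zone (from file or stdin) or queries it over the configured address would fit the model described.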

So, what happens when an update is received by a DNSSEXY nameserver? It receives a notify and transfers the zone; when the zone verifies, the serial is put in a table, saying the good zone is number 10, this zone, and then it notifies the public slaves.

Now, when the hidden master has a bad zone that doesn't verify, DNSSEXY remembers the serial number that didn't verify, so that when the zone needs a refresh it won't transfer it anyway; nameservers not only transfer when they are notified, but also when the refresh timer or the retry timer expires. If the zone expires, then DNSSEXY will not serve the zone any more but, unlike a plain NSD, it still provides transfers for the zone. I think this is kind of interesting, because then the zone is still available and we know it is good.
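The bookkeeping described here, remember the last good serial, remember failed serials, refuse refresh-driven transfers of a known-bad serial but let an explicit NOTIFY through, can be sketched as below. The class and method names are invented for illustration; real serial comparison also wraps modulo 2^32 (RFC 1982), which this sketch ignores.

```python
class ZoneGate:
    """Track which zone serials passed or failed verification.

    Mirrors the behaviour described in the talk: a serial that failed
    verification is not transferred again on refresh/retry, but an
    explicit NOTIFY always forces a new transfer attempt.
    """

    def __init__(self):
        self.good_serial = None
        self.bad_serials = set()

    def record(self, serial, verified_ok):
        if verified_ok:
            self.good_serial = serial
        else:
            self.bad_serials.add(serial)

    def should_transfer(self, serial, notified=False):
        if notified:               # notifies override the failure memory
            return True
        if serial in self.bad_serials:
            return False           # refresh/retry of a known-bad serial
        # Plain comparison stands in for RFC 1982 serial arithmetic.
        return self.good_serial is None or serial > self.good_serial
```

So a serial that once failed is only retried when the master explicitly notifies, which matches the "notified zones are always transferred" escape hatch mentioned next.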

I have just been notified that I have two minutes left. It may be that a zone that failed gets fixed in the future, and there is a way to override the behaviour that a failed serial is not transferred, and that is by notifying: notified zones are always transferred.

The code base has just been reviewed within NLnet Labs and I have to merge the review, or parts of it. So the beta release candidate will follow shortly. Those are the mailing lists and websites. You could also try it from...

This I am going to skip. This is an interesting idea, if I have two minutes left, I would like to present it.

So, this is a future improvement that's not yet in DNSSEXY, but several people have come up with it and I think it's interesting to show.

It's also to overcome this lack of incremental transfers. With NSD it is also very possible to notify independently from the transfer, so you are notified by a different server than the server you transfer from, which would allow a DNSSEXY proxy to notify the public slaves while they transfer from the original master. However, what happens when a slave refreshes or retries? Then it should not simply transfer from the master, because it might transfer a bad zone. So, if your public slave is NSD as well, we could provide an option in NSD that says: only transfer those zones whose serial was notified by the DNSSEXY proxy. But then we have a bootstrap problem, because what happens when you just start a public slave? In that case maybe you wish to transfer from the proxy as well.

This I mentioned earlier. Also, note that DNSSEXY serves the zone to the verifiers and the zone is transferred to, or fed to, the verifier, but it doesn't need to do that. The verifiers could also query the zone from the master, and maybe read the complete zone by transfer, so it might be interesting to have an environment variable for that. It might also be interesting to have an incremental zone feed. DNSSEXY could serve the zones to the public as well, but it might also be interesting if it could verify a zone that it just loaded itself.

So, that's it. I think the creation of DNSSEXY also raises some interesting questions about what it means to be authoritative and what it means to transfer, so I have these questions for you, which you may answer later, and maybe you have questions for me.

CHAIR: Any questions?

AUDIENCE SPEAKER: Jakob Schlyter. As we have just killed off our auditor and it will no longer be part of OpenDNSSEC, I think this would be quite interesting. My question is: is this going to be part of NSD or is it a fork, and are there plans of actually reintegrating later NSD fixes back into DNSSEXY?

WILLEM TOOROP: Yes, fixes to NSD will be integrated into DNSSEXY. It's a fork.

AUDIENCE SPEAKER: It will be a separate package?

WILLEM TOOROP: Yes, yes.

CHAIR: Okay. Thank you very much.

(Applause)

Okay, continuing with the DNSSEC theme for this morning's session, the next speaker is Patrik Wallström from .SE, who is going to give us an update on quality measurements of DNSSEC in Sweden.

PATRIK WALLSTROM: Good morning. First, a little bit about our health check project. The health check is a project where we try to analyse the quality of the Internet in general in Sweden. We first started with DNSSEC and DNS, as that is our core business, but we have expanded the project to also include network measurements, for example for net neutrality, where we have probes located at different operators where we do a lot of funny stuff.

The health check survey is a report published once a year where we publish all our findings about the general quality of DNS in Sweden; it is a report focused on explaining the DNS issues that we find and what the proper behaviour should be.

For this purpose, we also built a health check measurement system with DNSCheck as the core component, but we also scan all the web pages associated with all the .se domains, and we do some e-mail checks as well. Part of the health check system is also an analysis part, which you can see a little bit of in the picture, just showing off, you know, the most notable measurements as a dashboard.

Now, in December, we had a general campaign where we lowered the prices of all the .se domains, and as a part of that we had a DNSSEC campaign where we subsidised the prices for DNSSEC-signed domains even more. That explains the fast growth of DNSSEC in the graph on this slide. The first spike is actually one DNS operator who started to sign, but we discovered a lot of issues with the signed zones, so they quickly disabled them again. So, I will talk a little bit about that process.

First, a little bit about the market situation. We have a total of 130 registrars, where the three largest account for 50% of all the domain names. So it's a really concentrated market, and among the nameserver operators the two largest have 36 percent of the market. So, when we do DNSSEC campaigns we of course target the largest operators, but we try to help all of our registrars equally, of course.

So, during this campaign, we decided to help them with DNSSEC as much as possible. We started to check all the newly signed zones as much as possible. We also do special reports targeted at registrars, like regular e-mails that are sent out.

And we also have a new DNSSEC error function, because when you have a lot of signed domains, there is an issue with transfers: when a domain is transferred between registrars, and it also moves to a new DNS operator, they often forget to remove the DS record from the registry. So we do reports on DNSSEC failures and send them to our customer service, so that it can notify the registrars if there are any notable errors.

We also have several internal monitoring tools that we are looking at to further help our registrars. And we have also recently published a report on DNSSEC quality.

So, during this campaign, I developed a tool for quickly analysing the quality of the DNSSEC-signed domains. The tool is divided into different parts. The collect part is a script that takes a list of domains, or a single domain name, creates all the DNS queries for gathering the DNS information, and stores the results on disk. And we have the analyse part, where we can analyse all the results. It's an open source project as well. There is an example of how we do a quick analysis of the RCODEs on the domains that you feed in.
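The collect/analyse split described here can be illustrated with a small sketch: a function that tallies response codes over stored results, standing in for the analyse part. The record layout and sample data are invented for the example and do not reflect the tool's actual storage format.

```python
from collections import Counter

def rcode_summary(results):
    """Tally DNS response codes across a set of collected results.

    `results` is an iterable of dicts like {"domain": ..., "rcode": ...},
    standing in for the responses the collector script stored on disk.
    Returns {rcode: (count, percentage)} ordered by frequency.
    """
    counts = Counter(r["rcode"] for r in results)
    total = sum(counts.values())
    return {rcode: (n, round(100.0 * n / total, 1))
            for rcode, n in counts.most_common()}

# Hypothetical collected data for four .se domains
sample = [
    {"domain": "a.se", "rcode": "NOERROR"},
    {"domain": "b.se", "rcode": "SERVFAIL"},
    {"domain": "c.se", "rcode": "NOERROR"},
    {"domain": "d.se", "rcode": "NXDOMAIN"},
]
summary = rcode_summary(sample)
```

Separating collection from analysis in this way means the (slow, network-bound) gathering runs once, while analyses like this can be rerun cheaply over the stored data.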

So the analyse part of this project is a lot of tools to quickly find out which nameservers are having errors, the current amount of SERVFAILs, and a lot of the things that are included in this presentation, so I will talk more about that now.

So, in March, we published this report on DNSSEC quality, where we also tried to explain all the issues in a lighter format than RFC 4641. RFC 4641 is not a document that most DNS operators in Sweden will read; they just want a light explanation of what to do. So we tried to explain everything that we found, and what you should think about, in a more consumer-friendly format.

Some results from the report. We analysed the previous health check survey and compared the results with the last one, which was in October. The way the health check survey works is that we have a smaller group of domains that we consider important, for example Government domains and large media sites and the like, 912 domains in total, and that quality improved a little bit; but the general control group, which is 1% of the domain names, has increased its number of DNS errors.

So, the rest of the presentation will be about the DNSSEC errors we found, or the DNSSEC information. My tool asks seven questions of two different local resolvers, one that does DNSSEC validation and one that does not. For the first five records, the A, DNSKEY, MX and so on, it does validation, so the results show which domains are generating SERVFAILs. The current amount of SERVFAILs is about 6% of the total number of DNSSEC domains, which is a lot.

The reason why is that some DNS operators have a large number of domains that nobody cares about. So they won't fix it.

Signature lifetimes are something that we look at; this is the inception time. You have the number of domains on the Y axis and the inception time on the X axis. These are signatures that are valid; the oldest signatures are more than 150 days old, and you can see that most of the signatures are only one day old, which is good, because they are kind of fresh.

The expiration times: most commonly they have two weeks of signature validity, which is pretty good. But a notable amount of domains have 0 days; these are valid signatures, but they are expiring any second, so... I haven't done this report over time, such as Ed Lewis has done, but that is something that I will do as well. And there are a lot of domains that have signatures that are valid for a very long time, which you can see here; that's why I marked them as yellow. This is not normally a problem if you know what you're doing, but I don't expect that people know what they're doing, so...
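The inception-age and time-to-expiry figures discussed here come straight out of the RRSIG timing fields. A minimal sketch of that computation, assuming the standard RRSIG presentation format (YYYYMMDDHHMMSS in UTC) and a hypothetical signature, could look like this:

```python
from datetime import datetime, timezone

def rrsig_ages(inception, expiration, now=None):
    """Compute how old a signature is and how long until it expires.

    Timestamps use the RRSIG presentation format (YYYYMMDDHHMMSS, UTC),
    as seen in zone files and dig output. Returns (age_days, days_left).
    """
    fmt = "%Y%m%d%H%M%S"
    now = now or datetime.now(timezone.utc)
    inc = datetime.strptime(inception, fmt).replace(tzinfo=timezone.utc)
    exp = datetime.strptime(expiration, fmt).replace(tzinfo=timezone.utc)
    return (now - inc).days, (exp - now).days

# e.g. a signature made one day before the talk with ~two weeks validity
now = datetime(2012, 4, 18, tzinfo=timezone.utc)
age, left = rrsig_ages("20120417000000", "20120501000000", now=now)
```

Histogramming `age` over all collected signatures gives the inception plot described above; histogramming `left` gives the expiration plot, where 0 days left flags the "expiring any second" cases.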

Just as Edward Lewis, I found that most people are using both DS digest types, 1 and 2, so this is 50/50 also.

There is nothing strange about the DNSSEC algorithms found. Most domains use RSA-SHA256 keys, which is the default in most software, I guess.

Key lengths are just as Edward Lewis found: 2048-bit keys for the KSK and 1024-bit keys for the ZSK. Here is a difference, though: there are a lot of people who have chosen to use NSEC.

When you look at resolver or authoritative nameserver performance, you can see that the most common number of NSEC3 iterations is 1, which means that these are probably low-security zones and they are using the default.

Shared keys are popular among the Swedish registrars; this means using the same KSK for a lot of zones. So this is the main model today, I guess. There are some numbers about the number of DS and KSK records per domain. Here is an interesting find: we looked at the SOA expire times and compared them to the RRSIG expiration times. There is a RIPE document, RIPE-203, which recommends 41 days for the SOA expire, which is a reasonable default, I guess. But if you compare this to the RRSIG expiration time, this graph gives an explanation. This is the difference between the RRSIG expiration and the SOA expire; RFC 4641bis recommends the RRSIG expiration to be higher than the SOA expire. The reason for this is to notify the nameserver operators early on that they don't have fresh data, before the RRSIG signatures expire.

So, we have a lot of mismatches between those figures for .se zones, and that takes some explaining.
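The SOA-expire versus RRSIG-expiration relation just discussed reduces to a one-line check per zone. A sketch, using the RIPE-203 value from the talk (the helper name and the second validity figure are illustrative):

```python
def soa_vs_rrsig_ok(soa_expire_seconds, rrsig_validity_seconds):
    """Check the relation discussed in the talk (and RFC 4641bis):

    the RRSIG validity period should exceed the SOA expire value, so a
    slave that loses contact with its master goes stale (and stops
    answering) before it starts serving expired signatures.
    """
    return rrsig_validity_seconds > soa_expire_seconds

# RIPE-203 suggests a SOA expire of 1000 hours (~41 days); the common
# .se signature validity observed in the talk was about two weeks.
soa_expire = 1000 * 3600
two_weeks = 14 * 24 * 3600
mismatch = not soa_vs_rrsig_ok(soa_expire, two_weeks)
```

With a two-week validity against a 41-day expire, the check fails, which is exactly the kind of mismatch the .se measurements turned up in large numbers.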

The validity length of signatures is way too short. This is basically the same finding as Ed Lewis presented yesterday. There are still a few domains using 512-bit keys. We also found some DSA keys, which is unusual. I think we can skip the dual publication of DS records; the recommendation in the RFC from 2006 says you should publish both digest types, but I think most resolvers have support for both record types by now.

And the SOA expire lacks connection to the RRSIG expiration time.

So what we want to do is do this over time, so that you can see rollovers of keys, how often you refresh your signatures, and also the salt values. But we also want to see the introduction rate of new algorithms, for example GOST. We also want to look more at shared keys and combined signing keys, which most software doesn't have support for yet, and also TTLs, which you can't trust when you go through a resolver, so you have to have your own resolver to do this. Maybe also write a new RIPE document just explaining what values you should have for your DNSSEC signing.

So, thank you for this. And the code and the report are published at these URLs.

CHAIR: Thank you Patrik, any questions?

AUDIENCE SPEAKER: Peter Koch, DENIC. Thanks for that interesting report. One quick question: you mentioned the signature lifetimes, and I didn't really gather, was it the KSK signatures over the DNSKEY or the ZSK signatures over some random RRset?

PATRIK WALLSTROM: It's a random one, so it's both. I found that it was always the same, without exceptions, so...

AUDIENCE SPEAKER: That's interesting. Second question: you mentioned there is a number of, I don't know, was it registrars or nameserver providers, that have a large number of domains. So all these statistics about averages and popular values are kind of biased or dominated a bit by single entities making decisions that affect the average more than average, so to speak. Have you thought about, well, factoring that out, like finding the clusters and then finding out how independent the decisions actually are? That's probably not an easy task, so I'm...

PATRIK WALLSTROM: It depends on how you actually do that. I think you see the defaults of the DNSSEC signer used almost always; they don't change the values when they start to run DNSSEC.

PETER KOCH: That's true. That's one factor: entities following the same defaults, the same software maybe. But should one of these entities, say with 15,000 zones, decide that they want, I don't know what, 1295-bit ZSKs, they will obviously influence this whole thing dramatically, given that while you do have a significant deployment, 15,000 zones would still make a difference. The question here is: we are looking a bit at this kind of swarm intelligence, but this is more lemming mode, right, and the difference is the lack of intelligence, kind of, or the distributed intelligence. So I'm wondering, and that partly pre-empts a discussion that we are probably going to have with Ed's presentation as well, how much are we actually judging what people are doing here, or how much is everybody just following everybody else and waiting for some, like, dissenting entity to break this mode?

CHAIR: Next?

AUDIENCE SPEAKER: Philip, I have a question from a remote participant from DNS.LU, and he wants to know whether you compared your statistics to any other zones, like for example ??

PATRIK WALLSTROM: I can do that quickly, so I will do that.

AUDIENCE SPEAKER: I'd like to stress the need for better recommendations here, possibly from the RIPE DNS Working Group. What we have seen is that some software vendors, including ourselves, ship with not very good defaults, and people tend to either use the default values while signing or they use the values from their neighbour or friend; a lot of people have used the default values from the early .se signings, and we know better now. So if we can take up this work in the Working Group, that would be good.

CHAIR: Jakob, did I hear you say you are going to volunteer to author that document?

AUDIENCE SPEAKER: I am not sure you heard that, maybe I heard that. I didn't say that, but from OpenDNSSEC, we have helped to provide better defaults and, yes, I will gladly do that.

CHAIR: Thank you.

AUDIENCE SPEAKER: Can I squeeze two in? George Michaelson from APNIC. The first one is I think I heard you say on the slide 6 percent are looking dodgy, bad.

PATRIK WALLSTROM: Those don't work.

GEORGE MICHAELSON: So this is clearly a statement that no one is doing validation, because if one in 15 questions failed, you would not tolerate the situation. So...

PATRIK WALLSTROM: All major ISPs in Sweden do validation.

GEORGE MICHAELSON: One in 15 lookups are failing ??

PATRIK WALLSTROM: That's not statistics that works. Those are domains that are not used for any purpose.

GEORGE MICHAELSON: I understand, right. So these are signed domains which are faulty, but no one looks, so there is no perception of a problem. Okay. Thank you, that clarifies one thing.

Second thing. I am not a DNS expert, I am not a security expert, I try not to pretend to be one, but I have a sense I have carried from many sessions discussing DNSSEC that in our discussions of key length, everything goes up. We have this fear and paranoia about signature lifetimes and key lengths and protecting the farm, and so it gets bigger and bigger and bigger, and I do really question why these numbers only go up and why no one says, you know, this is transitional security; we could have keys that were 512 bits long, and within the lifetime of use of the key no one will break it, and there is no downside. Is it just me, or do we have to go to a future of 4,000-bit-long ZSKs? I am not an expert.

OLAF KOLKMAN: In another forum, I have been involved in trying to get a recommendation out of the door. That document is currently in its second iteration; it's RFC 4641bis, which is about to go to last call in the IETF Working Group, and what happened there is that we stepped even further away from recommendations than the original document and have given much more leeway to various modes of operation. I think getting recommendations out is a wise thing to do, but be careful about the recommendations' target, because there are many and they're different. This is not an easy task. And it's very difficult to get consensus on it.

AUDIENCE SPEAKER: Peter Koch once again, wearing two hats. One is, yeah, being involved in this other forum that Olaf mentioned. The observation there was: it is possible to get consensus on a trade-off document or trade-off style documents. It's close to impossible to reach consensus on concrete figures for recommendations. And changing hats, as the author of RIPE-203, I remember that we had that very problem at the time, and RIPE-203, in the introduction, explicitly states that the document makes recommendations for small and stable zones, meaning little content, probably www and the mail server, and stable as in we are not changing the data there. Of course the stability would be the stability as changed by the introduction of DNSSEC, but still. So it might be useful to have this kind of defaults there. I am just wondering what the target audience is, so I am probably echoing Olaf here. These small and stable zones are usually, these days, provided by hosting providers and similar enterprises. So I am not really sure whether these want the defaults, and I am waiting for Ed to talk about compliance here. So we definitely would have to come up with a sensible target audience before even thinking about writing this document.

PATRIK WALLSTROM: When you author tools such as DNSCheck or Zonecheck, they check for a sane zone configuration. The RIPE document is maybe the only one that is short enough to point at, and from a trustworthy source, so, for example, the Swedish DNS community can look at it and see that, oh, DNSCheck is doing sane stuff. The RFCs are too many and there are basically no recommendations for anything in there. So those documents are very hard to point at when you want to explain why you are flagging some validation as an error or a warning. So, that's basically it.

PETER KOCH: I am painfully aware of that split here, which is why RFC 4641bis is at 50-plus pages, for various reasons, but one of these is explaining the trade-offs between different choices. And again, the simple and crisp nature of RIPE-203 has the downside that it is easily misapplied to the wrong target, and... yeah, just waving the warning flag again.

AUDIENCE SPEAKER: Randy Bush, IIJ. 15 years out, we built something with more knobs and switches than a 747. We crash regularly, right. The only people who can fly it are expert pilots and we don't have enough of them. The only reason I can conceive that we have this is that the pilots union wants to keep salaries high. This thing is fragile, etc., and broken and people are inventing more things to pile on it before it flies. Okay. Let's either get rid of some of the knobs, or have default settings that work, or document the damn thing in ways that normal general aviation people can fly it. Otherwise, it is a continuing disaster.

CHAIR: Thanks for that. No more questions? Thank you very much Patrik.

(Applause.)

The next speaker, talking about DNSSEC and issues around fragmentation problems, is Roland from SURFnet. This is one of Randy's things about pilot error; this goes to the heart of that too.

ROLAND van RIJSWIJK: Definitely. I would like to talk to you about dealing with hosts that don't get fragments, because that's an issue we ran into when we signed our zone.

Just a brief introduction. SURFnet is a research network in the Netherlands. We signed our main domain, surfnet.nl, somewhere in 2010, and one of the things we noticed is that we immediately ran into trouble once we published our DS in the .nl zone. One of the largest ISPs, or rather the largest ISP in the Netherlands (for those of you in the know, they have a green logo), had a big issue with our newly signed domain, because suddenly their customers couldn't resolve our zone any more. And that ended up being a problem for me, because I had a queue of colleagues standing by my desk telling me that I had broken our zone and they couldn't work from home any more and it was all my fault. Which was a bit annoying, so we started researching what the problem was. And as it turned out, this ISP had an ancient firewall, which was configured years ago, nobody dared touch it, and it was blocking UDP fragments at the service network's edge.

I'd like to show you a picture to make it a bit clearer. What you can see in the picture is our authoritative nameserver at the top, and the recursive caching nameserver at the ISP at the bottom. Arrow number 1: they send a request to our authoritative nameserver, to our zone. Number 2: it replies, and that goes onto the Internet, and because we signed our zone, we are now serving much larger answers, and some of them exceed the MTU and get fragmented in transit. Then, arrow number 3, the first fragment arrives at the firewall; that goes through and arrives at the caching nameserver. A little bit later fragment number 2, which is arrow number 5, arrives; that gets blocked by the firewall, and the recursive nameserver never gets a proper answer.

This was a big problem because they had recently changed their resolver infrastructure. They had switched from using, I think, PowerDNS before; they switched to Unbound, and Unbound by default does EDNS0 and sets the DO bit to 1. Even though they weren't doing validation, they were getting DNSSEC answers from us, and because some of them were so large they were getting fragmented, only some of the answers got through; most of them never arrived. For instance, for www.surfnet.nl the answer is about 1,700 bytes because of all the signature data in there, especially over the additional records.
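To see why an answer of that size fragments, here is a back-of-the-envelope sketch. It assumes a plain 1,500-byte Ethernet MTU and a 20-byte IPv4 header; IPv4 fragment payloads must be multiples of 8 bytes, which 1480 already is:

```python
import math

def ipv4_fragment_count(dns_payload: int, mtu: int = 1500,
                        ip_header: int = 20, udp_header: int = 8) -> int:
    """How many on-the-wire IPv4 fragments a UDP/DNS answer needs."""
    data = dns_payload + udp_header            # bytes the IP layer must carry
    per_fragment = (mtu - ip_header) // 8 * 8  # 1480 for a 1500-byte MTU
    return math.ceil(data / per_fragment)

# The ~1,700-byte answer for www.surfnet.nl mentioned above:
print(ipv4_fragment_count(1700))  # 2
```

So the answer travels as two fragments, and a firewall that drops the second one guarantees that reassembly at the resolver can never complete.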

What we discovered while we were researching this issue is that we could actually detect it happening. And that's arrow number 6: because the recursive caching nameserver receives the first fragment, it starts waiting for the second fragment to arrive, and if that doesn't arrive within a certain amount of time, it will send an ICMP message back across the line to us saying that fragment reassembly timed out. That's useful, because it means we can now detect this problem. We wrote some code to do the detection and got very scared, because this ISP wasn't the only party that had this issue. So we believed this was a serious problem, because we were doing DNSSEC by the book; we were signing our zone. We hadn't done any tuning to give minimal answers from our authoritative nameservers, but we believed many people would use the default settings, so we decided to go with the defaults as well. Because we are a research network, we are not afraid if there are some bumps in the road. We know these guys at the ISP, we gave them a call and they changed the settings on their resolvers, problem solved. Then it turned out that they have a second set of resolvers, completely separate, for their business ISP, which they use for their hosting environment, and somewhere in 2011 they upgraded that resolver infrastructure from Windows 2008 to 2008 Release 2, which again does EDNS0. And again, now we were getting complaints from companies we do business with that they weren't able to send us e-mails, and they turned out to be customers of the business ISP. And because they couldn't resolve our MX records any more, they couldn't send us e-mail.
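The ICMP signal described here is type 11 ("time exceeded") with code 1 ("fragment reassembly time exceeded"), per RFC 792. A minimal sketch of classifying such messages, assuming the raw ICMP bytes have already been extracted from the IP packet by some capture tool:

```python
# ICMP constants from RFC 792 (IPv4).
ICMP_TIME_EXCEEDED = 11
CODE_FRAG_REASSEMBLY = 1  # code 0 would be "TTL exceeded in transit"

def is_frte(icmp_packet: bytes) -> bool:
    """True if this ICMP message is 'fragment reassembly time exceeded'.
    The first two bytes of an ICMP header are type and code."""
    if len(icmp_packet) < 2:
        return False
    icmp_type, icmp_code = icmp_packet[0], icmp_packet[1]
    return icmp_type == ICMP_TIME_EXCEEDED and icmp_code == CODE_FRAG_REASSEMBLY

print(is_frte(bytes([11, 1, 0, 0])))  # True
print(is_frte(bytes([11, 0, 0, 0])))  # False: TTL exceeded in transit
```

Counting hosts that send these messages back at an authoritative server is essentially the detection the talk describes, though the actual SURFnet tooling is not published in this form.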

So, we are a research network, right, so we are able to deal with these issues: call the guys, get it solved. But we are asking enterprises, banks, and governments to start deploying DNSSEC, and if we have to tell them that a certain percentage of your customers will be unable to resolve your domain names, or will be unable to reach you because of issues like this, then that is a problem. And also, I mean, we talked to the ISP and we told them: you should really consider changing this firewall setting; blocking fragments is something we did in the nineties because we had ping-of-death packets, etc. It doesn't really seem to be a problem any more. They were unwilling to change this. They literally said "it's almost Christmas". So...

What we did is a short student assignment to confirm the issue. I had a student do a four-week research project where he built a lab setup and was able to confirm that the fragment reassembly time exceeded messages we get are actually the result of a large answer not getting through to the recursive caching nameserver. Currently, I have an MSc student working on ways to mitigate this problem at the side of the authoritative nameserver, and also working on better detection of the problem, because ICMP messages may be blocked by the firewall as well.

So, to give you an idea how big this problem is, I put up some graphs on the slide. What you can see here in green is the percentage of querying hosts to our authoritative nameservers that have EDNS0 enabled, and it is around 60% of hosts that have it enabled.

And the second slide is the buffer size distribution. It may be a little bit hard to read, but the important takeaway here is that the big purple blob is about 90% of hosts, and they advertise a 4K buffer size, which is the default setting. This goes back to the discussions we had with the previous speakers: users are not aware that there is a default setting, they are unwilling to change it, so everybody uses the default. As you can see here, only a small percentage seems to have had a look at what kind of response sizes they can actually accept and have changed their setting.

And the final one is the DNSSEC OK bit. Out of the 60% of hosts that do EDNS0, how many of them have it set to 1? As you can see, of the hosts that contact two of our authoritative nameservers, about 65 percent have DNSSEC OK set to 1. The other two are a little bit lower, and we believe that's because we have a Spamhaus secondary on those two nameservers and the mail filtering guys know how to tune their resolvers a little bit better.

What we wanted to do, rather than complain about this problem, is to see if we could find a solution for it. So we tried, or rather my student tried, two approaches to mitigation. One of these was lowering the EDNS0 buffer size on one of our authoritative nameservers. For surfnet.nl we have four, and on one of them we decided to lower the EDNS0 buffer size to 1232, so that even over IPv6 packets shouldn't get fragmented. And the second thing we tried was to detect problem hosts dynamically and then adapt the behaviour of the nameserver to deal with them, so only change the EDNS0 buffer size for hosts that we know have issues.

He also worked on better detection. Because, as I said, ICMP messages may be blocked by the edge firewall behind which the resolver sits, how can we detect these hosts even if we can't see the ICMP messages? Well, we decided to go for a more heuristic approach, and we have a set of rules that we apply to all the queries and answers that we see going to the authoritative nameserver. So if we, for instance, see an ICMP fragment reassembly time-out message, then we decide that this is a problem host. Another thing that we see, for instance, is the EDNS0 header being toggled on and off by the querying hosts, which can also be a sign that they have problems; changing the EDNS0 buffer size in queries is something that BIND and Unbound do when they are not getting responses from nameservers. They show different behaviours, but both of them do this. We have also seen one nameserver implementation that does a fallback to TCP even though it didn't get a truncated message; that also seems to indicate they are having some issues. The final one, which I left for last, is excessive retries within the TTL of the record, and that is of course controversial, because there are many reasons why you can get repeated queries from the same host within the time to live of the record. But we set some thresholds and at least tried to detect problems by applying that rule.
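The heuristics listed above could be combined roughly like this. The event labels and the retry threshold are hypothetical illustrations, not taken from the actual SURFnet tool:

```python
RETRY_THRESHOLD = 5  # assumed: "excessive" retries within one TTL

def is_problem_host(host_events: list) -> bool:
    """Flags a host if any heuristic from the talk fires:
    1. an ICMP fragment-reassembly-time-exceeded message was seen,
    2. the host toggled EDNS0 on and off between queries,
    3. it fell back to TCP without having received a truncated (TC=1) answer,
    4. it retried the same query excessively within the record's TTL."""
    if "icmp_frte" in host_events:
        return True
    if "edns_on" in host_events and "edns_off" in host_events:
        return True
    if "tcp_query" in host_events and "tc_response" not in host_events:
        return True
    return host_events.count("retry") >= RETRY_THRESHOLD

print(is_problem_host(["edns_on", "edns_off"]))       # True
print(is_problem_host(["tcp_query", "tc_response"]))  # False: legitimate TCP fallback
```

As the talk notes, rule 4 is the noisy one: retries alone are weak evidence, which is why a threshold (and the overlap with the other rules) matters.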

Then we did two experiments. In experiment number 1 we lowered the EDNS0 buffer size on one of our authoritative nameservers to 1232. In experiment 2 we selectively modified the advertised EDNS0 buffer size in queries originating from the problem hosts. My student wrote a piece of software for this, which we will release as open source at the end of his thesis work, which you can put in front of an authoritative nameserver implementation like BIND or NSD; it modifies the incoming queries, so it rewrites the EDNS0 buffer size in the incoming query if it knows that a host is a problem host. And we wanted to compare the results of these two experiments to see which one is better.
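The query rewriting that proxy does can be sketched as follows. In an EDNS0 OPT pseudo-RR, the CLASS field carries the requestor's advertised UDP payload size (RFC 6891), so clamping it is a two-byte overwrite; locating the OPT RR in the additional section is assumed to happen elsewhere, and this helper is illustrative, not the student's actual code:

```python
import struct

def clamp_edns0_payload(query: bytearray, opt_offset: int,
                        max_size: int = 1232) -> bytearray:
    """Clamp the advertised UDP payload size in an OPT pseudo-RR in place.
    opt_offset points at the start of the OPT RR: a one-byte root NAME,
    then TYPE (2 bytes, value 41), then CLASS (2 bytes = payload size)."""
    size_off = opt_offset + 3
    (advertised,) = struct.unpack_from("!H", query, size_off)
    if advertised > max_size:
        struct.pack_into("!H", query, size_off, max_size)
    return query

# Toy OPT RR: root name, TYPE=41 (OPT), CLASS=4096, then TTL + RDLENGTH zeroed.
opt = bytearray(b"\x00" + struct.pack("!HH", 41, 4096) + b"\x00" * 6)
clamp_edns0_payload(opt, 0)
print(struct.unpack_from("!H", opt, 3)[0])  # 1232
```

The nameserver behind the proxy then sees a smaller advertised buffer and truncates or shrinks its answer accordingly, avoiding fragmentation for just the flagged hosts.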

Now, these are some preliminary results from the work that my student has done. The top graph shows the distribution of problem hosts among the detection heuristics. As you can see, about 8 percent of what we label as problem hosts are sending ICMP fragment reassembly time exceeded messages. Case 2, which is EDNS0 turning on and off, is a little bit higher; it's a little over 20% of cases. And of course the majority of cases are people sending excessive retries, where we are uncertain whether this is caused by them not being able to receive fragments or by some other issue.

What's interesting to note is that a significant number of hosts that we label as problem hosts have at least two of these rules applying to them: about 40% of hosts show up in more than one category. And some rough analysis shows that at least 2% of all the hosts sending queries to our authoritative nameservers have issues with fragmented packets. So that means that if you tell a bank to deploy DNSSEC, you have to tell them that possibly 2% of their customers can't reach them. Which, for them, would be a big issue.

Some more graphs from the work that my student has done. ICMP fragment reassembly time exceeded behaviour: there are six bars; from left to right, the first is before the experiment, the second bar is the first experiment, the third bar is the second experiment, and the last three bars are for IPv6. So the first half is IPv4, the other half is IPv6. And what you can see is that about 1% of all the hosts that we see send fragment reassembly time exceeded packets. If we do the first experiment, so lower the EDNS0 buffer size on one of the authoritative nameservers and do a measurement on that specific nameserver, then the fragment reassembly time-outs actually go away completely, which makes sense because there should be no more fragmentation. In the second experiment you see there are still hosts sending them, and that is because we only mark problem hosts for a certain amount of time: we put them on a list for two hours, we send them adapted answers for that time, and then they get kicked off the list and they have to show up as a problem host again before they get back onto the list. That's why you still see FRTE messages.

The second graph shows the number of FRTE messages per host, and what you can see is that in the baseline situation, we see about six FRTE messages per problem host. In the second experiment it's only one. But that's a bit of a bias, because there was one host among them that had a very small MTU and still had problems with our 1232 setting. The third bar for IPv4 shows you that the number of messages per host drops if we do the second experiment. So we believe that our approach of selectively changing the buffer size actually works, because the number of FRTE messages goes down significantly.

We also saw some side effects. Because we modify the buffer size on the authoritative nameserver, we get more truncated messages. And if we do the selective modification, we go down even further than 1232: if a host keeps showing up as a problem, we lower the acceptable EDNS0 buffer size in steps of, I think, about 16 bytes each time, so in the end they may end up with an EDNS0 buffer setting of 512, which will lead to truncated messages. It didn't result in a significant increase in TCP fallback; it was such a small number of hosts that it wasn't really a problem.
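The stepwise back-off described here might look like the sketch below. The 16-byte step is the figure from the talk; the floor of 512 bytes is the classic pre-EDNS0 UDP message limit:

```python
def next_buffer_size(current: int, step: int = 16, floor: int = 512) -> int:
    """Each time a host keeps showing problems, lower the payload size we
    will honour for it by `step` bytes, but never below 512 bytes."""
    return max(current - step, floor)

size = 1232
for _ in range(50):          # a persistently broken host keeps getting flagged
    size = next_buffer_size(size)
print(size)  # 512
```

Once a host bottoms out at 512 bytes, large signed answers can no longer fit over UDP at all, which is exactly why the truncation (and potential TCP fallback) side effect appears.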

So to conclude: because there is a significant number of hosts that have this issue, over 2%, we believe this is a serious issue if you operate a DNSSEC-signed zone. It may not be an issue if you have a top-level domain and you do a lot of tuning: you would strip out authority information and additional information and tune your key sizes, so you would have responses that fit within 512 bytes, or at least stay below the MTU size. But people like us, who just take nameserver software and DNSSEC software off the shelf and start signing their zone, may have an issue, because you have a large authority set in there and you don't tune your responses, because hey, only DNS experts will tune their responses. Other people will just pick software off the shelf and use the default settings.

Fortunately there are ways to ameliorate the problem. I think our first experiment shows that it's very viable, if you have a large enough NS set, to change the EDNS0 buffer size advertised by one of your authoritative nameservers. Effectively that means that hosts that have issues will still be able to resolve names within your domain. And we are considering writing a best practice paper, or getting a RIPE document or some RFC thingy, to get this information out there, to make sure that maybe the software makers adapt their software, or people that operate zones at least change their settings. Or rather, people that run resolvers should make sure that they check what size packets they can accept before they set up their resolver and start doing DNSSEC, or even if they install software that does EDNS0 despite them not doing DNSSEC validation.
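For the first mitigation, capping the size of the UDP answers one server will send, the configuration knobs look roughly like this. The option names below are from the BIND and NSD documentation as I recall them; treat this as a sketch and verify the exact names against your software version:

```
# named.conf (BIND) -- cap UDP responses from this server
options {
    max-udp-size 1232;
};

# nsd.conf (NSD) -- equivalent caps, per address family
server:
    ipv4-edns-size: 1232
    ipv6-edns-size: 1232
```

Applying this to only one server in a four-server NS set, as SURFnet did, keeps the other three fast for well-behaved resolvers while giving fragment-blocked hosts at least one server they can successfully use.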

My student is going to write a paper for an IEEE magazine. That will have all the research details of what we have done. And please, please check your firewall settings if you start doing DNSSEC validation on your resolvers. It's such a basic problem.

So that's it. Any questions? Thank you.

AUDIENCE SPEAKER: Jakob Schlyter, airline pilot. What other services break before this? What other problems will they have before they encounter problems with, say, the bank?

ROLAND van RIJSWIJK: I'm not sure I get your question.

AUDIENCE SPEAKER: If these resolver operators have this problem today with their broken firewall and whatnot, you say that they will have failures, that some of the customers will have a 2% failure rate. But I mean, my argument is they have problems before they even start. When they resolve things at the root and do things, they will break before even hitting your troublesome ??

ROLAND van RIJSWIJK: I get it. I mean, you are involved in a project where a lot of tuning was done for the root, which means that the answers for the root zone are much smaller, so they won't have issues resolving names in the root zone, and they may not have issues resolving names in the TLD. The fact is that the ISP which first led us to investigate this issue only started having issues when they reached our domain name, and obviously, progressively, as names get longer the problem gets bigger, because your answers get bigger because there is more data in them. And the root operators and the TLD operators will do tuning. It's unlikely, especially if DNSSEC deployment takes off among tier 2 zone owners, that they will do tuning. So you, as a tier 2 domain owner, will probably be the first zone where they encounter problems.

AUDIENCE SPEAKER: So we make sure a really popular domain is DNSSEC signed.

ROLAND van RIJSWIJK: Yes, something like mozilla.org.

CHAIR: Could I ask the questioners to keep the questions fairly brief, because we are running a little bit late.

AUDIENCE SPEAKER: Ed Lewis. On your previous slide, your last bullet, I have debated making this comment public, but I will. You said the right thing verbally, but what you wrote down is "check your firewall if you are doing validation". Check your firewall if you are doing EDNS0.

ROLAND van RIJSWIJK: When I said it I noticed the mistake in this line.

AUDIENCE SPEAKER: Lars-Johan Liman, root server operator. I can tell you that we don't do adaptive tuning for the root nameserver; they are hitting us with 30,000 questions per second. If there is tuning, it's inside the zone content, and that's fine. I also wanted to comment that we talked before about doing recommendations for DNS operators. There ought to be recommendations for firewalls as well.

ROLAND van RIJSWIJK: Yeah, and I think there ought to be recommendations for software vendors as well, because realistically, now that most major nameserver distributions do EDNS0 by default, I would argue that they might want to do a probe to a well-known domain that can help you, like the DNS-OARC reply size tester for instance, to adaptively change their EDNS0 buffer size before they start up and send queries into the Internet.

CHAIR: Talking of software vendors.

AUDIENCE SPEAKER: Actually my comment is unrelated. There has been another recent introduction of a different technology in the Internet, called IPv6, and the failure rates that people were referring to at its introduction were a lot lower than the failure rates being seen for this. Yet the attitude of those people towards solving the problem of introducing the technology was quite different, you know. No one went about telling their customers, and Internet users in general, that they are wrong and that they should fix stuff, and that if they don't, I'll make you suffer, bang, by having some domain that's very popular break in your face. Perhaps it's time to change people's attitudes, particularly the people in this room.

ROLAND van RIJSWIJK: That's a fair point, but the problem is that Joe Internet user does not know all these technical details. He just wants the Internet to work, and I think the first step is for us as technical people to make sure that the default settings don't break stuff. And for DNSSEC, you could argue that that starts at the side of the recursor, because that's the larger group.

AUDIENCE SPEAKER: I agree completely, Joe user shouldn't come into the question; they want to navigate and communicate. That's what the Internet is for. In IPv6 you don't talk to the end user, you talk to the ISPs.

ROLAND van RIJSWIJK: This problem is much worse for IPv6, because IPv6 firewalls don't do fragment reassembly, period, because fragmentation only occurs at the end points of a connection and there should be no fragmentation or reassembly in between.

CHAIR: Thank you very much, Roland.

CHAIR: It wouldn't be the DNS Working Group without running over time. And we are running over time, as usual. So, I have got about ten minutes left for follow-up discussions on the two Plenary presentations that were DNS related. I know there was some discussion after Joseph's presentation on route objects, authentication with secure DNS. I just wonder if there is anybody here that wants to add to that discussion, other than what was said before. Has anybody got any questions for Joseph, any other comments, or not? No. I guess not. Ed has got a few slides he would like to run through to elaborate on a couple of points related to his talk yesterday about how ccTLDs are deploying DNSSEC, and it also relates to what Patrik had been discussing earlier. We are into the coffee break now, so be brief. He is just going to give us a couple of minutes of slides and we'll take any questions or comments from the floor.

EDWARD LEWIS: Well, when I gave the Plenary talk I made it more generic, and I wanted to add two slides to that, given that we have a short amount of time. The reason why I did the work was to come up with what the root guys do, and I didn't actually say that in the Plenary; I figured it would be lost in the crowd. This is not the average, but this is the most common setting for each of these different parameters up there. I'll let you read through them; they are in the archives. It's a good question whether or not it is a good idea to follow this. I'll just make one statement now, and we can talk about compliance later: if you do this, it may not be the best for your zone, but it will work. That's the best I can say about these numbers.

Now, the reason why I came here to do the presentation was because, in doing some of the work, I looked at what's normal and then at what the outliers are. And outliers don't mean that they are wrong, but they made choices that didn't fit the pattern, which means they are not following the crowd. But I found some places that I'd like to talk to the operators about. I don't want to name names; I am not going to say they did this or that. But there are a couple of things I have seen where I would like to say: if you do this, I really want to talk to you. Because I think there are some things that really shouldn't be done that way on the Internet, and I'd like to understand why the choice was made, or maybe it's a software bug that's not known, and so on. So I have a list of a few things. For example, expiration time: someone publishes all their signatures expiring at the same time, all year long. No matter when they signed the piece of data, the expiration time is always the same; the inception time changes, but the expiration doesn't. And that's not desired. It's not working: they were out for four or five days around a certain time last year with no signatures, because they didn't notice it, maybe.
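The "expiration never moves" anomaly Ed describes is easy to test for once you have repeated observations of a zone's signatures. A sketch, with made-up observation data:

```python
def has_fixed_expiration(observations) -> bool:
    """Flags the anomaly: over repeated observations of a zone's RRSIGs,
    the inception timestamp moves but the expiration never does.
    `observations` is a hypothetical list of (inception, expiration) pairs
    in RRSIG presentation format, YYYYMMDDHHMMSS (RFC 4034)."""
    inceptions = {inc for inc, _ in observations}
    expirations = {exp for _, exp in observations}
    return len(inceptions) > 1 and len(expirations) == 1

observed = [("20110101000000", "20111231235959"),
            ("20110201000000", "20111231235959"),
            ("20110301000000", "20111231235959")]
print(has_fixed_expiration(observed))  # True: re-signed, but same expiry all year
```

A zone that trips this check will, as Ed says, go completely unsigned the moment that fixed date passes, unless someone notices in time.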

Salt changes, to me, are a big deal, because they will incur a large zone transfer and back-end traffic. Some people change it every single day, some never change it at all. So we want to come up with, and this is a good area, what the salt setting should be; if you read the documents on that topic, they are not good.

No DS registered in the IANA for a very long time. For a lot of zones, when you sign, you don't put the DS record in right away. The average is about three weeks, but there are some zones out there that have never made the last step, for more than a year now, and we wonder: do you need help? Did it get hung somewhere? Did an engineer leave? Do you need help to move forward?

One thing I find interesting: the signature duration actually flaps in some cases where they are using more than one signer, and sometimes it's one week, sometimes it's four weeks, and it goes back and forth. Not jitter; this is a very distinct change between the lengths.
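Distinguishing that flapping from ordinary jitter is a clustering question: jitter leaves one tight group of validity lengths, while two signers with different policies leave two distinct groups. An illustrative sketch; the tolerance value and the sample data are assumptions, not measurements from the talk:

```python
def duration_flaps(durations_days, tolerance=2):
    """True when observed RRSIG validity lengths (in days) fall into
    more than one distinct cluster (e.g. one week vs four weeks),
    rather than a single cluster with small jitter."""
    clusters = []
    for d in sorted(durations_days):
        if clusters and d - clusters[-1][-1] <= tolerance:
            clusters[-1].append(d)  # within jitter of the current cluster
        else:
            clusters.append([d])    # a genuinely different duration
    return len(clusters) > 1

print(duration_flaps([7, 7, 28, 7, 28]))  # True: one week vs four weeks
print(duration_flaps([7, 8, 7, 7]))       # False: just jitter
```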

And also, anyone who runs a zone, if you want to know what I see about you, I can give you all sorts of information. Not that it's going to tell you much that you don't know, but you might want to see what is coming out. It also lets me know if I am actually seeing something real: I may see something weird about a zone, and you might say that's an artifact of my experiment. That's what I wanted to add to the slides for this crowd as opposed to the Plenary. Now, if there are questions or topics, we can go to them.

CHAIR: Any questions, comments, for Ed? Nothing?

EDWARD LEWIS: I'll add one thing. About compliance: the compliance word came up earlier, along with the idea of a BCP. I didn't participate in the mic comments then because I knew I might be up here. There is a need to have a BCP. I know it's hard to come to an agreement on what a number should be, but we need to set up a document which says: if you, a non-DNS person, are paying for DNSSEC, what do you expect out of the operators? That would really help. It helps me because last year I didn't travel much; I stayed in the office, looked at RFPs, and people asked: do you comply with all these RFCs? And I said: how do you comply with an RFC that doesn't have a requirement? It would help everybody if we could say: here's what DNSSEC is, here is what you are going to pay for. Not how you are going to operate, but what you are paying for. That may change what we look at in saying: here is what you are buying when you buy DNSSEC from one of the operators in this room. That's the kind of tone I'd like to see in the document.

CHAIR: Something I have been doing fairly recently is on the other side of that: helping people to put in bids for gTLDs. Part of the ICANN process is to speak to your capabilities, your plans for your TLD to do DNSSEC. But these people have no idea at all what DNSSEC even is, let alone being able to give you any kind of meaningful discussion about choices of algorithms, key lengths, salt rotations, all that kind of stuff. They do not have any clue about what kind of things need to be said, or to be asked of the DNS providers.

EDWARD LEWIS: They have a laundry list of RFCs: do you comply with this? And going through some of these, you can't comply with some of them. It's impossible to comply with RFC 4641 because it's a discussion; it's a good document, but it's self-contradictory at times. Like: do this, and do this, but you can't do both. If someone asks me, are you complying with this document? We either lie and say yes, or we tell the truth and say no, and then we look really bad. So we want a document where we can say: yes, I do exactly what you want. We need that kind of a document, so we don't lie to our customers about what we are actually doing. You can explain your way out of this using terms like "we comply in spirit, we don't not comply", but it doesn't really work much. That's the truth. You may not be non-compliant, but you can't say you comply because you don't have key lengths that are within 100 bits of each other, and so on. So...

CHAIR: Okay then, I guess we are done. Thanks to all the speakers, thanks to the folks that did the audio, the monitoring of the Jabber room, the scribing, and to the nice lady that's been doing the stenography. So we'll take a coffee break and see you back at eleven o'clock. Thank you.

(Coffee break)