Archives

These are unedited transcripts and may contain errors.

15th of October, 2013, at 2 p.m.:
Plenary session

MIKE HUGHES: Welcome back to the afternoon session. If you can settle down, if you want to continue your conversations you can do that outside with some nice coffee. Otherwise, please take a seat and settle down and welcome to this afternoon's session. I'd like to introduce our first speaker, and our first speaker is Alexander Azimov from Highload Lab. He is going to talk to you about route policy verification, so if you are ready.

ALEXANDER AZIMOV: Good afternoon. I am a department head at Highload Lab, and I am going to present my report about the quality of data in route registries. I have made an introduction and, as you can see, I have these pieces of paper. I am going to do a little reading, and I hope it will help our understanding.

But don't forget, if I can't make eye contact with you, Big Brother is watching you. So, that is the plan.

First, let's discuss why we need route policy data. For this purpose I have made a classification. The first class is traffic generators: in this class I included systems with greater outbound traffic. There are a number of examples; the simplest one is hosting companies. In this case the traffic engineering problem is reduced to outbound balancing. In contrast, there are systems whose inbound traffic greatly exceeds outbound traffic. It is obvious that this classification is not exhaustive, but in any case where inbound traffic is significant we need knowledge about the traffic flow to our network. And where is the problem?

The problem is that traffic flow is asymmetric, and this knowledge is deeply connected with knowledge of the priorities and relations between autonomous systems, which are commonly known as routing policy.

So, we have answered the first question: we need route policy data to have the opportunity for traffic flow prediction. But what is wrong with the data from the registries? There are two common features of this data. First, it is very outdated and incomplete. Second, it has a number of errors, and without any verification there is a great opportunity for cheating. Even more, if we begin to analyse the data set we would find out that only a few BGP attributes are covered by route policy. As a result, we had a very suspicious feeling about the reliability of these policies and data, and we asked whether there is any opportunity for verification. We have done the research, and as a result we can say yes, it is possible, but not quite simple.

We used several techniques during this work, from a simulation model to several methods of active verification. As a result we achieved the ability to detect route priorities at every level of the BGP decision process. Let's take a brief look at what we have done.

For our AS relations tagging we use a common algorithm, where relations are tagged using a set of AS paths. In this example we have a primitive situation: with knowledge of the peering relation between autonomous systems 3 and 4, it is quite simple to derive the other relations between the autonomous systems.
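
To illustrate the kind of relationship tagging just described, here is a minimal sketch in Python. The seed peer link and the example path are invented, and this shows only the general idea, not the speaker's actual algorithm.

    # Tag customer-to-provider links from valley-free AS paths, given a seed
    # set of known peer-to-peer links. Purely illustrative.
    def tag_relations(as_paths, known_peers):
        relations = {}                  # (a, b) -> 'c2p' means a is a customer of b
        peers = {frozenset(p) for p in known_peers}
        for path in as_paths:
            # Find the position of a known peer link in the path, if any.
            top = None
            for i in range(len(path) - 1):
                if frozenset((path[i], path[i + 1])) in peers:
                    top = i
                    break
            if top is None:
                continue
            # Valley-free rule: links before the peer link go uphill
            # (customer -> provider), links after it go downhill.
            for i in range(top):
                relations[(path[i], path[i + 1])] = 'c2p'
            for i in range(top + 1, len(path) - 1):
                relations[(path[i + 1], path[i])] = 'c2p'
        return relations

    # Example: AS 3 and AS 4 are known to peer, as on the slide.
    print(tag_relations([[1, 2, 3, 4, 5, 6]], [(3, 4)]))
    # {(1, 2): 'c2p', (2, 3): 'c2p', (5, 4): 'c2p', (6, 5): 'c2p'}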

But the use of AS relations tagging by itself isn't sufficient: it couldn't be used for the detection of priority levels, and the lower levels of the BGP decision process are not covered. To solve this problem we use active verification. The easiest way to verify is just to make a measurement from a remote node; you can use RIPE Atlas probes or something else, and you would collect one path from one remote node. But if you make a request with the record route option and spoof an address from your own address space, you would get a number of AS paths, where the number of AS paths equals the number of neighbour autonomous systems. All our data we have made available on our website. Our portal includes the history of relations for each autonomous system. We also give the opportunity for traffic flow prediction from one provider to a single AS, and we give it as an on-line service: you just need to move a slider.

On the other hand, we have monitoring of security issues such as BGP dynamic loops and other configuration errors that make your network work as a service amplifier. We also cover active botnets. This data is collected from our Qrator network. So, with the help of the collected data we have made a verification of the route policy from the registries.

For the verification we used only one type of AS relation, the customer-to-provider relation, because it has global visibility, unlike peering, and we could guarantee that the local-pref value for a customer is greater than for a non-customer. First we analysed the completeness of the data. As you can see, RIPE is better than the other registries and covers about 60% of the customer-to-provider links, only 60%. We also did a little bit deeper analysis: RIPE covers about 15,000 customer-provider relations, and only half of them have information about the pref value.

We then decided to analyse the errors in the pref values, and the results are also interesting: about 70% of the data was not the same as reality. I can predict questions about our model; the precision of our model is about 90%, so even if there are errors they won't have a significant effect on the picture. As a result, we can say that route policy data couldn't be used as source data for traffic flow prediction. Moreover, I can say that maybe it would be better just to delete all this data from the registries, because it doesn't give information; it only creates an opportunity for cheating.
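
To illustrate the consistency check just described, here is a minimal sketch in Python: for each AS, the local-pref declared for a customer should be higher than the local-pref declared for any provider or peer. The data structure representing the parsed registry policies, and all the values in it, are assumptions made for the example.

    # Flag ASes whose declared local-pref values contradict the rule
    # "customer routes are preferred over provider/peer routes".
    declared_prefs = {
        # asn: {neighbour_asn: (relation, declared_local_pref)}
        65001: {65010: ('customer', 200), 65020: ('provider', 300)},  # suspicious
        65002: {65030: ('customer', 300), 65040: ('peer', 150)},      # consistent
    }

    def check_customer_pref(policies):
        errors = []
        for asn, neighbours in policies.items():
            customer = [p for rel, p in neighbours.values() if rel == 'customer']
            other = [p for rel, p in neighbours.values() if rel != 'customer']
            if customer and other and min(customer) <= max(other):
                errors.append(asn)
        return errors

    print(check_customer_pref(declared_prefs))   # [65001]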

On the other hand, mathematical models, as was proved by our work, could be used for route policy recovery with high precision. So, I think I was very speedy, sorry. Thank you.
(Applause)

CHAIR: Any questions for Alexander? Can I ask a question myself. If you go to the slide with the graph of ASes, it was the third or fourth slide. In real life there are some ASes in more complex categories; for example, a single AS may have two disjoint sites, or there are some with Anycast space. Is it something that you foresee in this analysis, is it something that you are interested in, or is it something...
ALEXANDER AZIMOV: Thank you for the question. Yes, of course. As I said, this is the simple situation: on the one hand, relation tagging is only the first part of the policy recovery; after that we have active verification. We could distinguish the priorities that follow from the relation policy and also priorities such as peerings and so on, and we also have the opportunity to detect that an autonomous system is not really one system: it has one number, but at the same time it is split into separate parts that have different policies.

CHAIR: OK. I see. Thank you. Any other questions? No. OK. We can move on to our next speaker, which is...
(Applause)

... our next speaker is Guillaume Valadon from the French network and information security agency (ANSSI). And the presentation is about the French Internet Resilience Observatory.

GUILLAUME VALADON: So, I am happy to be here today, and together with my colleague François we will present work related to the French Internet. It is not only us, the French national information security agency; there is also AFNIC, the people managing the .fr zone in France, and the French network operators.

So quickly, what is the observatory: we started this initiative almost three years ago, and at that time we had some issues. The first one: we saw, and still see, that the Internet is misunderstood. What I mean is, we can find people who know WDM or BGP, but it's quite rare to find someone who can explain and understand all of these technologies together. That is our first goal: explain the Internet. Also, from time to time in the news we find incidents relating to BGP, DNS and so on, and the analysis of these incidents is rarely done; we don't know the effect, for example, of BGP hijacks on our operators.

Finally, in networking, as in many other fields, we have BCPs but we don't know if they are used. So we started this observatory with some objectives in mind: first, to study the French Internet, or the Internet in France, which are more or less the same terms for us. We wanted to develop technical interaction with the network community and between network actors; we also wanted to publish anonymised results in reports, and so far we have published one a year ago and one two months ago; and we wanted to publish recommendations and best practices. You can go to the website at the bottom of the slide, where you will be able to find, sorry, only in French today, the two reports and the BCP document, and if you put that into Google Translate you will be able to understand most of what we did.

So now you know a little bit more about the observatory. The next question is: what is resilience? You will be able to find many definitions; if you ask different people you will get different answers, and if you ask the same question to people doing biology you will get yet another answer. Our definition is from the white paper on defence and national security, which defines resilience as the ability to respond to a major crisis and restore normal service. For the Internet, a major crisis could be anything from a bug in a router, to attribute 99, to a DDoS like the one presented this morning. So far, at least in France, the Internet has often been considered and studied as an industry like any other: we have colleagues trying to look at the Internet by studying its dependency on electricity or water, for example, and other studies of Internet resilience ask whether facilities are close to a river or a zone where earthquakes can happen; and yes, we do have earthquakes in the south of France. What we try to do is study the Internet from a technical point of view only.

So, again, who are we? We are behind the observatory, which is under the supervision of ANSSI, the French network and information security agency. We have many missions; our main mission is to defend French information systems, and one of our priorities of action is, of course, Internet resilience. We are not alone in this boat: AFNIC has been collaborating on the project since the beginning, and we have two colleagues there who are involved in this project. AFNIC, I don't know if you know them, I will just be brief: they are the registry for the French zone, they run other observatories, and we are also working with what we call the French network operators, like ISPs, people managing exchange points, transit providers and service providers, so many people doing different kinds of jobs.

So what can be observed? Here I have only listed two possible directions; I think we might be able, during the questions and then during drinks, to find more and more directions. First, we can try to look at services: HTTPS usage, the algorithms used, and the amount of spam, for example, like was presented today; that can be observed.

What we did in the first two years was to study the French Internet from the point of view of the Internet structure, using the BGP and DNS protocols, so routing and the naming services.

Next question: how can we do that? We decided to define what we call technical indicators. We have seven of them for BGP, ranging from the study of route objects to the number of hijacks targeting French operators, and we have five for DNS; for example, we are trying to look at the deployment of DNSSEC or IPv6. The important part here is that in each report we define each technical indicator the same way and describe why it's important for resilience and robustness, and we present the methodology, so that you would get the same results. Because this work is not perfect, we describe the limitations, of which we have some, and we try to address them on a daily basis, and we provide an analysis based on the data we observe.

So now we move to the BGP and DNS indicators in detail; all of the work we present today was done on 2012 data.

FRANÇOIS CONTAT: Good afternoon, everyone. I will now present our work regarding the Border Gateway Protocol. As all of you know, BGP is the only protocol that is used to interconnect actors in the Internet; by that I mean that if you want to be part of the Internet, you need an AS number and then to interconnect with BGP. BGP also lacks a lot of security, so that is why we decided to analyse it.

So, in order to analyse BGP, we need a feed, we need a data source. The first data source was BGP updates from a RIPE RIS route collector, from whose updates we extracted prefixes, AS paths, etc., and we built some indicators with them. Also, for example, we did some work regarding hijack classification, and for some hijacks we used the Whois database, by extracting the route objects, to validate them or not. So we have some tools, we have BGP updates, we have a database containing declarations, so we can go a little bit further. We have the feeds now; what can we work on?

We studied the French Internet, so the first step is to identify what makes up the French Internet. We could have used existing databases, but after having a look into these databases we saw that some ASes were missing; some well-known French ISPs were not present. So we decided to build our own algorithm to build a database of French operators. From that, we came up with 1,270 French ASes detected. We missed 9 ASes, but we found between 40 and 70 more than in the RIPE database. So now we have identified the French Internet; what could we do? The first step was to think about mapping the Internet. I know there are a lot of Internet mapping projects, but we wanted to have an idea of the ecosystem of the French Internet and to see if we could learn about some SPOFs in the graph. So we extracted the AS paths from the BGP updates to see which ASes were connected together, then extracted the French graph of ASes and tried to identify the SPOFs in this graph. Here is the result for IPv4; we did the same for IPv6. As you can see in the drawing, the blue dots are the French ASes and the red ones are single points of failure. Meaning by that: we all know that in the feeds coming from the routing table you have only the best routes, so if you have a backup transit that is not used, it will not appear in the extract of the routing database. So it is a supposed SPOF.
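
As an illustration of the SPOF analysis just described, here is a small Python sketch that builds an undirected AS graph from AS paths and reports articulation points, the nodes whose removal disconnects the graph. The AS paths are invented and this is not the observatory's actual tooling.

    import networkx as nx

    as_paths = [
        [64500, 64501, 64502],
        [64500, 64501, 64503],
        [64504, 64501, 64502],
    ]

    g = nx.Graph()
    for path in as_paths:
        # Each adjacent pair in an AS path is an observed inter-AS link.
        g.add_edges_from(zip(path, path[1:]))

    # Articulation points are the supposed single points of failure.
    print(set(nx.articulation_points(g)))   # {64501}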

And after analysing the graph, we came to the conclusion that most of the French ASes are well interconnected and there are not so many SPOFs, so the French Internet looks quite resilient from that point of view. Now that we have an idea of the ecosystem, what we wanted to do was go a little bit further, deeper, and analyse the prefix announcements. So, a quick reminder about prefix conflicts.

If we have AS 1 announcing the green prefix, we know that with BGP the announcement is exchanged between ASes, resulting in AS 3 knowing that the path to the prefix goes through AS 2 and then AS 1. If AS 4 then announces the same prefix, the first idea that comes to your mind is that there is a hijack, but that is a bit quick, because, as you know, AS 4 could be a protection system using Anycast, maybe a customer, it can be a lot of things. So we need to use a more neutral term to qualify this. That is why we decided to call it an event.

So, we analysed the event, and now we want to validate the event, and that is where the Whois database comes into action: we query the Whois database for the prefix and we see that AS 4 is qualified as a valid origin, so what was called an event is now becoming valid rather than a hijack. We ran the detection of prefix conflict announcements during the whole year of 2012 and we came up with nearly 3,600 events. We validated nearly 29% of the events as being valid through route objects; we still had a lot left, so this is a bit odd. We went a little bit deeper into the announcements and analysed the AS paths, and in a lot of announcements we found the hijacker AS and the hijacked one in the same announcement, so that means that, for most of them, it is a lack of declaration of route objects. If the operators declared the route objects, most of the 48% in blue on this graph would move to the valid ones. We still have 23% of abnormal events; we don't know yet if they are hijacks, and that represents nearly 800 events. We manually analysed these events, and it turned out that only seven of them were real hijacks; most of them were confirmed by French operators during the meetings we had with them. As you can see, the routing table and the Whois database are really important for us, so we decided to go a little bit deeper into these two databases. That is why we decided to make a cross-check of the routing table and the Whois database: two sets. We made the intersection of the two sets and we extracted two different indicators, the first one being the Whois consistency, the second one prefix filtering using route objects.
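
The event classification described above can be sketched in a few lines of Python. The way announcements and route objects are represented here, and the example values, are assumptions made for the illustration; the real analysis works on a year of BGP updates and the full registry.

    # announcements: prefix -> set of origin ASes seen in BGP.
    # route_objects: prefix -> set of origins declared in the registry.
    announcements = {
        '192.0.2.0/24': {64500, 64501},    # two different origins: an "event"
        '198.51.100.0/24': {64502},
    }
    route_objects = {
        '192.0.2.0/24': {64500, 64501},    # both origins declared: valid
    }

    def classify_events(announcements, route_objects):
        results = {}
        for prefix, origins in announcements.items():
            if len(origins) < 2:
                continue                               # no conflict, no event
            declared = route_objects.get(prefix, set())
            results[prefix] = 'valid' if origins <= declared else 'abnormal'
        return results

    print(classify_events(announcements, route_objects))
    # {'192.0.2.0/24': 'valid'}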

Regarding the Whois consistency, we came to the conclusion that nearly 31% of route objects were never used during the whole year, meaning that there is a lack of cleanup in the RIPE data. Also, and this is the most important, we went to the prefix filtering indicator and compared the prefixes to the Whois database to see if they were covered by a route object. We came to the conclusion that 15% of the prefixes could be black-holed if the transit providers carrying traffic for those prefixes became strict and built prefix lists based on the route objects.
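
The cross-check of the two sets can be sketched as follows; the prefixes are invented, and an announcement is counted as covered when it equals or sits inside a declared route object prefix, which is a simplification of the real indicator.

    import ipaddress

    announced = {'192.0.2.0/24', '198.51.100.0/24', '203.0.113.0/24'}
    route_object_prefixes = {'192.0.2.0/24', '198.51.100.0/24', '192.0.2.128/25'}

    def covered(prefix, objects):
        net = ipaddress.ip_network(prefix)
        return any(net.subnet_of(ipaddress.ip_network(o)) for o in objects)

    # Prefixes that strict route-object-based filters would drop.
    uncovered = {p for p in announced if not covered(p, route_object_prefixes)}
    # Route objects never matched by anything in the routing table.
    unused = {o for o in route_object_prefixes
              if not any(covered(p, {o}) for p in announced)}

    print('could be black-holed:', uncovered)   # {'203.0.113.0/24'}
    print('never used:', unused)                # {'192.0.2.128/25'}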

So, the recommendation we have is that prefixes should always be covered by route objects. What we also want to show here is that this is a preliminary step towards RPKI: declaring route objects is quite easy and simple, but it's not done by everybody, so how will people do the work for RPKI? That is why we want to push best practices regarding these declarations.

Guillaume will now follow with DNS.

GUILLAUME VALADON: So now we will present what we did: the technical indicators related to DNS. By the way, in the report you will find some more information about the state of deployment in France, which we managed to measure. So, DNS, as you know, is as critical and vital for the Internet as the BGP protocol.

So let's explain what we did. We did two kinds of measurements. The first ones are active measurements; in this case we use a tool developed by AFNIC, which is DNSdelve, which you can download on-line, and the data source is the French domains extracted from the whole French zone. We built some indicators from these active measurements, for example the number of DNS servers, the number of IPv6 services, and the state of deployment of DNSSEC. We are also using passive measurements; in this case the tool is called DNSmezzo, and the data is the requests received by AFNIC. We are using this data and this tool to get further indicators, for example the number of resolvers with particular properties, or the number of IPv6 requests received by the AFNIC servers.

Today I will just present a small set of indicators, you can find more on the report and also we can discuss them later.

So this one is really interesting, regarding BCPs. We took the French domains and we computed the number of name servers per French domain, and the result is really, really interesting, because most people have two or more servers, meaning that almost no one operates a French domain with only one server, which would be really bad for resilience, because if that single server disappears, all of the services disappear with it.
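
A minimal version of this indicator can be written with dnspython, querying live DNS for a few placeholder domains; the observatory's DNSdelve ran over the whole .fr zone file instead.

    import dns.resolver
    import dns.exception

    domains = ['example.fr', 'example.com']   # placeholder sample

    for domain in domains:
        try:
            ns_set = {r.target.to_text() for r in dns.resolver.resolve(domain, 'NS')}
        except dns.exception.DNSException:
            ns_set = set()
        status = 'OK' if len(ns_set) >= 2 else 'single point of failure'
        print(f'{domain}: {len(ns_set)} name server(s), {status}')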

First conclusion: there are enough DNS servers per French domain. The next question is whether we get an equally good result if we make the same observation using autonomous systems. So what we did next was take the French domains again and compute the number of autonomous systems per domain. As you can see here, the result is not so good, because more than 80% of the French domains are located in a single autonomous system: the servers may be different, but in the end they are all in the same place.
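
The AS-diversity indicator needs an IP-to-origin-AS mapping on top of the name server list. In the sketch below that mapping is a hypothetical helper backed by invented data; in practice it would be a longest-prefix match against a routing table snapshot or an IP-to-ASN service.

    def origin_asn(ip):
        # Hypothetical IP -> origin AS lookup; the table is invented.
        fake_table = {'192.0.2.10': 64500, '192.0.2.20': 64500,
                      '198.51.100.5': 64501}
        return fake_table.get(ip)

    name_server_ips = {
        'single-as.example': ['192.0.2.10', '192.0.2.20'],    # one AS only
        'diverse.example': ['192.0.2.10', '198.51.100.5'],    # two ASes
    }

    for domain, ips in name_server_ips.items():
        asn_count = len({origin_asn(ip) for ip in ips} - {None})
        verdict = 'AS-diverse' if asn_count >= 2 else 'all servers in one AS'
        print(f'{domain}: {asn_count} AS(es), {verdict}')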

As you may know, last year there was a well-known incident that showed that putting all your servers in the same autonomous system could be quite bad, at least if you don't know what you are doing. So from this perspective, what we saw here, that most domains are located in one AS, is quite bad, and it could be improved by recommending the deployment of DNS BCPs. Another indicator is the state of DNSSEC. Quickly: DNSSEC aims at preventing cache poisoning by digitally signing DNS records. In order to understand this indicator, we need to present the state of deployment of DNSSEC in France: the French zone was signed and published more than three years ago, and AFNIC has been accepting signed delegations since April 2011. So in theory, in October 2013, today, all French domains could be signed by now. However, in practice, what we found out is that 1.5% of the whole French zone is actually signed. That might sound like a lot, it's around 30,000 domains, I guess; however, this 1.5% is only due to a single French registrar that enabled DNSSEC by default for its clients. For us that is a really nice result: by observing data we are able to understand how the whole French Internet ecosystem is working, and to find out how one decision by one operator can influence, for example, the DNSSEC results.
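
The DNSSEC indicator boils down to checking whether the parent zone publishes a DS record for the delegation. A dnspython sketch, with placeholder domain names:

    import dns.resolver
    import dns.exception

    def is_signed(domain):
        try:
            dns.resolver.resolve(domain, 'DS')   # DS present in the parent zone
            return True
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return False
        except dns.exception.DNSException:
            return None                          # lookup failed, unknown

    for d in ['example.fr', 'example.com']:      # placeholder names
        print(d, 'signed:', is_signed(d))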

So DNSSEC is not widely deployed.

The next thing we did was try to find out the state of IPv6 deployment in the French zone. What we did here was compute the number of DNS servers with IPv6 enabled in a domain, the number of mail servers with an IPv6 address, and we did the same for the web. From this picture what we can see is that in 2012 more than 60% of the DNS servers had IPv6 enabled; it's not the same for web servers and mail servers. IPv6 is growing slowly, but that is mainly the case for DNS servers; for web and mail it's still quite slow.
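
The IPv6 indicators just described can be sketched the same way: does the domain have at least one IPv6-enabled name server, mail server and web server? Placeholder domain, live queries; the real measurement ran over the zone file.

    import dns.resolver
    import dns.exception

    def has_aaaa(name):
        try:
            dns.resolver.resolve(name, 'AAAA')
            return True
        except dns.exception.DNSException:
            return False

    def ipv6_indicators(domain):
        try:
            ns_hosts = [r.target.to_text() for r in dns.resolver.resolve(domain, 'NS')]
            mx_hosts = [r.exchange.to_text() for r in dns.resolver.resolve(domain, 'MX')]
        except dns.exception.DNSException:
            return None
        return {
            'dns_v6': any(has_aaaa(h) for h in ns_hosts),
            'mail_v6': any(has_aaaa(h) for h in mx_hosts),
            'web_v6': has_aaaa('www.' + domain),
        }

    print(ipv6_indicators('example.fr'))   # placeholder domain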

So there is insufficient IPv6 deployment, and I guess our recommendation is that we need to work on that and push people to deploy it.

The next thing we can do is study the data received by the AFNIC DNS servers. In this case there are two separate indicators. The first one concerns the transport part, more or less the IP version preferred by the resolvers querying AFNIC, and what we can see here is that a little more than 10% are using IPv6 to reach the AFNIC servers. So again, the trend is quite slow.

Another thing we can look at in the data received by the AFNIC servers is the requests themselves; this more or less reflects the IP version preferred by the clients. Even though we have some French operators providing IPv6 to their clients, most of the queries are related to IPv4, so the clients are asking IPv4 questions from their devices.

So results, most traffic and queries are still related to IPv4.

We now switch to the conclusions and recommendations of the report we published in July. This report said that, for the BGP and DNS protocols, the state of the French Internet, or the Internet in France, is quite OK. However, there is no evidence it will stay true in the future, so based on this analysis and these indicators we decided to make some recommendations, and we decided on them by discussing with our operators. The first one is to declare route objects and keep them up-to-date, in order to enable, in France, filtering based on route object declarations. We are also pushing people to deploy v6 in order to anticipate problems; a problem can be anything from security issues to money, like the cost of deployment, like it was presented this morning. Also, we are pushing people to apply BGP best current practices; we didn't present that today, but we did analyse this, and some French operators still let /32 prefixes go through the Internet, and some of them are still letting private autonomous system numbers go through the Internet, so again there are some improvements to make. And regarding DNS, we are pushing people to distribute their DNS servers across several ASes.

So, future work: we built a lot of tools. Related to BGP, we would like to analyse more of the routing table; today we are working on a small subset of ASes, which is the French Internet, mainly to prove our point, and we would like to scale to the full routing table. Today it takes around four days to process the full amount of data, and we want to decrease that to one day. We also want to reduce the indicators' limitations, because from time to time what we do is not perfect and we want to change that. As François said, we are only using the collector in London; some time ago we did some analysis, and the result we got reflected only the view from London, so that is something we want to improve: the next report will use at least three collectors. The next report will be published in mid-2014 and will present data for the year 2013; the indicators of course will be enhanced, and some parts will be in English, because so far, I am sorry, it is only in French, so we will write at least some parts, maybe the whole report, in English. So again, we have published two reports, the most recent in mid-July, and you can come and talk to us: Stefan is somewhere around, and there is François and me. Also, one or two weeks ago we published, not a report, sorry, a document related to BGP best current practices, that was discussed and written by us, the observatory members, meaning ANSSI and AFNIC, as well as the French operators. If you have questions, I think we have plenty of answers.
(Applause)

MIKE HUGHES: We do have time for questions.

Vesna Manojlovic: It's not a question, just a comment. I am very happy that you are using RIPE NCC tools and data, and thank you for finding a very good use case for them and for reporting on it, and I am looking forward to the cooperation in the future.

GUILLAUME VALADON: Thank you.

Remco van Mook: So, I have a question which is maybe a comment, which, I don't know. So I am confused as to what the French Internet is and who runs it and how we actually connect it to the real Internet, because apparently it's something else.

GUILLAUME VALADON: I don't need to go back to the slide. So, when we started the observatory we only focused on four major ISPs providing connectivity; that was two years ago. It was not really the French Internet, but it was the beginning of our work. When we did the second report, we said there are maybe more people; we know that because both François and I have worked for ISPs, and we tried to write down a list of the people we were used to dealing with, and we came up with 50 autonomous systems that are French, people we know, and that wasn't enough. So we took the list from the RIPE website, the Whois database, with the French country code attached to the autonomous system, the aut-num objects, and we compared the list we made manually with the list according to RIPE; there were some differences, and we decided that the main part of the methodology would be to look for French ASes based on the admin contacts, technical contacts, organisation and so on.

Remco: That is good but I still don't understand whether this is about networks operating in France, networks connecting to each other in France, French networks connecting to each other outside of France.

GUILLAUME VALADON: That is everything you said.

Remco: This is about some random cross-section of the Internet.

GUILLAUME VALADON: That is basically a subset of the Internet, and we believe this is representative of the people working on the Internet in France, meaning people actually providing services, French companies providing services, ISPs selling mobile connectivity; so it's a small subset of the Internet. The main goal was to start slowly, trying to build our tools, and then scale it up next year and maybe go bigger later.

Remco: Right. So my concern with language like 'the French Internet' is that people who write laws about this stuff actually hold this to be true and have their own sort of preconceptions about what that means and what it is and how they can control it. So, going forward, about what you are trying to achieve in writing recommendations, I am slightly concerned with a government talking about people having to use RPKI and perhaps mandating that. Ruediger is saying no.

GUILLAUME VALADON: We are not there yet. You are making the step for us.

Remco: And so, maybe something you might want to include in your 2014 report, if you are looking at making the Internet more secure, is following the lead of your Finnish colleagues and mandating the use of BCP 38 in French networks. Thank you.

GUILLAUME VALADON: Yes, that is something we could observe: the use of anti-spoofing measures. We would like to be able to measure that, and perhaps with probes we would be able to do some part of it.

AUDIENCE SPEAKER: Martin from AFNIC. Just to add something to answer the concern. Actually, when we started this work we happened to share, and we still share, a humble view of this French Internet, so we are not claiming we see everything; we are just trying to start somewhere. For example, as far as AFNIC and the data for .fr is concerned, it is only a third of the domain names used in France; we are not claiming it covers all the French domain names, but based on that sample we believe we can learn many things, while waiting to have more and more to learn from. So we would like to share this point of view and also have your feedback, whether there are some biases or whether there are things that wouldn't match if applied in other regions. It is very interesting to know whether this approach can be generalised or not, and if not, under which conditions.

MIKE HUGHES: I actually still have a question that is sort of not answered. You had the slide with the single point of failure ASes, the red dots. Are they French ASes?

GUILLAUME VALADON: No, and that is a good point, because, as I said before, we started with four ASes which are clearly French because we can buy services from them. Then we tried to expand that using our tool to find more French ASes, and then we built the graph, and what we can learn from the graph, maybe you can't read it, is that some of these dots are not French; I mean, according to our definition of the French Internet and French ASes. That is really useful for us as a security agency, because it means that maybe we are not talking to all of the network actors involved in France, like an American one, a Japanese one or one from Sudan, I won't give names, providing and selling transit in France, and so far we are not talking to these people. It's important to be able to map and understand how people are working and so on.

MIKE HUGHES: Thanks, that led into the other thing that crossed my mind, which is that obviously French-speaking people read a lot of French-language content. Do you have a feel for how much French-language content is out there, what percentage of it is hosted in France and what percentage is hosted elsewhere on the Internet, and therefore whether the connectivity of France to the rest of the Internet is actually as important, if not more important, than the stability of connectivity within France?

GUILLAUME VALADON: So far we don't have access to that information. One way to do it could be to crawl the pages written in French via Google and get the names, the IPs and the ASes, but we don't have access to this kind of information. We are talking with people; there is a French network operator group, and some people doing large-scale measurements and so on came to us; they might be able to give us information about content language and AS names, so at some point we might be able to say there is some French-language content on this system, and I think that would partially answer your question. For now we don't know how to get this information.

MIKE HUGHES: Yes. Right.

Elvis Velea: One question about the AS numbers: you are saying that they are French, or that some of them are French?

GUILLAUME VALADON: Yes.

Elvis Velea: They do not have a country code attached to them, so I suppose you are looking at the country code from the IP addresses that they are announcing? Which would be the allocation files or something like that on the FTP.

AUDIENCE SPEAKER: So you are using extended delegated files.

GUILLAUME VALADON: That was the first database we used, and to build our tool we are using, let's say, the physical addresses declared in aut-num objects and so on. As François said, we compared with an existing database, and that existing database was the RIPE delegation data; I forgot the name of the file.

AUDIENCE SPEAKER: Delegation extended file.

CHAIR: What is the response of the French operators or Internet companies to your reports? Do you see things getting fixed after your reports?

GUILLAUME VALADON: That is a good point. We hold observatory meetings with them two or three times a year, and we invite operators and they come. I think that is quite good, to see operators coming and talking with us. On the improvement side, something we didn't present today: we started to do what we call individual reports, because the public reports are anonymised; however, as an operator there is from time to time not much information you can get out of them, so we provide non-anonymised individual reports to people who ask. Anyone operating what we call a French AS number can come to us, and we will give them the report. That is helpful because, for example, François talked about one category of events where the problem is the route object declarations, and we know some operators are actually working to improve their route object declarations using our reports. We don't know yet if it will be OK, but next year we can check that. We will know if our work has some effect on the French Internet based on our technical indicators. We don't know yet.

MIKE HUGHES: OK. Right, well, if there are no other questions I would like to thank Guillaume and François for that. Thank you.
(Applause)

And therefore I would like to introduce our last speaker of this session, Jari Arkko who is the IETF Chair. There has been quite a lot of stuff in the news about this, so hopefully we have got plenty of time for questions.

JARI ARKKO: Thank you. My colleagues Stephen and Richard and I put together this presentation to talk about the effects of the espionage crisis, or the large-scale monitoring of Internet users and their traffic, largely from a technical perspective: what is the implication for the technical community, you guys, the IETF and so forth. I will be speaking about things that we are doing at the IETF, but we also have lots of opinions in the slides, so you shouldn't necessarily take this presentation as the official opinion.

So what I am going to cover is to go through what we know, talk a little bit about the implications, and then go into the part about what we can do, or whether we should do something as the technical community. I think I have half an hour; my goal is to finish the slides in 15 minutes, and at the end I have a call for action and I would like to get some discussion going. I am sure you guys have some opinions.

So the background: obviously, this year's leaks or allegations about some activities by the NSA and others have been a sort of media circus around the world, but I did want to point out that this is a wider issue. First of all, legal interception is a well-known affair, and it is possible in basically every country. However, legal interception is usually quite targeted towards a particular set of users, rather than wholesale capture of Internet traffic.

And secondly, even if you talk about bulk interception or wholesale interception of Internet traffic, that too is happening around the world in different forms; some countries have even passed laws saying that they can capture traffic going through them, and some other countries not only capture the traffic but also do something with it, like stopping your connections to harmful sites and such. In general, for those in the know, it is not a surprise that some things like this are happening, but it is a surprise that the scale is potentially, and we have no certain information, but the reports say that it is pretty big scale, so basically we are talking about something that affects every Internet user to a large extent. Some of the tactics have also been surprising, like trying to affect international standards, or standards that could be intentionally weakened. There have been a bunch of reactions to this: obviously Internet users, individual people around the world, have noticed and maybe worry even more about the security of the Internet. There has been lots of political posturing around the world, obviously. And Internet governance discussions have been affected; this is used as an argument there as well, in several places. Some countries, for instance, have called for what I would call a more diverse Internet, so they implement some specific actions like more Internet exchange points in their countries, more cables going all around the place, more direct connectivity and more services around the world. That is all good; these are very good things, not just for avoiding some espionage issues but also for the speed and price of the Internet; it's good to have more connectivity. But I have sort of gathered that there is also some desire to use this as a reason to make a more national Internet kind of thing in some places, and I think that would be a disaster for the global Internet, for the industry, and for the people who depend on the global Internet to do all kinds of things, not just use services in their own country.

The intelligence agencies have also been affected. So if you think about what is going on in NSA or other organisations, the ones that were not doing this have now a very bad case of NSA envy and they will soon be doing this, they will go to governments and law makers and budgets and everything and ask for a way to do this.

So that is going on. And of course, us, the technical community, maybe thinking, is there something we should be doing? And that is the main topic of this presentation.

But a little bit about the scope first. This is not the political discussion; we are not here to bash anything or criticise activities, that is not our goal, but we do need to understand what the real dangers in the Internet are for our packets. And we should have an understanding of what to do about it, or how the Internet technology should evolve in order to support a secure, private Internet for our users.

And I also wanted to make the claim that these really are attacks, or at least indistinguishable from attacks. We don't know what the retrieved information could be used for; it could be good or bad things, and certainly bad things can be done with information once you open it up and see what is inside, like thieves stealing passwords and many things like that. So, once the information is out there, bad things could be done with it, and if something is possible for some organisation, it is possible for others as well. So I think we should worry about this, and the claim is that anything indistinguishable from an attack must be considered as an attack.

Let's talk about some of the technical things. Obviously there is very little information: we can't really trust what the newspapers or the various experts are saying, because they have no certain information, and we can't really trust the governments either; they seem to occasionally formulate their statements in a very careful way, and that is not always reliable. So what are the possible attack types?

Unprotected communications. Well, if you don't protect your communications, then people on the path can see your packets. What a revelation. The other thing: someone may have direct access to the peer that you are communicating with. You are working with an e-mail provider or a social network site or whatever, and wherever they are operating, if they are forced to hand over your traffic, there is nothing we can do. Direct access to keys: there was a bit of news about an e-mail provider called Lavabit that went out of business, possibly because they were being forced to hand over keys; so it is not just handing over traffic, but also an issue of handing over key material that could be used by others to open the data.

Third parties. There have been some news stories, very hard to determine if they are true, about compromised CAs leading to certificates being issued in the name of major Internet organisations, and those fake certificates then being used to subvert someone's communications that were supposedly going to these important organisations, but weren't.

Implementation back doors, on any level from firmware to various pieces of software. One particularly nice way of doing this is if you can affect the random number generator, say something that you use for generating keys: it looks good but it actually is not as random as you thought, and then the task of brute-forcing your communications becomes a little more feasible.

Vulnerable standards, we at the IETF and elsewhere worry about this claim quite a bit.

So the one, or the only, public case that we know of, or at least that I know of, is the NIST random number generation algorithms: one of them actually came out of a contribution by the NSA, and it was recently stated by NIST that it is not recommended for use any more. So that is one thing. Once we learn of something like that, we can certainly take it out of implementations and standards as such. We have weak crypto, and that isn't a new thing: the IETF, you guys, everyone else has been working on the security of the Internet for a long time, and we keep deprecating bad algorithms as new information comes around. This is really the same situation once again. One particular thing that people are worried about is RC4, which might perhaps be vulnerable; I am not the expert on the crypto part, but while it's old, it's still quite frequently used by TLS connections, so it's an attractive target for an attack, and we will probably deprecate it. There have been some other claims about vulnerabilities implanted into standards, IETF standards, and in many of these cases I have either talked to the people involved or been a little bit involved personally myself, and at least our perception is that it is probably unlikely, so don't believe everything that is said in the press about these kinds of things or claimed by someone. That is not to say that there aren't problems in things like IPsec; there are certainly real problems, and I have complained about those myself, usability and many other things, but that is not necessarily because someone planted an issue there; it's more that it's a hard problem, or we were not smart enough to figure out how to do it right.

What can we do? Technology may help, but it's important for us to understand the limitations of technology. If I have a perfect communications scheme for talking to you and we exchange information securely, but one of us leaks the information, then we are not very secure, right? So there are some fundamental limits to what we can really do with technology. Nevertheless, it is believed, by me and many other people, that it would be useful to provide some technical means to go further than we have done so far. We can prevent some attacks, perhaps even many attacks, and an important part is also that if we do things right, then we can raise the cost of doing something like this for a criminal organisation or an intelligence organisation or whatever; make the cost a little bit higher, and not necessarily in monetary terms. One way is that you could force them from passive attacks to active attacks and increase the rate of detection. Also, PR-wise, it's kind of important that the Internet community acts. We need to be doing, and be seen to be doing, as much as we can, because we are kind of responsible for the security of the Internet, and the time window for doing something is now, when people are actually interested in this topic.

What are we doing at the IETF: we are obviously discussing the topic openly, as is our style. We have a list called perpass, for pervasive passive monitoring. For the IETF 88 meeting coming up we have the technical plenary dedicated to this. We have several working groups discussing related matters, the IAB is going to organise a workshop, maybe next March, and there are other forums like this one where we talk about it. We want to work on the problem, the threats and potential solutions, and we have some specific proposals; I won't go into the specifics, but as an example we are working on a new set of recommendations for what you should do in TLS and what you should deprecate and require. There are some ongoing efforts with potentially high impact on this matter: the HTTP working group working on HTTP/2.0, which is, I think, a pretty significant change to the web protocol stack, and one of the things we are discussing there is potentially mandatory security.

TLS 1.3 also started well before this thing came to publicity; you can read the current drafts, and it is hopefully going to be more secure, so those efforts are going to be very helpful, I think. Some directions for what we can do: unprotected communications, if you can get rid of them, that is good. For vulnerable standards the main defence is public review: if we all look at whatever proposals come out, that wide review is the defence against anyone trying to accidentally or maliciously break them.

We also need to decommission protocols maybe, but at least algorithms, on a speedy timescale, and maybe this is a good time to do some additional review of old things; I mean, the list of things inside TLS or IPsec is just huge, there is tonnes of stuff, and I don't know if anyone really knows whether all of it is good. We should make sure that we actually look at that.

Implementation back doors: that is a difficult one. Diversity of implementations will help, I think, if we don't all just rely on one thing; it was raised earlier, as an example, that lots of people on the planet rely on one DNS server address. Open source would help, I think. Reviews of different sorts, both for open source and closed source; there are processes and reviews for that as well.

And then we sort of get to my call for action. This is, I guess, a painful crisis for lots of people, maybe a disappointment in some sense, with some people suffering in various ways, and it worries us and its impacts worry us, but it's also a reminder that there are some challenges in Internet security, and I am sure I am not the first one to tell you about that. Maybe this is the time to take it even more seriously than we have done in the past. If you think about Internet traffic in general, what is most of the traffic like? My observation at least is that security is off by default; the Internet is insecure by default, and it's only if you do banking or something special that you even attempt to turn security on, like for web traffic.

Maybe this could be reversed. What if it was secure by default? You may say, well, that is a big change that can't be done. I say no, maybe this is the perfect storm where this actually could be done, because first of all there is this PR thing, you should never waste a good crisis, so there is motivation, and the web stack is undergoing a lot of change; TLS traffic was, for various reasons even before this, on the rise. So I think this is more doable now than it was, say, last year or five years ago. And maybe we should do it. That, of course, is just our opinion, and I am sure there are many challenges on the way, but I thought I'd take this opportunity to ask that we actually try to do this, as opposed to just complaining about the espionage crisis; we should go and do something about it. It's not about reacting to a particular problem or a particular crisis; that is just one symptom in the overall picture, and we should fix the fundamental issue. With that, the only thing that I want to remind people of is that we have the IETF meeting coming up in Vancouver, a nice city, a couple of weeks from now. It is an interesting programme in many different ways, including this topic. The topic is going to be discussed in the technical plenary on Wednesday, which includes the IAB and so forth; it's going to be very interesting, and on that day we are going to have a BoF on the perpass topic, and other working groups meeting through the week will touch on this as well as other topics. So, that is what I had to say, basically: hopefully providing some high-level information about what we know and what we don't know, and trying to energise you to do something about it, to do something about the security of the Internet. I am sure we have some time for discussion.

MIKE HUGHES: Plenty of time for discussion.

Vesna: There was a comment from the chat, from Sasha, who speaks for himself and asked me to relay this comment: you can't divorce the technical and political issues, because what is to stop a government from outlawing encryption or forcing the hand-over of keys, as is the case in the UK already?

JARI ARKKO: That is true of course, but I think we can have general-purpose Internet technology that is secure, and we already have that to a large extent. The thing is, if the technology is such that it's very, very easy to look at your traffic in the clear, then there is really no barrier to doing bad things, for the criminals or whoever. But if you actually become secure in a cryptographic sense, then the barrier for doing something is higher; as an example, the threat of getting caught doing this is increased. So I agree that you can't divorce the different parts from each other, but it's not an excuse to do nothing.

Scott Leibrand: Limelight Networks. I totally get that we can encrypt all of the Internet with HTTPS. I am not convinced that is a good idea. Having worked on large-scale systems that do HTTP and HTTPS, it's quite clear HTTPS has significantly more overhead and makes troubleshooting more difficult, and it often is done for absolutely no reason except to check a box on someone's security audit. In particular I am talking about data that isn't personal data: it's media files, it's images, it's things that really don't need to be encrypted; perhaps they need to be signed, but that is a different issue, and much of this content already has DRM. So I would caution you to set your target appropriately. 90% of the bits on the wire being on port 443 is, I think, a very bad target; 90% of web page requests might be a more reasonable one, because web pages are where personal information appears, if anywhere. So just be careful not to throw too much additional technical complexity up to make our lives more difficult in a situation where it may not be required.

JARI ARKKO: Yes, that is an excellent comment that I actually agree with, and I did say that there are some challenges. I didn't really go into the technical details, and no one has those technical details, but I will just point out that there are many ways of doing this. There are some reasons why we have middle boxes, as an example; some of them do caching, and depending on how you implement security you might either enable caching or not, or other similar things. So maybe it's not TLS and HTTPS, but it might be signing and encrypting your objects, as an example; that might be one avenue. But again, I really don't have the details; we need to work on that.

RANDY BUSH: I only had one point but now I have two. I disagree with that. You can presume that the attacker has some ability to decrypt; if you do not encrypt everything by default, then you tell them exactly where to apply their resources. But the other half is, we are a bunch of operators here; you should know that your web server, your SMTP server, your IMAP server, all have crypto algorithm specifications in the configuration files that you have ignored, because who the hell knew what that stupid string was, and that stupid string, among other things, enables RC4, etc., etc. Go look into that, or if you want some recommendations I will even take the e-mail, or we can start writing a document, but choose the crypto algorithms for your web server, your SMTP server, your IMAP server, etc., today. You can do that today and seriously improve your operational security; the NSA didn't go after the math, they went after operational weaknesses.
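
As one concrete illustration of that advice, here is how the same "stupid string" looks in Python's ssl module: pick the protocol options and the cipher string explicitly instead of accepting the defaults. The cipher string below is only an example of excluding RC4 and other weak options, not a vetted recommendation; web, SMTP and IMAP servers have equivalent directives in their configuration files.

    import ssl

    # Server-side TLS context with explicitly chosen protocols and ciphers.
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.options |= ssl.OP_NO_SSLv3                 # drop the legacy protocol
    ctx.set_ciphers('ECDHE+AESGCM:DHE+AESGCM:HIGH:!RC4:!aNULL:!MD5')

    # Show which cipher suites that string actually enables.
    for cipher in ctx.get_ciphers():
        print(cipher['name'])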

JARI ARKKO: Thank you, very wise words again.

LEE HOWARD: Thanks again for being here. I agree that we need to work on the tools; crypto is an arms race, it is always being defeated and will always be, as the mechanisms for defeating it get bigger and bigger. The main point you are making is that we need greater deployment of tools, and obviously there is a gap: we need to use the tools that we have, we can't deploy tools we don't have, and we need to work on more tools. But the biggest thing that I missed in what you were talking about is the capture of metadata: even if you encrypt the data it doesn't matter, because if you can see what the end points are, you have a pretty good idea of what is going on, or who is communicating with whom, and that is a place where we need to apply some expertise. I am also guessing that the Wednesday 9 a.m. technical plenary will not be an open bar. Maybe it will.

JARI ARKKO: Yes, good points, thank you.

Vesna: Another message from the chat. I am reading a comment from Sebastian from Noris Network AG: the problem with this, and this might be a bit pessimistic, is that the current political leaders are not willing to do a lot against the situation, and there is not a lot we can do from a technical perspective against that. I think there should also be a lot more political lobbying from us.

JARI ARKKO: Yes, and that is a good point: political lobbying by some parties, not necessarily the RIPE forum or the IETF as such, but it could be the same people perhaps. At the same time, and we have discussed this with Bruce and others, for instance, the issue really is that there are multiple venues of attack, and if you make some improvements here and there, there is no single improvement, technical or otherwise, that will forever fix this problem; but if you do multiple things, then the attacks, whether criminal or otherwise, have to move to the other venues, and so it is kind of an arms race: you force the other player to do harder things by blocking a particular path of attack.

AUDIENCE SPEAKER: Martin, European citizen, talking for myself. So, I am going to be a little bit ironic. If I understand your recommendation of setting a high goal of 90%, maybe it will be reached very soon, because actually we already have HTTPS to Gmail, Facebook and so on; and what comes after that? The data can still be passed to some unknown party, so is it a good policy?

JARI ARKKO: That is a good point, but I mean, again, there is no single solution that will fix all of this. You know, you can block some parties from accessing your traffic by going encrypted, it doesn't block everyone. You may need some political action or development in the world or more diverse Internet in some sense to fix that. But if you don't protect your traffic to begin with, then you sort of can blame yourself.

RANDY BUSH: IIJ. I agree with Sasha and Sebastian that it is both a political problem and a technical problem but the political one is a bit blacker, but what we don't want and would be very embarrassing is for the politicians to say, okay we want to move forward on this and we haven't provided the technology, right? So we need to clean our side of the street.

JARI ARKKO: I am sure there are many technical organisations that would be eager to jump in and say, ah, but we did this telco thing 50 years ago and we can do a secure system for you.

RANDY BUSH: I remember blue boxes.

JARI ARKKO: Yes, we won't worry about that.

AUDIENCE SPEAKER: Bart de Bruijn, Hibernia Networks. Earlier on you mentioned that diversification of networks would lead to more security, but how exactly is that? Because if you look at the revelations by Snowden and the massive-scale taps that are taking place, all traffic traverses some fibre somewhere, and when that is tapped, how does diversifying your network help, except for increasing costs for everybody?

JARI ARKKO: Well, of course there are communications links everywhere, but the question is who has access to them. To give you an example: in Finland there has been some talk about building additional sea cables going directly to Germany and not through Sweden, for instance, and when you increase the connectivity like that you take some players out of the game. In this case I don't know if the Swedish government is spying on Finnish citizens' Internet traffic, but they won't be doing it after that cable is constructed.

AUDIENCE SPEAKER: I could imagine not everybody has the budget to put in their own sea cables.

JARI ARKKO: Of course not, but the larger point is that the Internet is in any case developing to be more connected, and that is good for us for many reasons, right? You don't have to go through one link to get from South America to the rest of the world; you could actually have multiple paths, and we are going to need that for other reasons too. So again, it's not a solution that prevents all our problems; it's something that helps a little bit.

ALEX LE HEUX: Speaking for myself. It's all very fine discussing crypto protocols and algorithms, but how relevant is that when there is this small number of gigantic companies that know all our search history, have all our e-mail, know who we talk to, etc.? The attack surface they present on the technical side is maybe not so large, but on the political and policy side it is huge. So you can encrypt all the traffic on the Internet, but all that data will still be available, easy to get, from a handful of places.

JARI ARKKO: Yes, I feel I am repeating myself, but it is not a single tool, it is multiple tools together. So more technology, maybe some legal or political things, and a more diverse Internet, so that we don't all use the same provider for everything. Those would be good things, and together this would actually help. And even if you don't have all of the parts, there will be some help from the technical improvements. Again, this isn't necessarily just a reaction to this year's crisis; we would need it otherwise too, just for preventing random criminals from accessing your traffic.

ALEXANDER: The question is, we have discussed communication security and cryptography, but what about hardware? How can I be sure that my platform wasn't tampered with, that my random number generator wasn't tampered with? Currently there is no open CPU and that sort of thing, so I cannot be sure. What can we do?

MIKE HUGHES: Randy was saying, what about the compiler and the other parts of the system? Things you can't necessarily see.

JARI ARKKO: All good questions and I have a feeling that Randy has some answer for that, actually. We should not despair just because we have multiple difficulties. I mean, we can improve the situation, we need to improve the security, in any case, right?

AUDIENCE SPEAKER: One short side note: with law enforcement in Russia lately, we see that SSL traffic, HTTPS traffic, is growing much more rapidly, in Russia at least, so people are getting the message. We are moving to HTTPS faster, so we are getting there.

JARI ARKKO: Yes, we see that elsewhere too.

MIKE HUGHES: That was directly in response to this point?

RICHARD BARNES: Also part of the Internet Engineering Steering Group. I just want to re-emphasise something Jari said a couple of times: the name of the game here is making the job of the attacker harder. Yes, it's still conceptually possible for someone to mess with your hardware, but if they have to do that because you are encrypting your traffic and you are always exchanging signed objects with your service provider, then you have made their job a lot more expensive; they can't simply go to a provider and ask for the data. Encrypting traffic on the wire, so that more content is encrypted, improves security by making attacking the system more expensive and requiring these intelligence agencies and law enforcement agencies to spend more money per bit of content, so that it becomes infeasible for them to do this large-scale bulk surveillance.

MIKE HUGHES: Your point is that right now this information is being hoovered up because it's cheap to hoover up.

RICHARD BARNES: And all the things we are talking about increase that cost.
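
A minimal sketch, not from the session itself, of what "encrypting traffic on the wire" with certificate verification looks like in practice, using Python's standard ssl module; the host name is a placeholder assumption:

```python
# A minimal sketch: open a TLS connection with certificate verification, so a
# passive tap on the path sees only handshake metadata and ciphertext.
# "example.com" is a placeholder host, not anything referenced in the session.
import socket
import ssl

context = ssl.create_default_context()  # verifies the server chain against the system CAs
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print("negotiated:", tls_sock.version(), tls_sock.cipher())
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200))  # first bytes of the response, readable only at the endpoints
```

The point being made in the discussion is economic rather than cryptographic: once this is the default for most traffic, an attacker has to target endpoints or keys rather than simply recording the wire.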

RANDY BUSH: Back to the previous point. Nobody is auditing the tool chain from the command line down to the hardware. Go read Ken Thompson's Turing Award paper on trust and compilers, and why you can't, and why that is death. Then read David Wheeler's paper on diverse double-compiling and being able to actually get a compiler you have some faith in. But there is the rest of the tool chain, indeed all the way down to the CPU, and all the way down to the random number generator on the Intel chips, etc. The Tor project is not built on a validated tool chain. I have been looking for a month now for validated tool chains. There don't seem to be any. This is a little scary.
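
A minimal sketch, under stated assumptions, of the diverse double-compiling check from David Wheeler's work that Randy refers to; the compiler names, paths and single-file source are hypothetical, and a real check additionally requires reproducible, bit-for-bit deterministic builds:

```python
# Diverse double-compiling (DDC), sketched: build the compiler-under-test's own
# source with (a) an unrelated, trusted compiler and (b) itself, then rebuild the
# source again with each stage-1 result and compare the stage-2 binaries.
# All binary names and paths below are hypothetical placeholders.
import hashlib
import subprocess

def compile_with(compiler, source, output):
    """Invoke a compiler binary on the compiler's own source."""
    subprocess.run([compiler, source, "-o", output], check=True)

def digest(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

SOURCE = "compiler-src/main.c"

# Stage 1: build the source with a diverse trusted compiler and with the compiler under test.
compile_with("./trusted-diverse-cc", SOURCE, "stage1-diverse")
compile_with("./compiler-under-test", SOURCE, "stage1-self")

# Stage 2: use each stage-1 binary to rebuild the same source.
compile_with("./stage1-diverse", SOURCE, "stage2-from-diverse")
compile_with("./stage1-self", SOURCE, "stage2-from-self")

# With deterministic builds, the two stage-2 binaries should match bit for bit;
# a mismatch means either non-determinism or a self-reproducing (trusting-trust) backdoor.
if digest("stage2-from-diverse") == digest("stage2-from-self"):
    print("stage-2 binaries match")
else:
    print("stage-2 binaries differ: audit the tool chain")
```

The sketch only shows the shape of the check; Wheeler's paper sets out the determinism and environment conditions needed for the comparison to be meaningful.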

JARI ARKKO: Thanks.

AUDIENCE SPEAKER: Yannis Nikolopoulos, speaking for myself. I think we do need a political solution very urgently, because as long as you have secret orders, be it in Germany or the US, against IXPs or providers, you can do whatever you want. If you have 90% encrypted traffic, there will be some law enforcement telling us that it is indeed needed that they can decrypt the traffic, and there will of course be access by the secret services around the world to this data, be it legal or not. So 90% encrypted traffic does not help secure us from the secret services, because they will have secret orders to access the data. Just a warning.

JARI ARKKO: Again, increasing the cost will not solve the whole problem, at least not alone.

AUDIENCE SPEAKER: It's not costly for them to come to you with a secret order, and they have more arguments that it's needed, for law enforcement, for child trafficking of course, to access the data.

JARI ARKKO: Yeah, maybe, in some or even many cases. But you can still protect your data if you, for instance, trust a particular CA more than the other ones, or you have certificates for the other side, the web server and so forth. Yes, there are problems.
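
A minimal sketch of what "trusting a particular CA more than the other ones" can mean in practice, again using Python's ssl module; the pinned CA file and host name are placeholder assumptions:

```python
# Restrict trust to a single pinned CA instead of the full system trust store.
# "pinned-ca.pem" and "mail.example.net" are placeholders for illustration only.
import socket
import ssl

context = ssl.create_default_context(cafile="pinned-ca.pem")  # only this CA is trusted

with socket.create_connection(("mail.example.net", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="mail.example.net") as tls_sock:
        # The handshake succeeds only if the server's chain leads to the pinned CA;
        # certificates issued by any other CA, however widely trusted, are rejected.
        print("peer certificate subject:", tls_sock.getpeercert().get("subject"))
```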

JIM REID: Speaking for myself. Jari, you mentioned earlier that technology was the answer, or perhaps part of the answer. I have a problem with that, because this is more to do with social and political things than with technology. We can talk at length about using this crypto scheme or that one, and whether one is better than some kind of anonymising technology, but that is the wrong debate. I think the question is who is getting access to our data, and it's not necessarily the stuff we are encrypting but also the other stuff: what is Google doing with the data we present to them, what are eBay and Facebook doing, what is going on with all those cookies?

JARI ARKKO: Yes always do, yes.

JIM REID: What is going on with all that stuff, and how can I then be anonymous, even if I take reasonable steps at encryption and all the rest of it, when all this other data is out there? I think we have got to be careful and not be blind to the fact that technology is not going to solve those particular problems. You mentioned the attacker earlier, but we didn't really define who that is: is it the hackers or the governments, or is it Google?

JARI ARKKO: From my perspective, first of all, I said technology alone does not help. I still think it helps somewhat in terms of increasing the cost, and together with other mechanisms or other types of approaches it will help even more, but it's not a solution by itself. And regarding who the attackers are, that was kind of my point: if someone is opening your encrypted channels, for instance, that could be done by many parties, and it's all bad, it's an attack no matter what, and we need to worry about preventing that. It's our job.

JIM REID: Or buying tinfoil hats.

MIKE HUGHES: Geoff, final comments and the mics are closed.

GEOFF HUSTON: Taxpayer.

JARI ARKKO: Geoff, I didn't know that you were funding all this.

GEOFF HUSTON: As are you, and you are talking about increasing the costs to agencies ??

JARI ARKKO: Wait a minute, I am a taxpayer in Finland, and that is one of the countries where the intelligence agency is having a case of NSA envy: those guys can do stuff, please give us our way, too.

GEOFF HUSTON: My point is, it's not them and us in some cases; the folk who are doing this work were actually agencies funded by you and me as taxpayers.

JARI ARKKO: Yes.

GEOFF HUSTON: And you kind of wonder exactly what the economy is that we are operating in and why we need to increase these costs. I notice, just to think about this slightly more, that in the US personal protection is actually part of the US Department of Trade. And this whole issue of personal privacy and protection of data is not about the ethics of doing this, it's about doing what you said you were going to do. It's honesty in advertising. And in some ways I think we are being quite deceitful with ourselves when we offer this myth of privacy and security, when in actual fact a lot of it is myth: realistically, this public communications environment is incredibly public, and the unintentional exhaust from the data of what we do, every innocent query, leaves a rich trace and a rich track of everything we do. Thinking that somehow, because I secure one bit, I have got privacy is naive. In fact, being a little bit more honest and open about the fact that this network is like the public space out there on the street, that almost everything you do is observable, and starting from that tenet, rather than trying to think that technology can invent me a cloak of invisibility, is perhaps a little more honest. Thank you.

JARI ARKKO: Wise words. Thank you. I will just point out one thing, which is that cost isn't just monetary cost; it's also other types of cost, like being embarrassed by getting caught.

MIKE HUGHES: So thank you. One final remark.

AUDIENCE SPEAKER: Thank you very much. We e-mailed quite recently. I have been hearing a lot of things, and I am not a technical person, but what I have been hearing is this: the question is who is your enemy here. We heard three different ones. The first is the large corporations that pick up everything we do and sell it; the second one is governments, are they the real enemy or not; the third one is criminals that are taking everything quite easily from the Internet. So basically, which of these three do you want to fight? I would go for the third one, because they do the most harm. The second one you can probably condition by making the barriers a little bit higher, so that they only come back to companies with valid questions: I need this because of that, and the law supports it. And the first one, that is up to governments to discipline, if they want to or not. So I think that is hopefully a little bit of a tie, and what is your main priority, I think, is the question here.

JARI ARKKO: Thank you.

MIKE HUGHES: And that leads us very nicely into the coffee break, I think. Jari, thank you very much for doing this session. Thank you.
(Applause)

We reconvene in here at 4p.m. Thank you.