
These are unedited transcripts and may contain errors.


Plenary Session
11 a.m. 15 October, 2013.

CHAIR: Good morning, we are about to start our second session, which is due to start at eleven o'clock, which is about now. So, please take your seats and fasten your seat belts, as someone suggested.

Well, our first presentation will probably not be about how to break the bank, but about how to make yourself more resistant. It will talk about the so-called Triple Crown attacks on the financial sector, and I would like to invite Roland Dobbins.

ROLAND DOBBINS: My name is Roland Dobbins, I work for a company called Arbor Networks doing network visibility stuff and DDoS mitigation, and I'm here to talk to you today about a very high-profile DDoS attack campaign which was launched against US financial institutions starting in late 2012 and moving on into 2013. It was originally called the Triple Crown series of attacks, branded that way by folks in the security community because there were three main attack methodologies that the attackers used initially. Later on, based on communications from someone who had demonstrable foreknowledge of the attacks and posted publicly, the security industry seemed to adopt the term "Operation Ababil", which is what the attackers themselves called the attack campaign, so we have decided to use that nomenclature.

Just to be clear, most folks here know what a DDoS attack is. It's an attack that consumes finite resources and exhausts state. It's an attack against availability. Of the three characteristics of information security, confidentiality, integrity and availability, DDoS attacks are attacks against availability. They are trying to take down the website or the DNS server or the game server or what have you. And so when we are defending against DDoS attacks, what we're trying to do is maintain availability in the face of attack.

So, this story begins in late September of 2012. A group calling themselves the Cyber Fighters of Izz ad-Din al-Qassam posted a manifesto of sorts on pastebin.com, calling for attacks against US-based financial institutions, supposedly in retaliation for the video trailer of a movie that does not actually exist, which was posted on YouTube. Again, originally these attacks were called Triple Crown, but we adopted the nomenclature of the attackers, so we refer to them as Operation Ababil. So the attackers announced in advance that they were going to launch a wave of DDoS attacks and then proceeded to do so, and there was some distinct phasing to these attacks. The first phase lasted roughly from September through December; the second phase we saw in December of 2012. At that point, someone who was in league with the attackers and had foreknowledge of the attacks posted on pastebin.com that the attacks were supposed to continue for another 56 weeks. There was a pause during the third wave of attacks. Another ideologically motivated group, Anonymous, had announced an attack campaign that they called OpUSA in May, and so whoever was working with the attackers behind the DDoS campaign we are discussing today posted on pastebin and said, well, out of respect for this DDoS campaign that Anonymous have announced for next week, we are going to pause our attacks for a week and then resume.

So, the Anonymous attack never materialised. These attackers did pause their attack campaign as they promised, but the pause continued for a long, long time. And unlike the previous phases, there was a very, very brief, abortive phase 4 that kicked off in July of this year. It really only lasted for a few hours in one week and a few hours in the next week, and so far since then we have not seen a resumption of this particular attack campaign, at least one that can be identified by the methodologies of the attackers as well as their tendency to post on pastebin about what they were going to do.

So, the evolution of the attack campaign over time. During phase 1, the attackers would target one or two banks at a time. They were sending a lot of HTTP and HTTPS requests to the banks. They had done some reconnaissance ahead of time and they had identified CGI scripts that would run back-end database searches and consume a lot of resources. They were also hitting the SSL login authentication subsystems of the banks and financial institutions, and they were doing some DNS flooding, but it wasn't really DNS flooding; they were sending large packets, around 1,300-odd bytes in size, that were malformed DNS requests. They had some of the elements of a legitimate DNS request but they were malformed, and the thing is, they were packeting web servers with this traffic. This malformed UDP that they were hurling at the servers was actually the bulk of the traffic in terms of volume. And the layer 7 vector was very effective because they had done their homework: they could cause problems with load balancers, with stateful firewalls that had been incorrectly set up in front of servers, and so forth, and the sheer volumetric aspect of the UDP flooding was very problematic. During the first couple of phases of the attack campaign they started out at about 57 gigabits per second and about 30 million packets per second, and they upped that later on. In the second phase they were attacking more banks simultaneously. They had moved almost entirely away from HTTP and were concentrating totally on SSL, so they were hitting the login authentication subsystems, and at a lot of banks and financial institutions you can browse the entire website via SSL, which is nice for privacy and confidentiality purposes. You can also do things like download large brochures via SSL, so the attackers were doing this, which put a tremendous strain on the SSL termination points, the load balancers or reverse proxies or whatever they were using, as well as consuming a lot of the upstream outbound bandwidth from these organisations, who all host a substantial amount of their web content and their applications, so there was kind of a dual benefit to doing that.

The attackers also started to move down-market. They started attacking regional banks and regional credit unions, which likely did not have significant IT staff and didn't have a lot of experience in dealing with DDoS attacks.

In Phase 3 they were targeting six, seven, sometimes eight different financial institutions at the same time. Again, mostly SSL, and these malformed DNS queries. They had really moved down-market at this point, and they also attacked a couple of institutions here in Europe as well, so they weren't just targeting North American financial institutions.

And then in the very abortive phase 4, for a few hours they attacked a couple of institutions simultaneously with the same attack methodology, and so far, since August of this year, this particular attack campaign has not resumed.

The botnet was kind of interesting because it was kind of a back-to-the-future botnet. Typically, we see botnets with tens of thousands, hundreds of thousands, sometimes millions of individual bot constituent members, and they are typically desktop PCs on broadband access networks, mobile networks or enterprise networks. This was a server-based botnet, and a lot of the industry press were saying this was something new and different. That's not true. It's back to the future, because back in the early days of what we now call the Internet, what kind of machines were on those networks? They were servers. And you would get a shell, and maybe you were into IRC, and you would set up an eggdrop bot and run that on your IRC channel to control it, and then you would get mad at somebody and CTCP flood him off of IRC. That's where botnets came from in the first place.

The attackers who compromised these machines did so by using search engines to search for strings identifying versions of content management applications that had known vulnerabilities but were still out there running on boxes. That way they would identify targets for their exploits and compromise the servers. They did not have root on these boxes; they were running in the same user context as the user who was running the vulnerable content management application. And there was a game of cat and mouse going on during this attack campaign: the attackers would recruit more servers as bots, they would use them in the attack, various ISPs and hosting operators would identify them and shut them down, and the attackers would of course try to recruit more bots. A low point for this botnet was maybe 3,000 bots; at its peak it had something like 20,000 bots. So, a relatively small number of attack sources.

The code that they were running was PHP-based, and it could do things like send GETs and POSTs over HTTP and HTTPS. It could also generate UDP, this malformed UDP that we were talking about. We saw, on a couple of occasions when the attackers got frustrated that the defenders were able to successfully defend against the server-based botnet, that they would switch over to a more common type of botnet and do things like send floods and things of that nature in addition to using this heavy-hitting botnet. So this attack campaign was kind of different to most attack campaigns in that it was announced prior to the date the attacks were taking place, and whoever it was that was in communication with the attackers would post the specific list of targets that were going to be attacked and the time windows during which they were going to be attacked, so they were initially very, very specific about this. Again, this was a server-based botnet, which we don't see that much of these days: a relatively small number of high-PPS, high-BPS sources as opposed to a larger number of lower-PPS, lower-BPS sources.

We saw that the attackers were very interested in monitoring the efficacy of their attack. They were focused on that. When the defenders would defend successfully against one area, the attackers would try to change it up in order to be able to impact the availability of the sites that they were attacking. They were constantly tinkering with their botnet code as well and would push out pretty constant updates. We were also generally able to infer when a new wave of the attacks was going to start, because we would see some kind of preparatory activity when they were tinkering with the bot code and uploading it before the attack started.

They used servers because, first of all, it was easy to find vulnerable machines by using search engines to look for the strings in these PHP-based applications. These machines typically have larger amounts of upstream transit bandwidth available, so you can get higher bandwidth and throughput per host. They were always on. A lot of these machines were in smaller IDCs who maybe didn't have the instrumentation to have visibility into their network traffic, so they weren't even aware that these customers' machines had been compromised and were being used to launch these DDoS attacks. This was nothing really new.

There were three main families of attack code, called Brobot, KamiKaze and AMOS. These names were taken from comments within the code that the attackers wrote themselves. Again, they did layer 7 with HTTP and HTTPS and then the heavy lifting with the malformed UDP: multiple vectors simultaneously, which we don't see a lot of, and when we do, those are always a little bit more challenging to mitigate, because it requires focusing on multiple different attack methodologies and their impact on different subsystems of the defending sites at one time.

Again, the attackers had done their reconnaissance ahead of time. One thing that was interesting was that in Phase 3 of these attacks they started trying to attack the actual network infrastructure of the ISPs and managed security service providers that were defending these sites. For most of them it didn't have any effect, but one in particular, one of the very large ISPs who was defending against this attack, had recently renumbered their mitigation centres, where they would use BGP to pull traffic into the mitigation centre, scrub out the bad and put back the good. They had renumbered the IP address block but they had forgotten to update their infrastructure ACLs. They had four different mitigation centres, and one morning when the attack was going on, mitigation centre A went down, so they switched over to diverting traffic into mitigation centre B, and then that went down. They switched over to mitigation centre C and then that went down, and then they called us and told us what had happened. I said, it sounds to me like your infrastructure is being packeted. They said, we have iACLs and all that stuff. I said, I know, but it sounds like that's what's happening. They did an investigation. It turned out they had renumbered their mitigation centres a few weeks earlier but had not updated the iACLs to reflect that, and the attackers had stumbled upon this and were able to DDoS the sites being protected by this very large ISP for a few minutes by knocking over their network infrastructure, until they got the iACLs updated.

So, this was not a hit-and-run kind of thing. It was an extended attack campaign. It took several people to do this over a considerable period of time, and they were pretty well funded by someone. The attacks succeeded, as almost all DDoS attacks do, because the defenders were largely unprepared. They had brittle, fragile, non-scalable systems from an application-layer perspective, they had little or no visibility into their traffic, they didn't have mitigation techniques worked out, and they didn't have what we would call an operational security team to deal with this stuff. Some of them had contracted with ISPs and MSSPs for DDoS mitigation services, but then there had been no customisation of those services to account for the specific servers, services and applications that were being protected, and so there was a lot of unpreparedness on the part of the actual targets of the attack, and on the part of the ISPs and managed security service providers who were supposedly going to defend them against this attack.

The ISPs and MSSPs in particular: some of them had extra mitigation capacity and tools and techniques they hadn't deployed. They hadn't customised their mitigations and countermeasures to protect the specific servers and services that their customers had; they were trying to use a cookie-cutter solution, and that doesn't work well. Some of them were pretty bureaucratic. They are used to dealing with an attack from a conventional botnet with a large number of low-bandwidth sources, and it was kind of confounding for them at first to deal with a small botnet consisting of high-PPS, high-BPS servers that were actually bots.

The enterprises had their stateful firewalls and their load balancers in front of their systems. These are stateful devices, and it's very easy to knock them over. They promptly went down due to state exhaustion. So when these devices were present, it was very important to protect the virtual IPs on the northbound interfaces of these devices as if they were the servers themselves. Interestingly enough, the UDP 53 packet flood against web servers worked quite well, because a number of these financial institutions had not instituted basic policy [ACLs] in front of their web server front ends. Instead of restricting the traffic to, you know, high-port TCP to TCP 80, and ICMP type 3 for PMTUD, they would allow anything through, and so this UDP would hit a virtual IP on a load balancer, rapidly fill up the state table, and the load balancer would fall over. So some of the best current practices, the very basic things like enforcing network access policies, some of these financial institutions hadn't done. They didn't have visibility into their traffic. They hadn't practised DDoS response; even if they had instituted a set of procedures and had some plans and teams for DDoS response, they hadn't actually rehearsed them, so the first time they tried to do all this was in the midst of a very major attack.
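
(As an illustration of the kind of front-end access policy being described, here is a minimal sketch in Python. It assumes a web front end that only needs inbound TCP to ports 80 and 443 plus ICMP type 3 for PMTUD; the field names and exact rule set are assumptions for illustration, not any institution's actual configuration.)

```python
# Illustrative sketch only: the sort of basic network access policy described
# above, expressed as a simple packet classifier. The allowed ports are an
# assumption (TCP 80/443 for the web service, ICMP type 3 for PMTUD).

from dataclasses import dataclass

@dataclass
class Packet:
    proto: str           # "tcp", "udp" or "icmp"
    dst_port: int = 0    # destination port, for TCP/UDP
    icmp_type: int = -1  # ICMP type, for ICMP

def web_frontend_policy(pkt: Packet) -> bool:
    """Return True if the packet should be allowed through to the web VIP."""
    if pkt.proto == "tcp" and pkt.dst_port in (80, 443):
        return True                 # client connections to the web service
    if pkt.proto == "icmp" and pkt.icmp_type == 3:
        return True                 # ICMP unreachable, needed for PMTUD
    return False                    # everything else, including the UDP
                                    # "DNS" flood aimed at the web servers

# The malformed ~1,300-byte UDP packets described above would be dropped here
# instead of filling a load balancer's state table:
assert web_frontend_policy(Packet("udp", dst_port=80)) is False
assert web_frontend_policy(Packet("tcp", dst_port=80)) is True
```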

The firewalls didn't help because they went down quite promptly.

The enterprises were not accustomed to attackers who were really, really focused on the efficacy of their attacks and who kept changing attack methodologies; they had to get used to that. There was even one instance in which this DDoS attack caused a database problem: a couple of banks on the west coast went down as a result of these attacks and came back up with limited services, and one of the services that was not available was login to online banking. What we believe happened was that the attackers were hitting the login subsystems so hard and so continuously that it caused problems on the middle-tier or back-end databases that contained the user access credentials; they fell over and were corrupted, and they obviously didn't have a high-availability and recovery strategy. So this is an interesting case where an attack against availability seemed to actually cause a problem with integrity.

And so the main takeaway for enterprises is that most of them hadn't thought about DDoS attacks and availability problems in general, or if they had done so, they had really only paid lip service to them. This was a real wake-up call for them.

Almost all the spending and attention that's paid to security is on confidentiality and integrity, because that boils down in most cases to one form or another of encryption. Availability is hard. You can't fake availability. You can't say, oh, well, I'm PCI DSS compliant, I have this anti-virus, that network agent, and I'm all good. You can't fake it. Either the website is up or it's down. The DNS server is up or it's down. Availability is hard. And that's why a lot of people, I think, don't spend a lot of time on it, because they see it as a very difficult problem.

It is difficult. You have to have the right people. You have to have the right processes, the right relationships, and so forth. But we're not doomed. The story of these attacks is that initially the attacks were very, very successful. The enterprises couldn't deal with them. The enterprises were being attacked, and some of the ISPs and MSSPs that were trying to defend against the attacks didn't do such a good job either. But over time these organisations evolved operationally. They figured out how to be agile in their response. They figured out how to short-cut bureaucratic communications channels so that they could get the routing team in touch with the security team, in touch with the webmasters, in touch with the DNS admins, and work together as a cross-functional virtual team. They learned that when they were going to offer DDoS mitigation to one of their customers, they had to take the time to figure out what kind of servers the customer had, so they could make the right choices and use the right tools and methodologies to defend those specific servers and services.

And we learned, finally... I work for a vendor of security solutions, we sell boxes, we don't rent things, we sell boxes, and as a security vendor I am here to tell you that 90% of real security does not consist of stuff that you buy. It consists of things that you do. That's why it's hard. Writing cheques or cutting POs is easy. But achieving real, measurable, metricable security requires brains and a lot of elbow grease, and that's why we see so little of it on the Internet today.

That's all I had. Questions, comments?

SHANE KERR: Thank you, I thought that was a very interesting presentation. Do we have questions?

AUDIENCE SPEAKER: Ralph. A question on the DNS part of the attack: was it a reflection/amplification attack, or were the malformed DNS requests actually coming from the bots?

ROLAND DOBBINS: The question was whether the DNS aspect of the attack was a reflection/amplification attack or whether it was UDP packets emitted directly from the botnet. These were UDP packets emitted directly from the botnet. This was a high-BPS attack: it started out at 57 gigabits per second and topped out at about 100 gigabits per second in phase 4, 30 million packets per second in phase 1 and about 40 million packets per second in phase 4, so it was very high PPS and BPS. But there was no DNS reflection/amplification involved. These were malformed DNS packets that were fired at web servers. Now, during the latter part of Phase 3 and into phase 4, what we did see that was interesting was SYN flooding on TCP 53 against authoritative DNS servers, but these were not the authoritative DNS servers of the institutions that were being attacked. These were the authoritative DNS servers operated by their ISPs. And so we believe that this was a diversionary tactic, where the attackers were trying to basically force the ISPs and MSSPs to spend time defending their own DNS servers, to take resources away from defending their customers. But this was an uncharacteristically large attack that did not involve some form of UDP-based reflection and amplification.

AUDIENCE SPEAKER: What was the malformation in that kind of packet? Did it trigger any kind of bug in the system, or was it...

ROLAND DOBBINS: What we see with most of these attackers is that most of them don't know a lot about TCP/IP and they do strange things; sometimes there is a certain logic behind these things, sometimes there is not. This particular set of malformed UDP: these were large packets, larger than any you would normally see. They started out at about 1,300 bytes, and by the later phases they had moved to 1,403 bytes in size. They started out with aspects of a AAAA query in an IPv4 packet, but they didn't have all the rest of the normal DNS fields, so we don't know exactly why they chose that. They might not have known why they chose it. But the lesson here is that even though what they did was very stupid, crafting these weird malformed DNS packets and hurling them at web servers, initially it worked, because the defenders were so unprepared. So, good question. Thank you.
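
(A rough, hypothetical detection heuristic for the traffic pattern described above: large UDP payloads that look vaguely like DNS but are aimed at web servers. The thresholds and checks are assumptions for illustration, not any vendor's actual detection logic.)

```python
# Rough illustrative heuristic: flag large UDP payloads that claim to be DNS
# but are aimed at web ports, or whose fixed DNS header does not make sense.
# Thresholds and checks are assumptions, not any vendor's detection logic.

import struct

WEB_PORTS = {80, 443}

def looks_like_malformed_dns_flood(dst_port: int, payload: bytes) -> bool:
    # Any sizeable UDP payload aimed at a web port is already suspicious.
    if dst_port in WEB_PORTS and len(payload) >= 512:
        return True
    if len(payload) < 12:
        return False                 # too short to even hold a DNS header
    # Fixed 12-byte DNS header: id, flags, qdcount, ancount, nscount, arcount.
    _, flags, qdcount, ancount, nscount, _ = struct.unpack("!6H", payload[:12])
    is_query = (flags >> 15) & 1 == 0
    # A "query" with no question, or carrying answer/authority records, has
    # some elements of DNS but is malformed, as described in the talk.
    return is_query and (qdcount == 0 or ancount > 0 or nscount > 0)

# Example: 1,300 bytes of zero padding sent to port 80 would be flagged.
print(looks_like_malformed_dns_flood(80, bytes(1300)))   # True
```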

AUDIENCE SPEAKER: Andrei Robachevsky, Internet Society. Do you have an indication of the impact of this attack?

ROLAND DOBBINS: Like financial impact? We don't have direct numbers, but in a separate related event that's running in parallel with this conference, somebody mentioned that some European banks have decided that online banking is not that important to them. I can tell you that in Asia and North America the online banking properties are more important than the brick-and-mortar bank branches at this point, because people do everything online. They pay their bills online, they get their pay cheques, they have to validate things, and so, for these financial institutions and the couple of European institutions that were attacked, we know it cost them a lot of money. It cost them money in terms of opex, and it cost them some intangible amount of brand reputation; a lot of their customers were really angry. In the case of one regional bank that was attacked briefly in phase 4, this bank had conflated the Internet transit that was used for their online web properties with their ATM cash machine systems, which were tunnelled via IPsec over the Internet, as well as their credit card verification. They got hit, their infrastructure wasn't protected, so the attackers attacked the network infrastructure and knocked it over, and for several hours on two successive days, hundreds of thousands of customers of this particular bank could not pull money out of cash machines, could not pay for petrol at the garage, could not use their credit cards in department stores. So we don't have a specific quantity for it, but we know that the opex was huge, that there was some amount of help desk and other expenses as well, and we know that the brand reputation took a hit. We don't know overall what it cost, sorry.

AUDIENCE SPEAKER: I was not so much asking for a financial indication, I know that's hard. But in terms of availability: how much time? What percentage of services was not available?

ROLAND DOBBINS: So, in the first three phases the attackers would announce their targets ahead of time and the windows in which they were going to attack, and you can look at one of the several web availability services that are out there on the Internet, and you can compare the list of financial institutions that the attackers said they were going to attack in their pastebin.com posts against those who suffered downtime. So that information can be publicly derived to some degree using public sources. There were a couple of financial institutions who basically didn't lose a single packet during this. They were financial institutions who had been hit before and had invested in the people and the training and the resources and rehearsals to be able to defend against these kinds of attacks. So you can take the list of sites that were going to be attacked, combine it with a couple of publicly available website availability indexes, and kind of infer that.

SHANE KERR: This is going to have to be the last question.

AUDIENCE SPEAKER: Mikael Abrahamsson. We have discussed these kinds of things before. Do banks and so on have the equivalent type of operational forums where they can share best practices?

ROLAND DOBBINS: That's a very interesting question. It's a very good question and thank you, I'll send you a cheque later. So, a big part of the problem is that even though these financial institutions have online properties, and also, you know, back-end stuff like cheque validation and things that we as consumers don't see that depend on the Internet, most of them are not engaged with the global operational community. There are some who are. They are the folks who had been through this kind of thing before. They had staffed up and hired people with the right skill sets, made sure they participated in NANOG and RIPE and APNIC and all that, and had joined some of the closed, vetted operational security groups as well. And those were the financial institutions that did very well. You know, just like we have seen, there are organisations who are very active in these various groups, like Netflix, for example; they are a content provider, right, a very big content provider. Well, these banks are essentially ASPs and online trading houses, and so they are very much like an eBay, like a PayPal, like Netflix in that regard. They should participate. Most of the ones that were affected were not participating. We have seen a great interest in participating since that time; some of these institutions have started to build the right kinds of teams with the right skill sets and relationships, and so we're starting to see increased participation. They had a back channel through the FS-ISAC, which is sponsored by the US Government, but they were sharing lists of IP addresses, 200,000 IP addresses that had nothing to do with anything they wanted to mitigate, so a lot of that stuff was not effective at first; it grew more effective later on.

AUDIENCE SPEAKER: Do you see similar requirements from PCI and so on when it comes to this kind of mitigation?

ROLAND DOBBINS: Something I didn't call out, but it's in the slides here, is that there is not currently an availability component to the PCI DSS, and I personally believe that that's a great oversight, because Visa's and Mastercard's revenues are directly affected if their constituent member sites are not able to process credit cards, and those individual retailers themselves are negatively affected when they are not able to process credit cards or take orders over the Internet. So, I think that this is a tremendous oversight in PCI DSS, that availability is not included, and I would hope at some point that that would be rectified.

AUDIENCE SPEAKER: Do you think that they will come to these kinds of forums before they write the standard, or is it something that is going to hit ISPs out of the blue?

ROLAND DOBBINS: You mean like the Mastercard and Visa people, or...?

AUDIENCE SPEAKER: Are they basically going to say to all the banks in the world that are doing this, you have one year to comply with this, and will this standard come out of nowhere for us?

ROLAND DOBBINS: Some of the banks do attend NANOG in North America. I don't know of any financial institutions that attend APNIC or APRICOT, and I don't know of any financial institutions who attend RIPE; I hope I'm wrong, by the way. What we are seeing is greater engagement in some of these closed groups. There was an online crime symposium meeting this last April or May of this year, and there was a lot more talk about the need to engage with the operational community. So, I think it's just going to be a long process. But I think it has started. I'm sorry I can't give you a better answer than that.

SHANE KERR: Thank you very much.

(Applause)

SHANE KERR: All right, our next speaker is Geoff Huston, and he is going to be talking to us about some research he's done into a topic near and dear to my heart, DNSSEC. He is always a fantastic speaker, and I think this is part of an ongoing series of studies that he is doing, and I think he has developed a really awesome technique. I talked to him yesterday and I said it's kind of like Galileo discovering the telescope and pointing it at all these things in the sky and discovering all of these wonderful things that no one knew anything about.

GEOFF HUSTON: Thanks for that and good morning. I play at APNIC and sometimes I play with the DNS.

I was at a meeting of the RIPE folk a couple of years ago when we all decided that it was time to send a strongly worded letter off to the IANA saying we should sign the root of the DNS with DNSSEC signatures, because it's just worth doing now.

And of course the letter was sent off, things happened, and the root of the DNS was signed. This is cool, and I have seen a number of presentations looking at how many zones have been signed with DNSSEC. And again, that's cool. There are numbers all over the place. Then I thought, you know, how many folk use it? How many of you guys use resolvers that do the DNSSEC dance? You might think you do. But we can figure this out. Because there are some real questions around there that I thought were kind of cute. Who is actually using DNSSEC validation? What's it costing you? What's it costing the servers? What's the additional query load that comes through? And, you know, for the perverse-minded among us, including me, because you know I'm perverse, what happens when you stuff up your signatures? How much will the DNS just constantly thrash about, going there is a right answer here somewhere if only I queried this chain in just the right way? We thought these were pretty good questions. So then we figured out how to do this experiment. What we actually want to do is get all of you, well, a representative sample, to actually do this test. So the best way to do this is to embed it in something like Flash, because that's a good language. And the reason why Flash is a good language: it's not, it's a crap language, don't use it. But the reason why, for our purposes, it's a better language than anything else is because Google support it in their online ads behind images. If you want to make the image bounce around the screen or something like that, you do this in Flash. So, we just did this in Flash in an ad that we hoped was as bland as we could possibly get it, because the other thing about Google and their ad system is that you only pay them if the user clicks. If you see any ad that even looks like mine, don't click on it, because I don't want to pay them money if I don't have to, and I get more impressions. So this code just executes as soon as it comes up on your screen.

So, you know, the DNS is kind of interesting. The view that most folk have of the DNS is, you know, I'm the client, there is the resolver, there is the name server; you just bang through the question, the answer comes back, isn't life wonderful? No, it's not. Because almost everyone decides that their favourite way of perverting DNS resolution is the best. So this is just a small subset of the weird ecosystem which is the DNS resolver path, and, you know, this is just a cloud of crap. You can't see inside it, because queries have no history. I have no idea where things are. And I'm there over on the right: I am the server and you're the client. So, what I really want to do is map you, the client, to the resolver that ultimately makes the query. That's what I'm trying to get, that's what I'm trying to combine. So I can't talk about all resolvers, I can't see them. I can only see the resolvers that end up doing the queries. And it's really hard to talk about DNSSEC because sometimes it's the resolver behind the resolver that's doing it, and the resolver you think is doing it is actually not doing it at all. So it's a lot easier to talk about end users. It's always easier to talk about end users.

So the answer: we did this in May and we co-opted an unsuspecting 2.5 million folk all over the network, my thanks to them, whoever they might have been. And fascinatingly, around 8 percent do DNSSEC validation, really do it. Now, contrast this for a small second with the number of folk who do IPv6: 1.5 percent. And we're busy patting ourselves on the back, isn't it wonderful, v6 is everywhere; well, DNSSEC is actually more everywhere. That's an amazing number. But of course, if something isn't well signed, you get back SERVFAIL, and if there is one thing that DNS resolvers like, it's giving you an answer. So, you normally have a few resolvers in your resolv.conf file, and if the first one sends back SERVFAIL because the signature is busted, you don't take no for an answer. You go and ask the next one and the next one until you get an answer. And 4 percent of you do that: you flick from DNSSEC signed and validated into, hell, I don't know about this DNSSEC crap, I'm just going to give you an answer anyway. So another 4 percent of you do that. God knows why you bother. And the remaining 87 percent just ask for A records.
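
(If you want to check which camp your own resolver falls into, here is a quick sketch using the dnspython library. A validating resolver sets the AD flag on answers for correctly signed names and returns SERVFAIL for badly signed ones; the resolver address and test name below are placeholders to substitute, not values from the talk.)

```python
# Quick sketch (dnspython) to see whether a recursive resolver validates
# DNSSEC. The resolver IP and the test name are placeholders, not from the talk.

import dns.flags
import dns.message
import dns.query
import dns.rcode

RESOLVER = "192.0.2.53"        # replace with the resolver you actually use
SIGNED_NAME = "example.com"    # replace with a name you know is DNSSEC-signed

query = dns.message.make_query(SIGNED_NAME, "A", want_dnssec=True)
response = dns.query.udp(query, RESOLVER, timeout=3)

if response.rcode() == dns.rcode.SERVFAIL:
    print("SERVFAIL: the resolver may be validating and the chain is broken")
elif response.flags & dns.flags.AD:
    print("AD flag set: the resolver validated the answer")
else:
    print("No AD flag: the resolver is probably not validating")
```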

So, where are you? Your IP address tells me where you are, and that's the list of countries ranked by that first column there, which is the percentage of clients who are actually using DNSSEC-validating resolvers. So Sweden is right up there: 77 percent of the folk in Sweden end up using resolvers that do the DNSSEC validation dance. That's cool. You can read the list as well as I can, and that's a pretty interesting list. Vietnam, Jamaica, Barbados, Ghana, etc. And the occupied Palestinian territory. This is bizarre. This is not GDP-based. This is some sort of random metric, and I'm kind of curious about that, so I'll show you an answer. I'll show you my answer. You know, how do you get a map that looks like that? Because that's not a normal map. Where is France? Where is China, etc.? There are some strange countries up there that are doing DNSSEC. And so I got to thinking that maybe it's not you guys at all. You are just taking the lazy way out, and a whole bunch of you have simply aimed everything you see in the DNS at all 8s, because that's Google's problem and they can just do it anyway, and Google, in March, announced that they were going to do DNSSEC validation.

So I looked at the resolvers and the clients. What's Google's market share of DNSSEC validation? Well, I can go further: what's Google's market share of DNS resolution? 7 percent of the world sends their queries to Google. I don't know about you, but if I had a realtime view of everything 7 percent of the users of the Internet do all the time, there is nothing you can tell me I don't already know. Nothing. I know everything. Because 7 percent market share, as any good statistician will tell you, is the lot. You know everything, and Google have 7 percent market share.

Now, 5 percent of you simply take that answer and go, that's cool, that's all you are ever going to do, and another 1.9 percent say, SERVFAIL, I'll use a different resolver, God knows why, and the other 92.8 percent don't use Google.

So now I can go back to this list of countries and find out who is using Google. Vietnam: 96 percent of the folk who validate do it through Google. The list is there. Brazil, didn't they have a problem with America? But there is a huge amount. And the occupied Palestinian territory. I'm going, that's a bit curious. And I can even tell you networks, because origin AS is just as easy as country, and Com Hem in Sweden, does that mean "come home"?, 98 percent of their clients do DNSSEC validation. They have got the users, they are validating. That's cool. Number 2 is in Colombia. You'll notice Linkem SpA in Italy send all their customers to Google, because I suppose it's easy. And, of course, VNPT in Vietnam does exactly the same, I suppose because it's cheaper. In Azerbaijan, they send all their customers to Google. This is curious and I'm getting a bit interested now. So in May, around 5 percent of the world used Google all the time. Another 2% used it some of the time. And then something happened. Because this guy sort of wandered off to Hong Kong with a whole bunch of revelations about how spying was institutionalised, etc., and things we didn't know. And I thought, did that affect people? It did. Google's market share dropped. And I'm going, wow, can I see that? Well, yes, because I did exactly the same thing in September and I got a subtly different list. So who turned it off? Well, let me tell you. Nicaragua had a problem with this. Understandably the occupied Palestinian territory, Bolivia, you can read the list. Those are the folk who said enough is enough, let's just turn this off, we are not going to use these guys any more.

I'd be amazed if the US was on this list but it's not obvious, so that's cool. Who turned it on?

Well, these are the countries that actually increased their use of Google over the same period, which again kind of surprises me, but there you go, including Argentina.

I digress, because I was talking about something else; I was on DNSSEC. That was sort of amusing.

I'm doing DNSSEC validation, yes? And we are kind of trying to figure out, if you do this, does it take a huge amount of time? The problem is that measuring time for clients is really hard, because I'm sitting on a rented machine in Dallas, because it's the cheapest rented machine on the globe, and I sort of look at the rest of the world, and understandably, the further away you go from Dallas the longer it takes to get there, because that's just geography and physics, etc. So absolute measurements don't make a lot of sense here, so I start looking at relative measurements. I look at the amount of time it takes for you as a client to ask me the DNS questions, and you ask me a whole bunch of stuff and you have a good time, and then you ask me that HTTP GET, and the theory goes that when stuff isn't DNSSEC signed I'll take that as a unit of 1, one unit of time, whatever it takes, and because you are asking me a number of things, one unsigned, one signed, one badly signed, the signed and badly signed stuff should take longer. So I should get a graph that looks exactly like that, whereas I get a graph that looks exactly like that. That's stuffed, isn't it? So that technique just doesn't work. Why? Because Flash really is a crap programming language. When you say do A, B and C, Flash just kind of goes, oh, bugger it, I'll do anything I want. So, that didn't work. So, maybe I should do this a bit differently, maybe I should use a few more colours and do a few more graphs like that. Apart from seeing that most versions of DNS resolver libraries have a one-second timer, because there is packet loss at one second, even that just isn't that informative, and there is some noise down the bottom that I still don't understand.

Let's try something different. Let's try cumulative time distributions. Fascinatingly, 20% of the world have a problem resolving the DNS. That top blue corner up there says 20% of the world can't even get an unsigned name with one query. What a fantastic Internet we have, when one in five can't even resolve a name. I mean, this is busted. So even the unsigned doesn't work well. The signed kind of works, the badly signed works really badly, as you'd expect, and even if you look at that first half second, that blue line at the top is more interesting than all the other lines. Standard DNS doesn't work for a huge number of folk. That is so weird.

So what can we say out of all of that? DNSSEC takes longer. So what? And DNSSEC that's badly signed takes even longer. Yeah, okay. So, there are a few other humps around there that I am still curious about, but the basic thing is, it takes longer. We all know that. What about the other side? I'm serving a domain and the guy, whoever is running it, or girl, decides, I'm going to sign it. How much more traffic are you going to get? So, the theory, and the practice, is: if I'm just serving an unsigned domain, I should get one query. If I'm serving a DNSSEC domain, I should get three queries: the domain name, the DS and the DNSKEY records. If it's badly signed, you will give me more queries.
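
(A small sketch, again using dnspython, of the extra questions a validating resolver has to ask for a signed name compared with the single query for an unsigned one; the names are placeholders, and the sketch only constructs the queries to show what ends up hitting the authoritative servers.)

```python
# Sketch of the additional queries a validating resolver issues for a signed
# name, versus the single A query for an unsigned one. Names are placeholders;
# this only builds the messages to show what hits the authoritative servers.

import dns.message

ZONE = "example.com"        # placeholder signed zone
NAME = "www.example.com"    # placeholder name inside that zone

unsigned_lookup = [dns.message.make_query(NAME, "A")]

signed_lookup = [
    dns.message.make_query(NAME, "A", want_dnssec=True),       # A plus RRSIGs
    dns.message.make_query(ZONE, "DNSKEY", want_dnssec=True),  # the zone's keys
    dns.message.make_query(ZONE, "DS", want_dnssec=True),      # DS from the parent
]

print("unsigned:", len(unsigned_lookup), "query")
print("signed:  ", len(signed_lookup), "queries before validation can complete")
```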

I started looking at the query counts out of those 2.6 million experiments, and started looking at who queried me for what. After a bit of mucking around, the basic thing is: if you sign your name, you can expect around six times the query level. If you are getting 100,000 queries a day, expect 600,000 queries a day if it's signed. If it's badly signed, folk hate taking no for an answer, and they don't just double it, they go for more; you can expect to see around 15 times the query load. Okay. So, the authoritative servers certainly take a roasting out of this. There is a huge amount of additional load.

But let's also look at traffic. Because, of course, a query and an answer is 160 to 200 octets most of the time, but DNSSEC decides to give you a whole bunch of signatures, so instead of a couple of hundred octets, you can be looking at 1,600 octets in a full DNSSEC transaction. So the blue is unsigned, the green is signed, the red is badly signed, and life gets bad. A whole bunch of maths later, what you tend to see is 13 times the traffic if it's well signed, and 31 times the traffic if it's badly signed. If your signatures stuff up, you can expect a torrent of traffic.

So, if you are planning to sign your domain name, you need a lot more server "foo", because the metrics say that as long as everyone manages their signatures brilliantly, you are going to need at least 15 times the capacity, and if they stuff it up, and they will, you will need about 30 times the capacity.

Could it be better? Why? To do this I have deliberately defeated caching; everything comes to me, because I want to see you. Maybe caching would make it better, maybe.

On the other hand, maybe caching doesn't help, because when stuff is badly signed, you don't cache bad. You try very, very hard not to cache bad. You try to say, that's bad, I'm going to forget it. And I only used one name server, and, when things are bad, some resolvers like to query all of the name servers in all of the parent zones, all of the time, and things get pretty bad pretty quickly. It could be worse than this.

Where are we? Something to think about: a couple of hundred octets of query gives you a couple of kilobytes of answer, all using UDP. The folk who do this stuff have been there long before you, and this is really now commonplace. Does the 15-year-old BCP 38 standard have any traction with you guys who are operating networks? Absolutely none. This isn't working. Do we think about DNS over TCP again? How many servers actually allow you to do DNS over TCP? What's the failure rate and what's the impact on the authoritative server? We will talk about this in the DNS Working Group later this week.

Google and market share: 1% of visible resolvers do 58 percent of the market, and one in particular, and that family, do 9 percent of the market. This is a very, very dense marketplace. And the other thing, too, about this is that the tail is bizarre. There is an awful lot of really, really old resolvers out there creaking along.

I actually think the standards in DNSSEC are busted. SERVFAIL is just a stupid answer. The signature is invalid, so I'm going to claim as a server that I am completely lost? It's wrong, and the client says, I'll just try another server then. That's totally the wrong answer. The DNS got DNSSEC validation signalling kind of wrong, I think, and we could have done something better.

There is talk in IANA of rolling the root key. You should get a little bit worried about this. Because they say it's okay, RFC 5011 says even if you are not watching closely it will just work, and everyone has upgraded their DNS resolvers to the latest and greatest. Nobody asks for A6 records, do they? There is an awful lot of really old shit inside the DNS resolver population, and rolling the root key brings it out and is going to make it behave really badly. We should think about this.

The other thing, too, is that the DNS is remarkably good at generating completely unused nonsense. Because even though only 8 percent of you even try to do DNSSEC validation, 80%, ten times that number, turn on DNSSEC OK. Give me kilobytes of data, I'm not going to do anything with it. Why? Why? I have no idea why; you know, it comes by default, shipping all these keys around and then just ignoring them. So, it's just weird. And the other thing, too, is that I see more queries with the DNSSEC OK bit set going to the signed zones. This is spooky. This is the DNS knowing in advance that the domain was signed and sending you the DNSSEC OK bit. That's weird. I have mentioned Google more than enough. If I haven't hammered the message home by now, I should have. The DNS is a remarkably good source of information about what you and I are doing.

And that's about all from this. Thank you very much.

(Applause)

CHAIR: Thank you, Geoff. What a wonderful tool the advertising industry offers us, and what great research.

AUDIENCE SPEAKER: Mikael Abrahamsson. What I wanted to say about Sweden being up there is that the Swedish ISPs were paid by the .se domain entity to actually implement DNSSEC resolvers. This happened five or six years ago, so that is why it's at 80%. So, basically, if you want something done quickly, pay the ISPs, is what I can take back from this.

GEOFF HUSTON: Are you listening IPv6 folk? Are you listening?

AUDIENCE SPEAKER: So, on that note, part of that is actually happening... [unclear] is driving that as well. So I think...

AUDIENCE SPEAKER: SIDN. Geoff, I have seen this presentation a number of times and it's evolving, I like that, but ever since I first saw it I have had some doubts about the figures, and the reason for that is that when I see the list, I expect some correlation with countries where DNSSEC is actively promoted and countries where, well, we don't see any DNSSEC deployment. And I see a bit of an inverse picture here. So, the question I have is: have you ever correlated your measurement method using Flash against the operating systems that are deployed in countries that don't support Flash?

GEOFF HUSTON: The first thing I should note, I suppose, in comment, is: this industry doesn't listen to us an awful lot. No matter what we say, the folk who do things, it's just a different crowd. But then there is this issue of Flash and not Flash. In some ways, all Flash does is deliver the URL into the browser engine, and at that point gethostbyname and DNS resolution take over, and it doesn't matter what language you use to load that URL into the browser engine. So, in some ways, Flash is relatively agnostic. There is only one sector where this counts, and that's Apple, who have consistently in their iOS products said Flash is evil, bad, whatever, not going to go there. But thankfully Android haven't. So, in the mobile industry I undersample that, right, but the issue then is, I don't see them as not doing DNSSEC, I don't see them at all. They just don't get counted in the numbers, so it doesn't push the percentages up or down; it's the same percentage relatively. I'm just not counting iPhones. We have played around with doing this in JavaScript. The problem is trying to get JavaScript inside ad networks; those ad networks are nowhere near as prolific as Google's. Google can deliver hundreds of millions of new IP addresses through an ad campaign; very few other ad management systems have that reach. So if you want to sample all of the globe, you are kind of restricted to a very small set of folk, and basically then you get into Flash as being the only way of doing it.

AUDIENCE SPEAKER: But a second question, if I may: if you were to correlate where DNSSEC is deployed at the authoritative level and where most validation is done, wouldn't that be some evidence of whether or not DNSSEC works? Because if validation is only done in those countries that don't visit authoritative servers that do DNSSEC, DNSSEC would be a failure, right?

GEOFF HUSTON: Well, understanding which users go to which domain names is probably a question best directed at Google. They know a lot more about that than me.

JIM REID: Great talk, Geoff, as usual. Thanks. I have got a question and an observation. The observation first: you talked about the DO bit setting. As you probably know, and I think hopefully most of the people in the room know, this is a feature of BIND's implementation. It sets this bit whether or not it is doing validation, and that's been in BIND 9 now for a number of years. It's a strange way of going about things, but hey, that's where we are.

Anyway, my question is about you mentioning earlier that whenever an authoritative server has got DNSSEC switched on, its query rate goes up by a factor of 4 or so. Have you any insights into why that's happening?

GEOFF HUSTON: I'd expect the query rate to go up a certain amount because the resolver has actually got to ask for the DNSKEY RR as well as the original RR, so I have got double the query rate. It may also be doing the parent; that's where the DS as well as the DNSKEY record comes in, and that's why the query rate starts to go up. But the next issue is the extended amount of time taken to do those additional queries and validation before reporting back to the user down that DNS resolver chain. Often the user is incredibly impatient in their software. They are going: I asked, I asked, I haven't got an answer, one second has elapsed, I'll ask again. So they are filling up the pipeline with queries because the original one is taking some time for validation before it comes back to the user. It's that combination of additional queries for additional resource records and users running, these days, remarkably aggressive time-out values in their own resolver libraries that simply adds a few more queries into the chain, as far as I can see.

AUDIENCE SPEAKER: I would have thought those resolver libraries would be talking to the local resolver name server. Anyway, I think it would be interesting to look into this in a bit more detail.

AUDIENCE SPEAKER: Randy Bush, IIJ. Just note that impatient user just upped her likelihood of getting a monkey in the middle DNS attack.

GEOFF HUSTON: That's very true. Yes.

SHANE KERR: This is Shane Kerr from ISC. I was involved with BIND for a while. So, as far as why the DO bit is set even though there is no trust anchor configured or anything like that: that's based on an old interpretation of the specs, which basically say somewhere that the DO bit just means you won't fall over if you get DNSSEC data, so we turn it on based on that interpretation. And while it is possible for us to change that, it's been that way for more than a decade, so...

GEOFF HUSTON: While nobody had their domains signed, it was kind of innocuous in some sense, Shane, and I was just looking at the change that happens when all of a sudden a domain is signed and the DO bit actually triggers a bunch of data coming back. That's what I was looking at.

SHANE KERR: There have been people who argued that we should only set it if indeed we have a trust anchor, and to me that's a very compelling argument. It hasn't been super high priority, but as we see more DNSSEC adopted, it may be something we need to revisit. And this next thing may be a little too detailed for this audience, but I'll go there anyway. There are actually two separate timers inside of, I think, most recursive resolvers, and certainly BIND 9. The resolver will continue to resolve the name even after replying back to the stub resolver saying we can't figure anything out, so that the next time somebody queries, it's available.

GEOFF HUSTON: I have noted this behaviour.

SHANE KERR: And additional queries won't cause additional load, because servers coalesce similar queries. If you are impatient and you send the same query again, it won't cause the resolver to act any differently, except that it will maybe send you an extra answer.


JIM REID: Just to come back to Shane rather than to Geoff: it would be very nice if the DO bit had some kind of configuration option. It's not as if BIND 9 is short of configuration options. Basically, it's a question of toggling that behaviour. Just a question.

(Applause)

CHAIR: Let me invite our last speaker for this session, with a presentation about fly-by spammers, how they use the routing system, and how we are going to track them. Pierre-Antoine Vervier.

PIERRE-ANTOINE VERVIER: So, hello everyone. Today, I will be talking about spammers abusing the Internet routing infrastructure to send spam from stolen IP space, and so this will be about a tool called SpamTracer which we developed.

This work all started from the conjecture that spammers would basically use BGP hijacking to send spam from stolen IP space in an effort to remain untraceable. This was described a few years ago, in 2006 and 2007, in two research papers, and in those papers the authors described short-lived routes correlated with spam, where short-lived in this case meant lasting less than a day.

Next to that, there were anecdotal reports on mailing lists, but nothing more than that.

Well, the potential effects of such spammers are that, first, we can see attacks launched from hijacked networks being misattributed, basically because the hijacker is stealing an IP identity, and also, spam filters heavily rely on IP reputation as a first layer of defence, so this would seriously affect their effectiveness.

When we started this work, one question we wanted to answer was: are these fly-by spammers a real problem, or is it just some kind of myth? So, just briefly, some words about BGP hijacking; I know that everyone here is pretty familiar with that.

Basically, it's caused by the injection of erroneous routing information into BGP, and it's possible because there is still currently no widely deployed security mechanism to prevent it. There is route origin validation, which is more and more deployed, but it doesn't solve the whole problem either.

The effects: well, we can have either blackholing of the victim network or a man-in-the-middle attack.

For the explanation: well, we know that BGP hijacks occur regularly in the Internet, but most of those incidents can be attributed to routing misconfigurations or operational faults, not to mention the famous case of the hijack of YouTube's network by Pakistan Telecom. What we are interested in is seeing whether malicious BGP hijacks are performed in support of other malicious activities.

And so our objective was really to validate or invalidate, on a large scale, this conjecture about fly-by spammers and, if it turns out to be real, to assess its prevalence.

To do that we have developed a tool called SpamTracer, in which we try to extract abnormal routing behaviour to detect possible BGP hijacks.

The assumption behind SpamTracer is simply that when an IP address block is hijacked for spamming, a routing change is observed when the block is released by the spammer, who wants to remain stealthy.

And so the method that we use is to simply collect BGP routes and IP traceroutes towards spamming networks just after spam is received, usually within an hour, and to keep doing that for several days after the spam is received, and then we look for the routing change from the hijacked state of the network to the normal state of the network.

So, just going through the system architecture: the input of the system is a live spam feed which comes from spam traps maintained by Symantec. We also look for spam coming from bogon IP prefixes, because these are not supposed to generate any traffic, so it's automatically suspicious to see spam from them. We extract the IP addresses of spammers from this feed and then we monitor each network for one week after spam is received, so we collect IP traceroutes and BGP routes, the BGP routes being obtained from live BGP feeds. From all this data, we extract some BGP and traceroute anomalies that we then use to identify hijackings.
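
(A minimal sketch of the comparison SpamTracer is described as performing, assuming we already have a series of routing observations for a spamming prefix; the data structures and logic here are illustrative, not the tool's actual code.)

```python
# Minimal sketch of the routing-change comparison described above, assuming we
# already hold periodic BGP observations for a spamming prefix. Data structures
# and field names are illustrative, not SpamTracer's actual implementation.

from typing import List, NamedTuple

class Observation(NamedTuple):
    day: int            # days since the spam was received
    routed: bool        # was the prefix announced at all?
    origin_as: int      # origin AS seen in BGP (0 if not routed)
    upstream_as: int    # first upstream AS on the path (0 if not routed)

def looks_like_flyby_hijack(obs: List[Observation]) -> bool:
    """obs[0] is taken just after spam arrives; the rest follow for ~a week."""
    at_spam_time = obs[0]
    if not at_spam_time.routed:
        return False
    later = obs[1:]
    # The "hijacked state" differs from the state seen afterwards: the prefix
    # is withdrawn, or comes back with a different origin/upstream pair.
    withdrawn = any(not o.routed for o in later)
    changed = any(
        o.routed and (o.origin_as, o.upstream_as)
        != (at_spam_time.origin_as, at_spam_time.upstream_as)
        for o in later
    )
    return withdrawn or changed

# Example: announced via a rogue upstream when the spam arrives, withdrawn
# two days later (AS numbers from the private/documentation ranges).
history = [
    Observation(0, True, 64500, 64666),
    Observation(1, True, 64500, 64666),
    Observation(2, False, 0, 0),
]
print(looks_like_flyby_hijack(history))   # True
```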

So, now the results: a detailed analysis of data collected from January to July 2013 led to the identification of 29 hijacked address blocks. As you can see, the number of events varies a lot across the months.

In the reports about fly-by spammers in the research papers I mentioned earlier, they were talking about events lasting at most one day. In the cases we observed, events lasted up to 20 days, with most of them lasting less than five days.

So, from the observed cases, we extracted some kind of hijack signature. In this signature, basically, the hijacked networks were dormant address blocks, that is, by the time they were hijacked these networks had been left idle by their owner; they were advertised for a short period of time; and they were advertised from an apparently legitimate AS but via a rogue upstream AS. This kind of signature was already introduced in a previous RIPE presentation, and here, basically, you see that it actually applies in practice.

We identified the idle intervals, the times the address blocks were left idle before they were hijacked.

Hijack durations were between one day and 20 days, mostly less than five days. Interestingly, the rogue upstream ASes that were used were hijacked too.
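
Put together, the signature just described might be encoded roughly like this; the idle-interval threshold is an assumption for illustration, while the 20-day bound comes from the durations observed above:

    from dataclasses import dataclass
    from datetime import timedelta

    @dataclass
    class CandidateEvent:
        idle_before: timedelta            # how long the block was unannounced before the event
        announced_for: timedelta          # how long the suspicious announcement lasted
        origin_as_matches_registry: bool  # origin AS looks like the legitimate one
        upstream_as_is_new: bool          # first-hop upstream never seen for this prefix before

    def matches_flyby_signature(ev: CandidateEvent) -> bool:
        return (ev.idle_before >= timedelta(days=30)        # dormant address block (threshold assumed)
                and ev.announced_for <= timedelta(days=20)   # short-lived announcement
                and ev.origin_as_matches_registry            # origin left untouched...
                and ev.upstream_as_is_new)                   # ...but routed via a rogue upstream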

To further illustrate the routing and spamming behaviour of those fly-by spammers, we can see here a few case studies, a few address blocks. In the figures, the blue lines correspond to the time period the address blocks were routed, and the coloured dots correspond to the amount of spam received from those networks. For example, we can see that the first address block at the top of the figure was routed for only one day, during which about 2,000 spam mails were received at our spam traps.

So, this figure really highlights, first, the strong temporal correlation between the suspicious BGP announcements and the spam, and also the short-lived nature of these BGP announcements. We also receive with the spam feed the name of the botnet responsible for sending the spam, based on known botnet signatures, and interestingly, none of the spam that we received from those networks matched any known botnet signature. Which actually confirms our assumption: spam bots are compromised machines that are already hosted on existing networks, so it doesn't really make sense to observe bots on a hijacked network.
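
The temporal correlation visible in those figures can be quantified quite simply; this sketch just counts how much of a block's spam falls inside its announcement window (timestamps and the window are assumed inputs):

    from datetime import datetime
    from typing import Iterable, Tuple

    def spam_inside_announcement(spam_times: Iterable[datetime],
                                 announced_from: datetime,
                                 announced_until: datetime) -> Tuple[int, int]:
        # Returns (spam mails inside the announcement window, total spam mails).
        times = list(spam_times)
        inside = sum(announced_from <= t <= announced_until for t in times)
        return inside, len(times)

    # For the first case study above, something like (2000, 2000) -- all spam sent
    # within the single day the block was routed -- would be expected.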

Lastly, we also observed that a lot of scam websites advertised in the received spam mails were actually hosted on the hijacked networks, so this kind of shows that the hijackers, the spammers, really took full advantage of the networks under their control.

In order now to assess the impact of spam coming from those short-lived hijacked networks, we extracted the DNSBL records for the hijacked address blocks from the UCEPROTECT DNSBL. Quite interestingly, we can see that out of the nine address blocks that we considered in this case, only two were blacklisted, and you can notice that the blacklist entries only expire after seven days; this is why we see some blacklist entries even after the address blocks stopped being routed.
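
For reference, a DNSBL lookup of this kind follows the usual convention of querying the reversed IPv4 address under the blacklist zone; the zone name below is given as an example and error handling is simplified:

    import socket

    def is_listed(ip: str, zone: str = "dnsbl-1.uceprotect.net") -> bool:
        # d.c.b.a.<zone> resolving to an A record (typically 127.0.0.x) means "listed".
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True
        except socket.gaierror:
            return False   # NXDOMAIN (or lookup failure) is treated as "not listed"

    # Example: check one address from each monitored block shortly after the spam burst.
    # print(is_listed("192.0.2.1"))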

What can we say about the stealthiness of these spammers? Basically, out of the 29 observed cases, only six of them were actually listed in UCEPROTECT, which is not a lot. 13 of them were listed in the Spamhaus DROP list. DROP is supposed to list hijacked address blocks, even though not much is known about their exact listing policy, but still, from that we see that there is room for improvement.

And the last thing is that no network was hijacked more than once; apparently, they don't need to hijack the same network twice.

So, from all this, we can conclude that fly-by spammers seem to manage to remain under the radar.

So, what can we say now about the networks that were targeted? Well, first we observed that all the networks that were hijacked were assigned, that is, no bogon prefixes, and each was assigned to a different organisation. So, no particular organisation was targeted; it was a different organisation every time.

Out of these 29 organisations, though, 12 were found to be dissolved or very likely out of business.

And 17 of them were found to be still in business, or at least no conclusive evidence of them being out of business could be found.

It's sometimes hard to determine if a company is still in business, but we identified some organisations which were clearly out of business and some which were clearly still in business. So, what we can see from this is that fly-by spammers seem to simply target dormant address blocks, regardless of whether their owners are still in business or not.

So far we have looked specifically for short-lived hijacks, because this was what was described in the reports and in the research papers. Also, SpamTracer was designed to study and detect those specific hijacks; monitoring basically runs for one week after spam is received. What can we say about long-lived hijacks? We know that they happen. I have just one example case, the Link Telecom hijack, which basically lasted five months. The problem with these hijacks is that they are less straightforward to detect: an important suspicious feature in the fly-by spammer cases is their short-lived nature, and observing a network being routed for a very short period of time during which spam is sent is already very suspicious. But when you have a network that is routed for, say, weeks, months or even years and that sends spam, well, that looks more like a usual junk network, so it's less straightforward to detect.

Also, it seems to defeat the assumed purpose of evading blacklisting, because when you have a network that sends spam for, say, weeks or months or years, it will eventually appear in many blacklists. So we are currently working on a framework to detect these cases, mostly by extending the monitoring period of the networks.

So now, how can we, in practice, defend against these fly-by spammers? Well, in the observed cases we saw that spammers did not tamper with the origin of the address blocks, making them look like they were used by the legitimate owners. Instead, they advertised them via rogue upstream ASes. So, there is the BGPSEC architecture, which I think is the most promising one, and the thing is that to protect against fly-by spammers we need both secured route origination and secured route propagation, which is good because this is what this architecture is supposed to provide. Secured route origination is more and more deployed, but the problem is that secured route propagation is still at quite an early stage.
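
A simplified sketch of the route-origin side of that, in the spirit of RPKI origin validation (valid / invalid / not-found); the ROA data here is hand-written for illustration rather than pulled from the RPKI repositories:

    import ipaddress

    ROAS = [  # (authorised prefix, maximum length, authorised origin AS) -- illustrative values
        (ipaddress.ip_network("192.0.2.0/24"), 24, 64496),
    ]

    def origin_validation(prefix: str, origin_as: int) -> str:
        net = ipaddress.ip_network(prefix)
        covering = [(roa_net, max_len, asn) for roa_net, max_len, asn in ROAS
                    if net.subnet_of(roa_net)]
        if not covering:
            return "not-found"
        if any(asn == origin_as and net.prefixlen <= max_len
               for roa_net, max_len, asn in covering):
            return "valid"
        return "invalid"

    # Note: a fly-by announcement that keeps the legitimate origin AS but uses a rogue
    # upstream would still come out "valid" here, which is exactly why secured route
    # propagation (path validation) is needed as well.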

So, the solution for now is, well, to encourage the following of routing best practices. It's a bit obvious and maybe not easy to do in practice, but it's still something that could help prevent and mitigate some hijacks. And the second solution is basically to use detection systems to mitigate the effect of these attacks, for example by feeding IP-based reputation systems with hijacked address blocks.

So, what we can conclude from all this is that the observed fly-by spammer cases show that this really happens in the real world. This is really a problem, even though it does not currently seem to be a very prevalent technique for sending spam compared, for example, to spam botnets.

However, it is important to detect those attacks, because hijacking address blocks hinders the traceability of the attackers, and this is important because it can lead to misattributing attacks when responding, possibly with legal actions. I know there is a lot of effort now, I think in the Anti-Abuse Working Group, to improve the reliability of IRR records so as to be able to identify and contact the network owners in case of abuse complaints. Fly-by spammers completely defeat all this because they steal the networks, so...

Now, about the perspectives:

Well, our objective is to provide some kind of interface for network operators to query identified hijacks. Our goal is really to share this information with network operators, to help prevent and mitigate those hijacks. And there is also an ongoing collaboration with partners to build a more comprehensive system to detect and investigate malicious hijacks in general.

That concludes my presentation. Thank you very much for your attention, and now it's time for questions and hopefully answers.

SHANE KERR: Thank you. I'm going to insert myself at the head of the line here and ask a question.

Just, do you think this is a problem that's likely to get worse or better in the future? Because I can see, with the technologies that you talked about, route authentication and things like that being more widely adopted, it could lead to this being a more difficult technique for the attacker. On the other hand, with the exhaustion of IPv4 space and it getting traded around a lot and records being mixed up, maybe it gives a richer area for attackers to use? What do you think?

PIERRE ANTOINE VERVIER: I think that, first, regarding the security mechanisms, it's going to take a really long time before they can really protect the routing infrastructure, so I think the attackers are still safe for probably a few years. There is also the problem of fewer and fewer IPv4 addresses being available. Maybe when people really run out of addresses they will start, you know, reclaiming all the address blocks, and maybe this would remove some address blocks that are now just freely available for the attackers to use, because, as was presented earlier, there is still quite a big address space that isn't used and hijackers can simply choose from that space. So maybe this will change in the future, but I think this phenomenon will become more and more important, so I expect to see more and more hijacks, at least in the near future.

SHANE KERR: Okay, yeah you're right.

AUDIENCE SPEAKER: I have so many questions for you, I'll limit it to two or three, maybe. One of them is: have you noticed or have you tested this for hijacks of in-use address space, so more specifics, or, if not, is that something that you could do, or...

PIERRE ANTOINE VERVIER: So you mean hijacks of already announced space by using more specifics? It's something we haven't actually observed. We are looking for that and we haven't observed such cases. It's also harder to investigate, but we haven't really observed validated cases.

AUDIENCE SPEAKER: Second one: is it possible to tell us how many of those 29 prefixes are related to this region, or was there a region that was attacked more than...

PIERRE ANTOINE VERVIER: In the observed cases, most of them were in the RIPE region.

AUDIENCE SPEAKER: Okay. And the third one: Did you actually contact the RIPE NCC to tell them, hey, this may be a hijack?

PIERRE ANTOINE VERVIER: Not yet. It's something that we could actually do. For now, I have just investigated those cases, and I also want to, you know, gather a lot of evidence before engaging in further steps with other entities, but yeah, it's something we could do.

AUDIENCE SPEAKER: Last one, I promise I'll leave. You said that you are already working with some ISPs. Do you plan to actually make this data publicly available so that...

PIERRE ANTOINE VERVIER: So, I cannot say yet how this data could be accessed. If I were in academia, I think there shouldn't be any problem, but as a company we need to pay attention to what kind of things we share. So, our goal is to share this information with network operators. I don't know exactly how yet, but this is the objective.

AUDIENCE SPEAKER: Thank you.

AUDIENCE SPEAKER: Mikael Abrahamsson. So this happened to us. This was a /24 in the RIPE region that had a route object with our AS number; all of a sudden, it was announced in Russia. The prefix was actually covered by a route object that had already been created, probably registered with them for the actual prefix, so I started contacting the ISP whose customer was announcing this, and their upstream. This took two weeks, and we actually removed the route object, and so on. I'm sure someone was making money off of this by looking the other way. The only way we can fix this is that there should be some kind of accountability, or we need to get a technical means in there to actually verify that somebody is allowed to originate something, RPKI or something like that. Even though you discovered it, this had been announced for quite some time; it wasn't detected, it was flying under the radar. We were getting complaints and this is how it was detected.

PIERRE ANTOINE VERVIER: I think this is one of the reasons why it's so important to detect those cases: while you look for those attacks in your own networks, there are many network operators who are not even aware that some of their prefixes are hijacked. And so it's important to detect those cases as soon as possible and to contact the network operators and ISPs to mitigate as soon as possible, because it takes time, and the sooner we detect them, the sooner...

AUDIENCE SPEAKER: If there is a researcher in here who would like to look at where things are announced, where one or two prefixes have completely different AS paths from all the other prefixes announced by an AS, and automatically look at this and send e-mail to the ISPs, I think that would help in the short term. So, if anyone wants to take that up and implement it, it would be greatly appreciated.
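
A rough sketch of the heuristic being suggested, assuming announcements are available as (prefix, AS path) pairs: for each origin AS, flag prefixes whose first-hop upstream is never seen on any of that origin's other prefixes.

    from collections import defaultdict
    from typing import Dict, Iterable, List, Tuple

    def find_outlier_prefixes(announcements: Iterable[Tuple[str, List[int]]]) -> Dict[int, List[str]]:
        # Collect the set of upstreams seen per (origin AS, prefix).
        upstreams = defaultdict(lambda: defaultdict(set))
        for prefix, path in announcements:
            if len(path) < 2:
                continue
            origin, upstream = path[-1], path[-2]
            upstreams[origin][prefix].add(upstream)

        suspicious = defaultdict(list)
        for origin, per_prefix in upstreams.items():
            if len(per_prefix) < 2:
                continue   # need other prefixes from the same origin to compare against
            for prefix, ups in per_prefix.items():
                others = set().union(*(u for p, u in per_prefix.items() if p != prefix))
                if ups.isdisjoint(others):
                    suspicious[origin].append(prefix)   # candidate for an automated e-mail/alert
        return dict(suspicious)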

PIERRE ANTOINE VERVIER: We can discuss this. I would be interested in looking at this case in more detail.

SHANE KERR: I'm going to have to close the lines here because we are running out of time.

AUDIENCE SPEAKER: Nick Hilliard from INEX. Pretty much the same question as Mikael was just asking there. Did you do any analysis into who the transit providers were for these hijacks and who their upstream providers were? Because if you inject a prefix, you have to inject it through somebody who will accept arbitrary prefix injections.

PIERRE ANTOINE VERVIER: We observed different areas and different groups of providers involved; from what I remember, they were mostly located in eastern European countries. We did find some networks which were apparently involved, but I cannot say if they were part of the attack, if they really had that intention, or if they just accepted peering with bad autonomous systems. But, yeah, we can easily identify those by just looking at the AS paths and...

AUDIENCE SPEAKER: I'd really like to see a name and shame system for this.

PIERRE ANTOINE VERVIER: I don't have any name to say, but... we can also discuss it later. I can also give more information about the cases to those who are interested.

SHANE KERR: I believe it was Wilfried... Rudiger.

RUDIGER VOLK: I am a little bit curious, did you systematically collect whether the hijacked prefixes had some kind of IRR documentation or even ROAs? Kind of, one of the reasons to ask is, is there potentially a thing where, well, okay, if you are not using a registered route, is that an invitation to get that hijacked?

PIERRE ANTOINE VERVIER: So, I didn't look at the ROAs, and I didn't really quite get the other part of your question.

RUDIGER VOLK: Okay. Were all the hijacked routes documented in the IRR with covering or exactly matching routes?

PIERRE ANTOINE VERVIER: With exactly matching records.

RUDIGER VOLK: All of them?

PIERRE ANTOINE VERVIER: All of them.

RUDIGER VOLK: Okay. So that's a warning if you are not... well, okay. And was this all taking away active address space, or no, it was dormant address space?

PIERRE ANTOINE VERVIER: It was dormant address space but assigned.

RUDIGER VOLK: That means if you have dormant address space with registered routes, better withdraw the registered routes.

SHANE KERR: Final question.

AUDIENCE SPEAKER: Usually when someone else's prefix gets hijacked, people get upset, and IPv4 is exhausted, so it's impossible to take someone's prefix without disturbing someone. So, what do you think happens in the IPv6 area in this regard?

SHANE KERR: Just to make sure that we have one more v6 talk at the end of the session here.

PIERRE ANTOINE VERVIER: Can you just repeat your question.

AUDIENCE SPEAKER: What do you think happens in IPv6, because the address space is sparse and there is a lot of allocated space...

PIERRE ANTOINE VERVIER: That is a good question. We would actually like to look at that because there is definitely a lot to look at. I mean, as you say, the address space is huge, so there is probably a lot of space available to hijack. The only problem for now is that we don't have the security-related data for IPv6 to check against the suspicious BGP announcements, but as soon as we have that kind of data, I guess we'll start looking at this. I cannot say a lot about this now because I haven't really looked at the spam for IPv6. Maybe it's already happening, maybe you already have that kind of hijack in IPv6, but I cannot say just yet.

SHANE KERR: Thank you very much.

(Applause)

SHANE KERR: All right. That's it for this session everyone. Thanks for being here. Please remember to rate the talks. It's very important to us in the Programme Committee to know what you think of the talks. And we'll see you back here in two hours.