These are unedited transcripts and may contain errors.

Routing Working Group
17 October 2013
2 p.m.

CHAIR: Hello, welcome. We are going to start; it's just past the hour. Welcome to the Routing Working Group. Hopefully that's the one you wanted. Otherwise you have to move to the other room, where the other Working Group, Anti-Abuse, is taking place. We know there have been some comments on the clash between this Working Group and the other, because people want to be in both at the same time. We try to do our best. If it really is a problem for any of you, come to us and we'll try to figure out a way. But there will always be clashes as long as there are two sessions in parallel.

My name is Joao Damas; my co-chair is Rob Evans, sitting here in the front row, doing the great job of putting everything together. A few preliminary agenda items. We published the minutes of the previous meeting some time ago on the mailing list, and we haven't received any comments. Are there any comments now, or shall we just take them as final? Okay. They are final.

I would like to thank the RIPE NCC for providing the Jabber and minute scribes, Michaela and Rob, here in the second row. Thank you very much; this is a really helpful job.

I'd also like to wholeheartedly thank the stenographers for the stupendous job they do, in particular one of them, Anna, whom you never see because she is sitting in the back correcting all this stuff as we go live, because it's her birthday today.

Happy birthday, Anna.

You see the proposed agenda right up there; does anyone want to add anything? Otherwise, there is a slot for AOB, and you can always come to me during the session.

If not... well, there is one more item, sorry. There has been some discussion about what to do or not to do with the R2 set, but it has not gone anywhere, for lack of people going beyond the first post and following up. If there is still an interest, a real interest, in having a discussion about this, then Rob and I are willing to listen to the people who have anything to say, and try to put together a structured discussion for next time. But for this one we thought there wasn't actually any follow-up worth doing.

The charter discussion, item B: I'm guilty of not following up on that one. So, what we have decided, Rob and I, is that we are going to be welcoming input over the next few months, with the goal of publishing a draft recharter of the Working Group around January, after we all come back from the holidays, and discussing it, hopefully, at the next meeting. So, look forward to that.

And now we get to the meat of the session. The first talk is by Anastasios from Forthnet, on reinventing the Access Network.

ANASTASIOS CHATZITHOMAOGLOU: Hello everyone. Good afternoon. This was initially a much larger presentation, so we have cut some slides, and we'll probably move forward a little bit faster in certain areas.

I have already submitted a few of them as a lightning talk for tomorrow, but I haven't had any answer until now, so I don't know what happened. Anyway...

So, reinventing the Access Network. This was a project we started a few months ago, and the main idea was that we wanted to change the whole access network; by changing that, we also had to change our aggregation network and all this stuff. So I'm going to present some decisions we took, the final design that we implemented, and the protocols that we used during the migration.

So, this is, at a high level, our network, and probably a lot of other providers' networks in the world. It refers mostly to providing service to the customer: we have our subscriber part and our access network. Then we have our aggregation, where we aggregate a lot of access regions. Then we get to the edge, distribution and core. By running this project we actually had to touch all of this, but we focus on the access because we had to redesign almost everything in that part.

So, some terminology; most of you will probably know this, so I'll just skip this slide.

So, services. Our main focus was on the residential services. At the same time we didn't want to affect the business services, because our plan is to migrate the business services later as well. In terms of residential services we only have layer 3 services, and these are the technologies we use for each one of them, so we had to take them into account while designing the network.

This was our very first network; most probably you have already done something similar in the past. That was in 2006: we have an SDH ring of limited capacity, every local exchange gets a specific part of the bandwidth, and we are happy, everything works fine.

We needed to increase the bandwidth. So we started splitting, and we went to 1 gig per local exchange. We had to split the ring, so now we had two rings, with fibre connections between the local exchanges and the aggregation on private fibre, so we didn't have an issue with that. We ran that solution for some months and arrived at 2008. After some months we had to take another step: we had to increase capacity when a single local exchange went above 1 gig of traffic. At the same time we had a lot more DSLAMs, so we inserted a switch layer in the local exchange in front of the DSLAMs, made a connection between them, and each switch used its own uplink; so, if nothing breaks, we have two simultaneous uplinks running in the network. Sometime after, we had to do the same for the other exchanges, for as long as a single SDH link had enough capacity to accommodate our needs.

We reached a point last year where we had split the SDH link in every possible way, so we had a single link for a single local exchange. That was the maximum we were given, so we had to think of another way to accommodate the increase in customers, while at the same time keeping convergence at the lowest possible level, as with SDH's 50 milliseconds. So we had to think about our next steps. Besides the capacity issue, these were the other problems we met while running the old network. We had legacy SDH. We had large L2 domains. We had limited VLANs, even with Q-in-Q, and I can explain that a little bit better: even with Q-in-Q you might have points in the network where you are switching only on the outer VLAN, which means that at this point in the network you have traffic from all the inner VLANs sharing a common path; so if you terminate Q-in-Q on a layer 3 router, you get broadcasts from all the inner VLANs on every terminated VLAN. And of course the capacity: we had to move beyond 2 gig of capacity, and we didn't want to invest in the old technologies because of a lot of limitations. Of course, we wanted to keep the redundancy; we had redundancy based on STP. And in other cases we had problems with MAC address space: we had a lot of MAC addresses, so we had to find a solution for that too.
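As a rough illustration of the VLAN-scaling point above (the figures here are the standard 802.1Q numbers, not Forthnet's): one 12-bit tag gives 4094 usable VLAN IDs, Q-in-Q stacks two tags for about 16.7 million combinations, but any device that switches on the outer tag alone still sees only one broadcast domain per outer VLAN:

```python
# Illustrative 802.1Q / Q-in-Q arithmetic (standard values, assumed here).
VLAN_ID_BITS = 12
RESERVED = 2                                  # IDs 0 and 4095 are reserved

usable_vlans = 2**VLAN_ID_BITS - RESERVED     # 4094 usable IDs per tag
qinq_services = usable_vlans * usable_vlans   # outer x inner combinations

print(usable_vlans)    # 4094
print(qinq_services)   # 16760836 (about 16.7 million S-VLAN/C-VLAN pairs)

# But a device that looks only at the outer (S-)VLAN merges every inner
# (C-)VLAN carried under it into one broadcast domain, which is why
# terminating Q-in-Q on a layer 3 router exposes it to broadcasts from
# all the inner VLANs.
broadcast_domains_on_outer_switch = usable_vlans   # not 16.7 million
```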

So, some requirements for the new design; these were our most important points, based on the old one. It would have to be transparent, and we wanted parity for the two protocols, IPv4 and IPv6, because we especially wanted to support business customers in the future. Jumbo frames absolutely had to be supported. Active redundancy: this time we wanted to improve, moving from active/standby to active/active. And we had two categories of redundancy: we said that if we could do 50 milliseconds for a direct link or node failure, it would be good for us, and if we could go below 1 second for anything remote. And an important point for our network: no need for large scalability. In comparison to other large networks in Europe we don't have so many devices, so we didn't have to take into account many things with regard to expanding and all that stuff. Of course, we wanted to standardise, template and formalise everything, so as to be ready for the future, and you'll see why.

These were the solutions based on the existing hardware. One solution was to increase the uplinks of the access switch; that was abandoned. The other was to upgrade the SDH to higher capacity at the same time, but we said we don't want to invest in SDH any more, so that solution was a problem from the beginning.

The next step was to decide whether we would go with layer 2, whether we would go with WDM, or whether we would completely remove SDH and all that technology from the network. This left us with some questions. The final decision was this: we went with a layer 3 access device, and we completely removed the SDH from our network.

So, layer 2. These are the solutions we checked while evaluating the options; we experimented with those technologies. The first one is probably known to you, but we haven't seen any good public exposure in Europe, so if anyone actually uses it in Europe, I would like feedback later on; please find me, because we are interested in using it in other parts of the network. The major problem was that it was too complex to set up: we needed extra VLANs on the rings, and the setup was quite complex, so that was abandoned. Then we took a look at a second technology; it had active routing inside the layer 2 domain, so it was something we could probably use. While researching it, we found that few products supported it; the only products were data-centre focused, the flexibility was very limited, and a lot of work is still running in the IETF. But it will take a while.

Then we have a lot of vendors proposing their own solutions; most probably each one has its own. We did try some of them. They were working, they were doing the job, but we didn't like being tied to a vendor solution and running a locked-in network. So we decided to follow the last option: move to an architecture which uses layer 3 with MPLS, which is already used in the other part of the network, the core, and at some point probably connect the two layer 3 parts. Now, layer 3: these are the technologies we evaluated once we said, okay, we move to layer 3, but we need to find the protocols that will do the job for us.

The first one is: okay, let's go for a single IGP and try some optimisation on the timers. It will do the job, it is the easiest solution, it works almost every time, and in some cases, by fine-tuning, you can get better results.

Then we had traffic engineering with FRR. We use this in very specific parts of our network. It works, but it's very, very difficult to provision and keep an eye on all this stuff, and it has limited applicability when you want to automate something: you have to use external tools, and it doesn't scale so easily for us. So we had to move to another technology, and that's when we evaluated IP FRR, which most of you will know, and which can be combined with remote LFA; but we didn't have a chance to try remote LFA, we only tried LFA. The nicest thing about it is that it works, and it works very easily, because you just have to enter two or three commands and you have it working. The problem is that it cannot cover 100 percent of failure cases on every single topology. There is a solution for that. You also have the chance of creating micro-loops, where two devices each think the next hop is the other one and traffic bounces between them for a time. At the same time, we evaluated the use of BFD; we said we would use it in order to improve the detection time. One limitation we met is that some devices use VLAN interfaces where you have to configure the layer 3, which means that if you have a fibre loss, which normally wouldn't need BFD, there is a delay between the fibre loss and informing the protocols. So we said, okay, let's try BFD in order to improve that. We also found that BFD was running in software, so you might have some limitations there; but since we were going to use BFD only on the point-to-point links, and each access device has only two uplinks, this is not an issue.
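The LFA check the speaker alludes to can be sketched in a few lines. The inequality below is the standard loop-free criterion from RFC 5286, Dist(N,D) < Dist(N,S) + Dist(S,D); the four-router topology and its costs are invented purely for illustration:

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src over an undirected weighted graph."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def lfa_candidates(graph, s, d):
    """Neighbours of S (other than the primary next hop) satisfying the
    RFC 5286 loop-free condition Dist(N,D) < Dist(N,S) + Dist(S,D)."""
    dist_s = dijkstra(graph, s)
    dist_from = {n: dijkstra(graph, n) for n in graph[s]}
    primary = min(graph[s], key=lambda n: graph[s][n] + dist_from[n][d])
    return [n for n in graph[s] if n != primary
            and dist_from[n][d] < dist_from[n][s] + dist_s[d]]

# Invented toy topology: S reaches D via A (cost 2) or via B (cost 3).
topo = {
    "S": {"A": 1, "B": 1},
    "A": {"S": 1, "D": 1},
    "B": {"S": 1, "D": 2},
    "D": {"A": 1, "B": 2},
}
print(lfa_candidates(topo, "S", "D"))   # ['B'] — B never routes D via S
```

If the B-D link cost were raised so that B's shortest path to D went back through S, the condition would fail and S would have no LFA for D, which is the "cannot cover 100 percent of topologies" point made above.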

Then we decided to go with MPLS and VPLS, with pseudowires over MPLS as a fallback. And as a last technology we had a look at one which allows you to move the pseudowires from the aggregation all the way to the edge, so you omit the layer 2 aggregation completely; but it was removed from our design because we would have had to create many more pseudowires than in the usual way.

So, this was the original network, at a very, very high level; most of it is SDH multiplexers with SDH rings and their capacities. And this is what we wanted to move to, where we have aggregation devices, redundant, and so on.

So, moving to the new design: we have only fibre, and we didn't need any more fibres; we had enough. We completely removed the SDH. We limited the layer 2 domains to specific paths. Instead of a limited number of VLANs, we now had millions of them. We increased the capacity. We changed to IP, and we increased the MAC address space.

So what's the layer 3 network, the new network? What we actually have currently is this: instead of an SDH ring, it's fibre. The access part is the same; there is no change over there. Each access switch has a 1 gig uplink, but this time it is carried over pseudowires, so we have them active on two aggregation routers. Then you have local attachment circuits for delivering the traffic into this network, and the network responds accordingly.

This is the internals, if you want to have a look at the low level. For example, in the aggregation network we have two devices, and these were connected together. So what we actually did is we created some attachment circuits directly to the BRAS/BNG, then connected them to the VFI; this is what comes from the access router. So we had a VPLS network; VPLS takes care of the learning of stations, and we have everything working. But at the same time, not all the access routers supported VPLS, so we had to think of another solution for a device that had limitations. So, for example, when we didn't have VPLS support, we went with a bridge domain: this is a local bridge where you connect the local attachment circuits, and you can have a single uplink. It still works, you can still have active VLANs, and nothing else changes.

And the last and worst case is when we did not have even the bridge domain solution, so ports could not talk locally; there we had the so-called Ethernet flow point (EFP) cross-connect, with one EFP and its equivalent on the other side.

Okay, the IGP: no IPv6 yet. OSPF, with area zero for the aggregation and areas with codes that define the POPs that connect to it. No external prefixes. We went 100 percent MPLS at layer 3; we found bugs. And LFA support only for loopbacks; we also found bugs there.

Moving on to the VPLS specifics: we used auto-discovery in order to automate everything. MTU matching. Control word enabled, because we had some issues with MAC addresses starting with 4 and 6, which don't work very well with load balancing. Active/standby for dual-attached EFPs. And of course, if there is a need to have connectivity between EFPs, you have to take care of split horizon on the aggregation.
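The control word issue mentioned here comes from transit LSRs that, having no payload type indicator after the MPLS label stack, guess from the first nibble: 4 looks like IPv4, 6 like IPv6. An Ethernet frame whose destination MAC starts with 4 or 6 is then mis-parsed and hashed on bogus fields. The 4-byte control word (RFC 4385) starts with a zero nibble, removing the ambiguity. A simplified sketch of that heuristic (the frame bytes are hypothetical):

```python
def transit_lsr_guess(payload: bytes) -> str:
    """Mimic a transit LSR guessing what follows the MPLS label stack.

    Routers use a first-nibble heuristic for ECMP hashing because a
    pseudowire label carries no payload-type field (simplified sketch).
    """
    first_nibble = payload[0] >> 4
    if first_nibble == 4:
        return "ipv4"          # would hash on (bogus) IPv4 header fields
    if first_nibble == 6:
        return "ipv6"          # would hash on (bogus) IPv6 header fields
    if first_nibble == 0:
        return "control-word"  # RFC 4385 PW control word: treat as non-IP
    return "unknown"

# An Ethernet frame whose destination MAC begins with 0x42 is misread as IPv4:
frame = bytes.fromhex("42a1b2c3d4e5")          # hypothetical destination MAC
print(transit_lsr_guess(frame))                 # ipv4  (wrong!)

# Prepending the zero-nibble control word disambiguates:
cw = bytes([0x00, 0x00, 0x00, 0x00])
print(transit_lsr_guess(cw + frame))            # control-word
```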

Access router management. We decided to use a different loopback on the access router and to apply static recursive routes through the aggregation routers back into the network. At the same time we are also announcing two different routes through the aggregation routers in order to be redundant, while one of the aggregation routers was leaking them into the management VRF.

Okay, load balancing. This is a major issue probably all of you have met: it's not very easy to load balance when you have pseudowires. One solution is flow labels on the pseudowire, where you push an extra label per flow, but that means you have to support it on both sides of the pseudowire.

Many platforms use what we call a label hack. What you actually do is insert a label locally on the router, in order to use that label to guide the traffic to a specific interface; before the traffic leaves the interface, the label is removed. That also works.
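Both techniques attack the same problem: every packet of a pseudowire carries an identical label stack, so any hash computed over the labels alone maps the whole pseudowire onto one link. Adding per-flow entropy at the bottom of the stack spreads the load. A toy simulation (the hash function, label values and flow count are all invented for illustration):

```python
from collections import Counter
import zlib

LINKS = 4                               # hypothetical ECMP bundle width

def ecmp_link(label_stack):
    """Pick an output link by hashing the label stack (toy CRC32 hash)."""
    key = ",".join(str(label) for label in label_stack).encode()
    return zlib.crc32(key) % LINKS

pw_label = 16001                        # hypothetical pseudowire label
flows = range(100)                      # 100 customer flows inside the PW

# Without a flow label, every packet hashes identically: one link gets all.
without = Counter(ecmp_link([pw_label]) for _ in flows)

# With a per-flow entropy label at the bottom of stack, flows spread out.
with_flow_label = Counter(ecmp_link([pw_label, 100000 + f]) for f in flows)

print(len(without))               # 1 (all 100 flows on a single link)
print(len(with_flow_label) > 1)   # True (flows now spread across links)
```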

So, this is an example. Say we have local exchanges, and a link aggregation group has legs on both aggregation routers, with pseudowires carrying the traffic. Then we said, okay, let's connect different aggregation POPs together and have the legs of the link land on two different POPs. Nothing changes, everything is IP, so we just change the destination. Very easy.

So we have something like that, where we have access links travelling over pseudowires across the aggregation.

The migration steps, I skip them. And this is the high level: we have two major POPs, where each one has its own edge network; currently we had that. We also have layer 3 connectivity between the aggregation POPs. So we thought, why not combine all those and make a bigger cluster of edge routers in the network, so access devices from wherever in the network can use whichever edge router they want. So we just connected them with fibre. Layer 3 again, no change in the network, much simpler. And then we can have pseudowires going to the other BRASes; all the BRASes have the VFI, and everything works automatically.

So, we ended up with a full implementation. This is almost the final slide. We said, okay, we have everything working, we have our redundancy in the mesh, we are seeing everything. But we were still missing something: we missed a way to automatically reroute traffic in case of problems, in a different way from what the network itself provides, and we also missed a way to do provisioning.

So we thought of that. This is experimental; it isn't actually in production. This is something I prepared some slides on. It's actually a collection of tools that takes some input, processes it, and automatically creates traffic reroutes in the network, or provisions services, without the engineer having to do anything besides typing a single description on an interface.
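The provisioning half of that tool-collection idea can be sketched as below. Everything here is hypothetical: the description syntax, the service names and the generated config lines are invented to illustrate the pattern of parsing operator intent out of an interface description and emitting configuration, not Forthnet's actual tooling:

```python
import re

# Hypothetical description convention: "svc=<type>;vlan=<id>;bw=<mbps>"
DESC_RE = re.compile(r"svc=(?P<svc>\w+);vlan=(?P<vlan>\d+);bw=(?P<bw>\d+)")

def provision_from_description(interface, description):
    """Turn an interface description into (hypothetical) config lines."""
    m = DESC_RE.match(description)
    if not m:
        return []                     # no recognised intent: do nothing
    svc, vlan, bw = m["svc"], int(m["vlan"]), int(m["bw"])
    return [
        f"interface {interface}.{vlan}",
        f" encapsulation dot1q {vlan}",
        f" policer {bw} mbit",
        f" service-group {svc}",
    ]

lines = provision_from_description("Gi0/0/1", "svc=resi;vlan=300;bw=50")
print(lines[0])   # interface Gi0/0/1.300
```

The appeal of the pattern is that the interface description is the only thing a field engineer touches; the tools poll descriptions and converge the rest of the configuration.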

This is what's called poor man's SDN; my idea. Okay, and the last slide: these are the things that we are looking at for the future. Most of them are still drafts. I will skip most of them, but just give a bit of a description of the blue ones. MAC VPN, because we like the idea of distributing MAC addresses through BGP. Segment routing, because it's very interesting to have the forwarding path information encoded into the packet itself. And the last one, maximally redundant trees, a type of fast reroute that is supposed to provide 100 percent coverage in every topology. It's too early for the last two of them, I believe, but we keep an eye on them.

Thank you very much.

AUDIENCE SPEAKER: Michael apples sell. The BRASes there, are they there because you needed to run PPPoE, or what is the reason?

ANASTASIOS CHATZITHOMAOGLOU: We didn't want to change our back end, which is based on usernames. We kept everything as it is, and we kept PPPoE running. We keep the BRASes running for PPPoE.

AUDIENCE SPEAKER: If you had changed this, you would have cut down dramatically on your need for layer 2 forwarding. It would have really simplified the network.

ANASTASIOS CHATZITHOMAOGLOU: We probably could have moved the...

AUDIENCE SPEAKER: Yes, because you could have routed it way further out. So, are you doing resale of access, wholesale via PPPoE, or is this just internal?

ANASTASIOS CHATZITHOMAOGLOU: It's something that is being carried on because of the past.


AUDIENCE SPEAKER: Alex RM. Did you consider placing your BRAS closer to the access layer?

ANASTASIOS CHATZITHOMAOGLOU: We also had a look at that, and we also had a proposal from the incumbent to do that, but the problem is that the closer you move it, the more devices you have to put in.

AUDIENCE SPEAKER: Yeah, but you have a smaller layer 2 domain. So that is great, and that is flexible.

ANASTASIOS CHATZITHOMAOGLOU: Yes, it's still large enough. I don't have the numbers to hand; the number of actual devices, the number of aggregation routers, the number of BRASes would give a better idea of how large the network is. It's still not so scalable, at least at this moment. If we moved them closer to the edge we would have to almost double them; not exactly, but...

AUDIENCE SPEAKER: And you mentioned that you have business customers. Does the traffic go through the BRAS, or...?

ANASTASIOS CHATZITHOMAOGLOU: No, it's the same, but we don't have a BRAS; we just have an access router terminating the traffic. There is no difference. The edge network is a network that consists of BRASes for residential and some access routers, multi-service routers, for business customers.

AUDIENCE SPEAKER: Peter Lothberg. So you showed a drawing of the network. Do you have a drawing that showed the organisation that runs the network? Is it like one router per manager or is it one router per person?

ANASTASIOS CHATZITHOMAOGLOU: No. A single organisation manages everything.

CHAIR: Okay. If there are no more questions, then thank you very much.

Next up, Jean-David, please.

JEAN-DAVID LEHMANN: Hello everybody. So, this part of the session will be more about routers and a bit less about networks. The idea is to introduce a new technology that Compass has been developing for a few years, so I'll share a few observations on some directions we see in terms of hardware development and on what Compass has been doing in this direction to bring innovation.

So everything started with... the whole Compass initiative and R&D project started with a group of people that worked a lot on what could be the evolution of routers in the future, taking lessons from the data centre market, where we have seen a lot of evolution and significant changes over the past ten years. It's not just about clustering and virtualisation; it's also a lot about the efficiency that has been brought to data centres. As a matter of fact, we have seen two different industries evolving. On the one hand, in data centres the paradigm has changed completely: in the past you had mainframes, and vendors were typically designing their software to the mainframe capabilities and vice versa, and the Intel architecture brought a lot of changes into that, ending the reliance on specific hardware. So today an Intel-based platform really does not care what kind of software is running on the server, wherever the server is. The same has not happened with routers: the coupling of the hardware to the software is still very much present, and features and capabilities, as we can see from the different presentations, have led to building more and more complex routers. Instead of going towards simplification and ease of management, we are going in the opposite direction. If you look at the networks today, they pretty much look the same as ten years ago in terms of a router's internal architecture, except that to grow capacity we are just adding more hardware to the problem; and when we reach the capacity of a specific piece of equipment, we throw even more hardware at it to interconnect this equipment, adding an external switching fabric or matrix to interconnect those routers.

So, the whole work that Compass has been doing was trying to change this momentum and find the right technology, and what could be the evolution in terms of, let's call it, the next generation or next evolution of core routers. It's a lot about the cohabitation between the hardware and the software on the one hand, and it's also about electronic and copper limitations on the other.

So, zooming in on this challenge, the challenge of electronics and of the components and the chip, we identified two main challenges.

The first one is pushing the I/O outside of the silicon chip; that's one issue. The electrical I/O is limited: you can transport a 100 gig signal only up to 2 to 3 centimetres, which means that on a specific chip, if you want to add capacity, you need to amplify the signal; that requires more electronics, more cooling and more space. As a consequence, it has an impact on the architecture, and it has an impact on the hardware you need in a specific piece of equipment to be able to scale in terms of capacity.

The other challenge, which I will spend less time on, is that the density of the I/O on the chip is also limited; it reaches a limitation of approximately 2 gigabit per square millimetre today, and we have been working to find a way to scale that number as well. That has an impact, obviously, because you constantly need to compromise between the processing and the I/O, and to make choices, to design routers.

So, that was for us the big part of the R&D and the investment that has been going on. The company has existed since 2007 and has been investing quite a significant amount of money in R&D to get to what we call the icPhotonics technology, which is the first, if not among the first, chip-to-chip optical interconnects inside networking equipment. So, let's drill down a bit to better explain the technology, and also the challenges that we have faced to develop this chip.

If you look at the picture on the left: in your hand you have a standard CMOS chip, except that in the middle you will find something very special, and for all of those who are interested in checking it, I have a chip on me, so anyone who wants to speak with me after the presentation is more than welcome to check it as well. We have a very, we'll say, strange and unusual window that you would not find on many CMOS chips. This window is divided in two sections. On one side you have the laser matrix, which aggregates 168 VCSEL lasers; on the other side, for the input, you have the photodetectors. At the current stage of the technology we have implemented in this chip, that gives us 1.34 terabit into and out of the chip. By doing that, we have also been working on the other challenge I was describing a bit earlier, the I/O chip density, scaling this density 32 times higher than what we have today on the market, to a density of 64 gigabit per square millimetre.
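The headline numbers in this passage are internally consistent, which a quick check confirms: 168 lasers at the 8 gigabit per channel mentioned later in the talk gives the quoted 1.34 terabit, and 32 times the roughly 2 gigabit per square millimetre market figure from earlier gives the 64 gigabit per square millimetre density:

```python
# Figures as quoted in the talk; per-channel rate from the later VCSEL slide.
lasers = 168
gbps_per_channel = 8

chip_io_gbps = lasers * gbps_per_channel
print(chip_io_gbps)             # 1344 Gbit/s, i.e. the quoted 1.34 terabit

baseline_density = 2            # Gbit/s per square mm on the market, as quoted
print(baseline_density * 32)    # 64 Gbit/s per square mm, as quoted
```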

The advantage of integrating that, and I will describe it in the next slide, is also the ability to interconnect chip-to-chip and card-to-card at a much greater distance, moving the limitation from a few centimetres to the 200 metres that we can guarantee today, because we are using multi-mode fibre; it has been tested in the lab at 300 metres. So that gives a different perspective on how to manage the optical signal between the cards, and also between the boxes.

As you can imagine, it was a long development process with many challenges. One of the real challenges was what we call the marriage between the laser and the CMOS chip, because we are dealing with two very different materials: on one hand the CMOS, on the other hand the laser material. I don't want to go into too many details, but just stabilising the technology to glue the components together has been extremely challenging, because the link was purely and simply breaking, for the simple reason that the thermal expansion coefficient of those materials differs by a factor of 1 to 6. So, a lot of R&D, a lot of research, a lot of tests, to get to this successful and stable direct coupling onto the CMOS chip.

We are also talking about interestingly low power consumption; as a matter of fact, the chip is consuming, I think, something like 2 watts. So, a lot of development. The advantage of this technology is the scale as well: it obviously provides a lot of bandwidth at the I/O level, and I will show on the next slide how much this technology can scale.

Last point: it's not just an R&D chip. It's in a router that is in production, and the component is used in production networks.

If you look at the scale and the form factor, this icPhotonics technology seems to us a very interesting way to scale networking equipment, and we understand that other manufacturers, like Intel and others, are also working on this kind of technology to scale systems: something that will apply to computing and to storage, to overcome scaling and capacity issues. We are using, today, VCSEL lasers that have 8 gigabit per channel capacity, but we understand that this technology, in the labs and in research, can scale up to 40 gigabit, and the advantage as well is that the size of the VCSEL matrix will scale too. So we'll be able to have more bandwidth in the lasers on the one hand, and a bigger matrix that will allow us to aggregate more lasers on the other.

So, the whole idea behind that was also to work on the architecture of the routers, and on the way to scale and match the growth in terms of capacity, while also keeping the router architecture simple.

It's about, we think, a major architectural change that has an impact on the design itself of the router. If you look at the, we'll say, traditional way of building routers, you would have line cards, you would have a midplane, which is a very, very complex element of the design of a router, and obviously a switch fabric.

The router that Compass has been designing using this icPhotonics technology has changed in terms of design and architecture. You still have the line cards. On each of these line cards you will find two of the icPhotonics chips, and there is only a fully passive optical mesh interconnecting the line cards, and interconnecting the chips. So each chip, two of them on each line card, four line cards per router, is fully meshed via an optical plane with the 7 other chips in the box. It's about the design, obviously, and this design, considering the gain in terms of hardware components, generates savings not only on the size of the router but also on the behaviour of the router. The full mesh also allows you to avoid congestion even if the traffic is asymmetrical, which is not the case in, we'll say, existing routers, even if you had a non-blocking fabric. So this whole development has led us to build a different router, which we believe will allow us to keep not only the boxes simple in terms of the way they are designed, but also the way networks will be designed in the future. We look at this element as, we'd say, a building block in a more flexible design and architecture, and a vision of network virtualisation. The whole idea, and I know there has been a lot of discussion in this domain, is how to scale the routers while keeping them as a very strong and powerful machine, with externalised features and software features; so, a bit like the evolution from the mainframe to the servers. Today we have a very strong and powerful Intel-based architecture, and the processing power of an Intel-based server is much better than that of the routers on the market today.
So why not have the choice of managing many of the features, software features, that make existing routers very heavy, outside of the box? We see it as a very disruptive development, and one of the chances of Compass is to be able to arrive at a time when this technology is available. There have been many initiatives to change the architecture of the existing routers, but they were mainly hardware initiatives; so we think we can combine a very strong and innovative hardware, with a technology that is the future of networking and telecommunications equipment, on the one hand, and on the other hand use the benefits of SDN, where we will look at networks from an application point of view first, before looking at them from a hardware point of view.
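The passive-mesh arithmetic behind this design is small: two chips per card times four cards is eight chips, each meshed with the seven others, so the optical plane carries 8·7/2 = 28 point-to-point connections and needs no active fabric in between:

```python
# Chip counts as stated in the talk: 2 icPhotonics chips per line card,
# 4 line cards per router.
chips_per_card = 2
cards = 4
chips = chips_per_card * cards          # 8 chips in the box

peers_per_chip = chips - 1              # each chip meshed with the others
mesh_links = chips * (chips - 1) // 2   # full mesh: n(n-1)/2 connections

print(peers_per_chip)   # 7, matching "meshed with 7 other chips"
print(mesh_links)       # 28 passive optical connections, no switch fabric
```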

So it's about the ability to scale that, using SDN and also having an architecture that has practically no limit in terms of the way the router, and the capacity of the router, will be able to scale.

So that's work in progress, and by the way, we just received a prize yesterday for the work we have been doing on this subject, but it is not something that the company is delivering today.

So, simplify the network into routing building blocks, scaling the architecture by using the optical mesh in the sense of what has been done inside the box: the interconnection and the passive optical mesh will also be used to scale and interconnect the routers. If I can make a parallel with switching, where today's switching technology makes it quite easy to interconnect switches using virtual chassis technology, we are working on a development that will allow you to interconnect these routers using an external, simple patch panel and extend the optical signal outside of the box to be able to interconnect routers. The work we have been doing so far is going in the direction of having up to 1 terabit line cards and being able to interconnect more than 20 routers, to be able to make a POP of more than 80 terabit. That's the magnitude, but it is also still a lot of R&D and development.

So, how is IC photonics playing here? I was trying to reach a conclusion and make it more concrete: to go from this IC photonic technology to the concrete benefits that we see in the future for routers, especially in the peering space.

Obviously, we're talking about a technology that is helping us to reduce the size of the box. When I say reducing the size of the box, I'm talking about equipment that will be able to process 1.3 terabit of traffic within 6U while consuming 2.5 kilowatts. So that is a pretty amazing development, and you can understand the advantage in terms of operational cost for routers that will need to scale.

So it's really about cost efficiency here. It's also about efficient port density and scale. This optical mesh is a congestion-free architecture that will also enable scaling with dense 100 gigabit interface cards. If you look at the roadmap of the VCSELs, there is practically no limit to the capacity we will be able to have on the chip. And when coming with technologies like CFP4, and we are working on that with some in the industry, we are also working on prototypes of line cards that will aggregate six times 100 gig in one line card. So, with no limitation on the processing and the forwarding, this will allow us to scale to a much denser, high density solution.

Availability as well: we are talking about a solution that intensively uses optical elements and optical components, which by definition have a much higher MTBF than electronics on PCB cards. So that's also an advantage.

And last but not least, security. By security, I'm talking about the behaviour of the router and the role the passive optical mesh plays here. Many mechanisms that are inherent to a router concern congestion management and quality of service; using the full mesh, for example, for centralised policing allows us to implement mechanisms such as real output queueing, which has a real advantage in how the box behaves.

So, there is a lot to do about control plane protection, without compromise, due to this different kind of architecture.

Thank you very much.


JOAO DAMAS: Thank you, David. Any questions for him? Thank you very much. So, I see Alex. Thank you. A service announcement from Alex on RPKI.

ALEX BAND: Hello everybody. My name is Alex Band. I work for the RIPE NCC and I am the product manager and I just want to give you a quick update on what we're doing with the resource certification service that we have.

Because we did something new: we now support minority space. And minority space is a bit of a vague term, so I just wanted to give you an explanation of what it is and how it affects you.

Because at the RIPE NCC, of course, we manage lots of address space, and what most people think about when it comes to address space that we manage is the /8 blocks that we get from IANA. But in the olden days, in the InterNIC days, there was lots of address space that was distributed and managed by the RIPE NCC that didn't really come out of a particular /8 block. And when the RIRs were established, a lot of small little pieces fell under our management, but the overarching /8 management falls under the responsibility of another RIR. So this space is what we call minority space.

And this is an example. So, for example, ARIN has 128/8 under their management but, as you can see, there are lots of /16s and /15s scattered around that actually fall under our management. Now, putting this on a resource certificate, and actually issuing resource certificates to you containing addresses out of this pool, is a little bit complicated, because if you follow the RPKI tree structure, IANA really only knows about the /8 blocks that were distributed and doesn't have an awareness of all this small cruft. So, we built a framework that allows us to cross sign all of these resources. It's actually quite a large and complicated implementation that we had to do on the back end, but it isn't really noticeable for the average user. So, what we have right now is that all of the minority space that falls under the responsibility of a certain RIR, we put on a separate self-signed certificate, and this certificate containing all the minority space that, for example, falls under the ARIN responsibility can be signed using the ARIN root certificate, and with that signature from the ARIN root certificate they attest that the data that we have about the address space that we manage is actually accurate.

So, how does this affect you as a user? Well, minority space is completely transparent to users, to LIRs, so you could have an allocation that comes out of minority space and you would just see it as provider aggregatable, but it could also be legacy space or ERX space or any other kind. You can't see, for example, from the RIPE Database object that this is actually part of minority space. The only effect is that previously you wouldn't see these resources on your certificate, because we didn't have the framework yet. But right now, as this implementation is finished and we made it a production service last Thursday, any LIR who holds address space in the minority blocks with the following statuses will now see them on their certificate. This is a completely automatic, transparent process. So, if you already had resource certification enabled, you will now see additional prefixes on your certificate if you had them in one of these minority blocks.

Now, this truly means that all ranges of address space are now eligible for certification. However, not all types of address space are eligible; these are the most notable exceptions. Provider Independent end user address space is not yet eligible for certification. There was a policy proposal for this, 2013-04, and consensus was reached on this policy proposal yesterday. This means that we can start talking about an implementation of how to do this, well, right now essentially. So, the RIPE NCC is currently drafting an e-mail, which we will send out to the community, to talk about how we will do the implementation for PI end user space.

With regard to legacy space there is another policy proposal, which is 2012-07. Version 4 was recently published with a new Impact Analysis and some additional information, and if this reaches consensus, then that would also put the RIPE NCC in the position where we can issue certificates over this address space. There, too, we would have to talk about how we implement it and what conditions legacy address space holders would need to comply with in order to receive a resource certificate.

That is really all that I have to say about this particular topic, but as I have a couple of minutes left, I'm just going to give you a quick update on what we're doing and some new features.

The first of these is the RPKI Validator: the RPKI validation tool that the RIPE NCC has written now has a RESTful API. This was very, very high on the wish list of many of the users because, in the design of using RPKI, a lot of it was really focused on using it directly on your Cisco or Juniper routers; you could, for example, create route maps on those based on the RPKI data set. But there are a lot of people who would like to use the RPKI data set for monitoring purposes, or in some way use it outside of their router configuration, for example for alerting. So you could hook this into Nagios, for example.

With this RESTful API you can query the RPKI validation tool that you have running locally and ask about the validity state of a certain BGP announcement, and the return is the validity state and also the reason why it's invalid, for example. So if it's invalid because the prefix is being announced from an unauthorised AS, it will tell you; if the prefix is invalid because it is more specific than is allowed by the route origination, it will also tell you. So you can put a script around this and use it within your local configuration.
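As a small illustration of the kind of script Alex describes, here is a minimal sketch of interpreting such a validity response. The JSON field names are an assumption modelled on the RIPE NCC RPKI Validator's validity endpoint (e.g. `GET /api/v1/validity/AS<asn>/<prefix>`) and may differ between versions; the sample response is invented.

```python
import json

def interpret_validity(response_text):
    """Extract the validity state and the reason from a validator API response.

    Field names (validated_route, validity, state, reason) are assumptions
    based on the RIPE NCC RPKI Validator API; adjust for your version.
    """
    data = json.loads(response_text)
    validity = data["validated_route"]["validity"]
    state = validity["state"]            # "valid", "invalid" or "unknown"
    reason = validity.get("reason")      # e.g. "as" or "length" when invalid
    return state, reason

# A canned response such a validator might return for an announcement
# originated from an unauthorised AS:
sample = '''{"validated_route": {
    "route": {"origin_asn": "AS65000", "prefix": "192.0.2.0/24"},
    "validity": {"state": "invalid", "reason": "as"}}}'''

print(interpret_validity(sample))  # ('invalid', 'as')
```

A monitoring hook (Nagios or similar) would fetch the live response over HTTP and alert whenever the state for one of your own announcements is not "valid".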

The other thing that I want to make you aware of is the RPKI dashboard that SURFnet has provided. This gives a global overview of the adoption rate of RPKI, the data quality of RPKI, and all kinds of other useful elements about this service. What we would like to do is give you good insight into how the RPKI data set is doing. From the outset, the highest possible goal for me has been to get the data set as accurate as possible, to get everything as reliable as it can possibly be, and if you are using the RPKI data set to base routing decisions on, the RPKI dashboard gives you a good insight into what the data quality actually is. So, have a look at it. Have a look for your own AS. Have a look at your own prefixes and check the validity state, and also, if you find things that others have done wrong, alert them to that fact.

Then some statistics. Actually, the service is doing quite well. About 1,700 LIRs currently have requested a resource certificate, and they have made cryptographically verifiable statements about their intended BGP configuration. If you look at, for example, IPv6 adoption, about 800 prefixes are currently covered by a ROA. For IPv4, about 5,000 prefixes are covered by a ROA, and you see there is a spike over there where the adoption rate really rises rapidly. That is the point where we implemented the new RPKI user interface. We tried to make it as easy as possible to get started with the system, to give you suggestions, and to show you what we think, according to our route collectors, you are doing with your BGP prefixes. And because we are helping you, this is really spurring on the data quality and adoption of the system. So there are currently 5,000 prefixes covered by a ROA for IPv4, and that covers about 400,000 /24s. So that's about six /8s' worth of signed address space, where you can be absolutely sure that the statement about the intended BGP configuration was truly made by the legitimate holder of the address space and not by somebody else.
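The "six /8s" figure follows directly from the "400,000 /24s" figure; a quick check of the arithmetic:

```python
# A /8 contains 2^(24-8) /24s, so ~400,000 covered /24s is about six /8s.
per_slash8 = 2 ** (24 - 8)
print(per_slash8)                        # 65536
print(round(400000 / per_slash8, 1))     # 6.1
```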

That's it. Does anyone have any questions?

AUDIENCE SPEAKER: Andy Newton, ARIN. You said these are self-signed certificates?

ALEX BAND: Yes they are.

AUDIENCE SPEAKER: Can you go into more detail? Are these new trust anchors, or are they signed...

ALEX BAND: They are signed by our own trust anchor. So they will be in a position to cross sign this; this can be signed by the ARIN certificate, if all RIRs are in a position to do cross signing. But we have to have some talks about this, especially in the IETF sphere, on what the requirements are in order to do this and to put ourselves in the best position to take a step forward.

So, right now, on the user side, the attestations that we make with regard to the minority space and all of the prefixes that we put on the ARIN minority certificate are essentially attestations made by us and not signed by ARIN, but the framework allows you to have them signed by ARIN eventually, once we have had these talks.

AUDIENCE SPEAKER: So the intent is to reuse the keys in these certs to have them signed by an ARIN certificate?

ALEX BAND: Exactly.

PETER THIMMESCH: Peter Thimmesch. 141/8: RIPE is the maintainer of it, and some of the blocks in there are ARIN and APNIC. Are you also doing the certificate signing for them too, as far as the framework goes?

ALEX BAND: Yeah. With the implementation that we have made in order to allow cross signing, all RIRs are doing this, allowing us to cross sign, so we can also attest to the minority space that, for example, APNIC or LACNIC has out of /8 blocks that the RIPE NCC manages.

PETER THIMMESCH: Are you going to publish the manner of verification between you and the cross signing?

ALEX BAND: Publish the method, you say?

PETER THIMMESCH: You are asserting, or ARIN is asserting, that they have done the validation on a block in 141/8. You are then doing a cross-check and you are the certificate authority. How do you verify that? What is your method for verifying back and forth?

ALEX BAND: The method for verifying that is essentially comparing registry data, and that is actually a process that we have gone through over the last years. Currently we are really in a position where we are a hundred percent confident that, for all of the minority space, if you compare the different registries, there are no conflicts and no overlaps. So, all of the RIRs have currently, unofficially, agreed on who manages which minority space. But that is really an attestation that the RIRs can only make between themselves. There is no overarching entity that could provide that kind of solid proof, so to speak.

RUDIGER VOLK: The overall thing is really nice progress. All the cranking details can be interesting. For the just discussed question: well, okay, how well is the division of the address space between the involved RIRs agreed, and is there actually a published list of that division?

ALEX BAND: Not that I know of, not off the top of my head. I know that the registry data is accurate, and the RIRs attested to that, but it's not like a public statement that you could download off the RIPE NCC website. That's actually a good idea. Maybe we should have a publication like that.

RUDIGER VOLK: Okay. If it were published...

GEOFF HUSTON: You'll find on the NRO website a delegated extended stats file which actually lists every single number, the responsible RIR and the status, and if you tie it with a similar file we publish, which is a republication of IANA's that tells you whom they gave it to, the two combined give you all of the minority reports. So yes, there is published data and you too can look at it.
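A small sketch of how one might look up the responsible RIR in such a delegated stats file. The pipe-separated layout (registry|cc|type|start|count|date|status) follows the published delegated-extended format; the sample rows here are invented for illustration.

```python
import ipaddress

# Two made-up rows in delegated-extended style: the 128.16.0.0/16 block
# sits inside ARIN's 128/8 but is managed by the RIPE NCC ("minority space").
sample_rows = [
    "arin|US|ipv4|128.0.0.0|65536|19840101|assigned",
    "ripencc|GB|ipv4|128.16.0.0|65536|19910201|allocated",
]

def responsible_rir(ip, rows):
    """Return the registry whose IPv4 range contains ip, or None."""
    addr = int(ipaddress.IPv4Address(ip))
    for row in rows:
        rir, _cc, typ, start, count, *_ = row.split("|")
        if typ != "ipv4":
            continue
        base = int(ipaddress.IPv4Address(start))
        if base <= addr < base + int(count):   # count = number of addresses
            return rir
    return None

print(responsible_rir("128.16.5.1", sample_rows))  # ripencc
```

Running this over the real NRO file, rather than these two rows, gives exactly the per-range responsibility list discussed above.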

RUDIGER VOLK: Then the question is: well, okay, how well aligned is your certificate with exactly that data?

ALEX BAND: That's a hundred percent accurate.

AUDIENCE SPEAKER: Just a quick information point on alignment. As Geoff also mentioned, they are listed in the delegated stats of each RIR, and at the RIPE Database we import all of these files every eight hours; if there is any conflict there is a notification. But for the past three months there have been zero conflicts, so the space is covered and there has been no conflict between RIRs.

AUDIENCE SPEAKER: Jared Mauch from NTT. I am wondering: you have got a tool allowing people to take the routing entry and generate a ROA request from it. Are you also looking at a tool to take that ROA request, check it against the IRR data at the same time, and suggest they clean up their bad objects?

ALEX BAND: Yeah, that is actually something on the list that we would like to implement within the RPKI Validator. Currently we call it the RPKI Validator, so it's really RPKI centric, but of course the IRR data set is also really useful. You can have an attestation in terms of a ROA, and if that is backed, for example, by a route object that comes out of the IRR, that would be an additional positive statement about the intended BGP configuration, and that is something we can include in the RPKI Validator. But whether that is something we will implement in the tool within the foreseeable future really depends on how much demand there is from the community. It's definitely something that is on the list, yeah.

AUDIENCE SPEAKER: Excellent. I think that is something that would be valuable to clean up the IRR data, because we have a number of people where, if you expand what's underneath their AS set or their object, you start to get into tens or hundreds of thousands of routes.

ALEX BAND: And in terms of ROA management, what we would like to do, because currently the IRR data set and the RPKI data set are two things that you have to manage completely separately: if you change your BGP configuration, you have to go into the RPKI management tool and change your ROAs, and then you have to go into the IRR and change your route object. What I would like to provide is a single user interface to cover both and, for example, also have a route object matching the ROA automatically. It's all stuff that we can do, but currently, as far as the implementation goes, we are really focused on making sure that all address space is eligible for certification. All of the work that we're putting in is really focused on that: finishing the infrastructure and making sure that you have a complete data set and any resource can be signed and covered by a ROA. Once we finish that, we can do some gold plating with fancy validator features.

AUDIENCE SPEAKER: Richard Barnes. Alex, I wanted to ask one more clarifying question about how the certificates for minority allocations are arranged. You said these were self-signed certificates. Supposing I wanted to validate these certificates as part of my verification system, am I going to install each of these as a trust anchor, or...

ALEX BAND: I am going to defer this question to Tim, lead programmer on RPKI.

TIM: At this moment, no, they are not self-signed. We have one self-signed trust anchor which includes the majority /8s that the RIPE NCC manages, plus all the other space from other RIRs, and we use that to sign the majority certificate for ourselves and the other ones for the other RIRs, so there is just one trust anchor involved.

RICHARD BARNES: So you are extending the top level RIPE NCC certificate to include the minority allocations?

TIM: Yes.

AUDIENCE SPEAKER: Andy Newton again. I just remembered I forgot to ask one very important detail. When you do have the other RIRs sign the certs, reusing the keys, are you going to revoke the current ones that you just signed?

ALEX BAND: I don't know yet. Like I said, that is part of the discussion that needs to happen in the IETF, and once we know the outcome of that we'll just build an implementation.

GEOFF HUSTON: Wouldn't you have to?

ALEX BAND: Yeah, exactly, you would have to, because of fate sharing.

GEOFF HUSTON: Insofar as, if I'm a holder of ERX space in RIPE where the /8 is ARIN's, I would logically expect the validation path for that space to come through ARIN to RIPE to me, which is different from space that I got from RIPE, so it's two certificates. Currently, in your scheme, I only have one, don't I? Because you have a single trust anchor with everything, so you just give me one certificate, yes?

ALEX BAND: True. And you wouldn't want to end up in a situation where you would have two signatures over a single certificate, even if that were technically possible.

GEOFF HUSTON: Well, it is, but that's a different thing. Fair enough.

JOAO DAMAS: Thanks. Next up we have Dennis Walker on some suggested changes and modifications to objects that relate to routing.

DENNIS WALKER: Hi, I am Denis Walker, the business analyst for the Database Group at the RIPE NCC.

Basically I want to talk to you about some upcoming changes to the aut-num object. We have had a suggestion, a draft RFC from Job Snijders, about adding two new optional attributes to the aut-num object: the import-via and the export-via. And just to quote from the draft's abstract: "These use the RPSL policy specification to publish desired routing policy regarding non-adjacent networks."

Now, it's still a draft RFC, but we have seen some support on the mailing list; people would actually like us to implement these, even though it is still only a draft.

The new RIPE Database software is fully open source. Job has actually modified the database software himself and has provided a patch to the RIPE NCC. We have looked at this, we are happy with the code he has written, and we are ready to implement this change in a test environment. Now, today I don't really want to talk too much about the actual detail of the change. What I'm more concerned about is the fact that we are changing the aut-num object syntax. The aut-num object is quite important to routing, and this feature adds two new optional attributes to this object. Now, according to RFC 2622, which is the standard for RPSL, "tools should transparently handle unknown attributes." But do your tools transparently ignore unknown attributes? Is this going to break your tools? This is the point we really want to get across, because we don't want to make a sudden change to this object and find that a lot of you are saying all your updates fail now.

So we just want to get through to people that, although we are adding optional attributes, we are still changing the syntax, and we want to make sure that you are aware of this.

We don't often change the syntax. But when we do, the question I really want to put to you is: how should we make sure that we have contacted all the people that actually have aut-num objects? We have over 9,000 members; we don't have 9,000 people subscribed to the Routing Working Group mailing list. So, we just want to make sure: have we got this right? I have another couple of slides, but they are on a slightly different topic, so I'll just pause there and see if there are any questions relating to changing the aut-num object syntax.
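For tool authors wondering whether their parser copes, a minimal sketch of the tolerant behaviour RFC 2622 asks for might look like this. The attribute list and the import-via line are illustrative only, not the draft's exact grammar.

```python
# A tolerant RPSL reader: unknown attributes (such as a new import-via)
# are noted but do not cause the object to be rejected.
KNOWN = {"aut-num", "as-name", "import", "export",
         "admin-c", "tech-c", "mnt-by", "source"}

def parse_rpsl(text):
    """Return (attributes, unknown_names) for one RPSL object."""
    attrs, unknown = [], []
    for line in text.strip().splitlines():
        name, _, value = line.partition(":")
        name, value = name.strip(), value.strip()
        attrs.append((name, value))
        if name not in KNOWN:
            unknown.append(name)   # record it, but keep parsing
    return attrs, unknown

obj = """
aut-num:    AS64500
as-name:    EXAMPLE-AS
import:     from AS64501 accept ANY
import-via: AS64496 from AS64511 accept AS64511
mnt-by:     EXAMPLE-MNT
source:     RIPE
"""
attrs, unknown = parse_rpsl(obj)
print(unknown)  # ['import-via']
```

A tool that instead raises an error on the unknown name is exactly the kind that would break when the new optional attributes appear.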

RUDIGER VOLK: Well, the first time I met Job's proposal, I shot it down immediately, and I am very happy that Job figured out a way to come up with something that seems to be immune to the way I shot down the first thing.

I have been discussing this with Job and suggested a couple of places to raise the question to get input and, well, okay, I am not quite sure how much will come this way and how quickly that's going.

However, you are saying, well, okay, this is only a draft, just an Internet draft. Does it make sense to actually try to involve and actively ask the developers of RPSL software, and put the question to the standards body that devised RPSL, and, well, okay, at least raise the question there...

DENNIS WALKER: I don't know if Job is in here.

RUDIGER VOLK: No, unfortunately he isn't. He left earlier and asked me to look after it closely.

DENNIS WALKER: Maybe then another question I can put to you is: is there general support for having these new attributes at all?

JOAO DAMAS: Not overwhelming.

AUDIENCE SPEAKER: This is Andreas Polyrakis from GRNET. I have a more general comment here. I think that RPSL is very inefficient at describing complex routing policies. My perspective is that we should try to totally replace it with something that is more efficient. From my point of view we shouldn't put effort into improving RPSL any longer, and the tools around RPSL need to be revised as well. So, I would prefer to put some effort into trying to fix this entirely.

And something else, about the aut-num object: I hear that you are obliged to have a routing policy there and cannot have an aut-num object without a single import and a single export, so most people just put garbage there, and maybe it is better to remove this restriction. It is better not to have anything there than to have that. So maybe we should just let the people who really are interested in describing their exact policy do so, and also leave alone the people who do not want to put anything there; let's not oblige them to put garbage. Sorry, this comment is a bit out of context, but it is relevant.

DENNIS WALKER: Just to correct one point you made there: you can, if you choose, have an aut-num object without import and export; they are optional attributes. If you have an AS number in the database, that acknowledges the fact that this AS number has been registered to you, and with no policy whatsoever that is syntactically correct.

AUDIENCE SPEAKER: I thought you had to put something there. Was it like that in the past?

DENNIS WALKER: For the past 12 years.


JOAO DAMAS: Still, we'll note the general concept of trying to replace RPSL as one possibility. Kurtis.

AUDIENCE SPEAKER: Kurtis Lindqvist, Netnod. Just to answer the question: if that is what the standard says, tools should accept this transparently. You can't go around contacting everyone using this, because you don't know who they are. If that's the standard and that's the decision, announce it as you have done here. I don't see why it should be anything more.

DENNIS WALKER: That's basically what we wanted to confirm, because it's not often we do this or make any change to these objects, so we just want to go the extra mile and make sure we have done everything we need to do, that you are happy with the way we actually change these things, if you think we should change them at all.

JOAO DAMAS: For whatever it's worth, I share Kurtis' view. The standard itself is quite clear about this.

AUDIENCE SPEAKER: Benno. Following the comments of Andreas, somebody asked me, did I pay you to say this? I have also talked with Job and with someone else, and we are indeed talking with some other people about what we can use from RPSL and what kind of tools we can build with it to improve the policy and the usability, so please drop by and have a chat.

AUDIENCE SPEAKER: Alexander. Do you really believe that adding these attributes would have any effect on the quality of the data? We already have a number of old attributes and the quality is not very good.

DENNIS WALKER: I think the question of whether the attributes have a value is not quite the same as the question of whether any of the values are accurate. I think those two are separate.

RUDIGER VOLK: Actually, Alexander, I am quite sure that the objects that people like you and your clientele would put in there with the extended syntax would be fairly nice and much better than the average stuff, because they ask for this because they intend to use it, and using it with bad quality information quite obviously does not work that well. But the question of whether this is a valid extension and whether it brings any kind of problem with it is a different thing. And, well, okay, to clear up my position on that: I do not see a problem at the moment, but I would like to see more comments from people who actually look deep into this to confirm. My understanding is that Job and some other people are asking because they have indeed valid reasons and intentions about this.

JOAO DAMAS: The only thing you can trust in an aut-num object is the number, when the object is first created by the RIPE NCC. Everything else is up to everyone to use however they want. We do need to move on because we are running out of time and there are a couple more things that Denis needs to cover.

DENNIS WALKER: Just a couple more general points about routing and aut-num objects.

Over the years we have heard lots of comments from people about how complicated it is to get through the authorisation to create a route object. It is a question of the fact that you need authorisation from both the address space holder and the AS holder.

So we have taken this on board and we have provisionally developed a process, which we want to put on the test environment, where one person can submit the route object with the authorisation they have; we will queue it for up to a week, waiting for the other person to submit the same object, adding their part of the authorisation. When we see the two halves come together with the full authorisation, we'll then create the object. This avoids having to e-mail things all over the place; it also avoids the need to have passwords added or PGP-signed e-mails passed around. So, basically, I have got one authorisation, I send it in; you have got the other, you send it in; we match the two up. We think this will be useful, but we'll put it on a test environment after the RIPE Meeting. Have a look at it, have a play around with it; there is a Labs article explaining it in detail. If you think it's useful, we'll deploy it.
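The queueing process Denis describes could be sketched roughly like this. The one-week window comes from the talk; the data structures and authorisation labels are invented for illustration.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=7)   # queued halves expire after a week
pending = {}                 # (prefix, origin) -> (auths_seen, first_seen)
created = []                 # route objects with full authorisation

def submit(prefix, origin, auth, now):
    """Record one party's half of the authorisation for a route object."""
    key = (prefix, origin)
    auths, first = pending.get(key, (set(), now))
    if now - first > WINDOW:          # the waiting half went stale: restart
        auths, first = set(), now
    auths.add(auth)
    if {"address-holder", "as-holder"} <= auths:   # both halves present
        created.append(key)
        pending.pop(key, None)
    else:
        pending[key] = (auths, first)

t0 = datetime(2013, 10, 17)
submit("192.0.2.0/24", "AS64500", "address-holder", t0)
submit("192.0.2.0/24", "AS64500", "as-holder", t0 + timedelta(days=2))
print(created)  # [('192.0.2.0/24', 'AS64500')]
```

The real service would of course verify each submission against maintainer credentials before counting it as a half; this only shows the matching logic.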

The last slide is a very speculative question. I'd add a disclaimer first that we look at these aut-num objects from the database side; we don't always look at them from the user side. We have noticed that these are very big objects. They are made up of lots and lots of sets of details about peering agreements. These things are updated very often, but usually it's only a very small change you make to this massive object. Many times they are autogenerated by scripts. The question is: can we manage this information in a better way? With the new software, the management of the data is very easily decoupled from the presentation of the information. So, internally, we can store this in many different ways, but we can present it to you as RPSL.

So, I'm not going to push any particular ideas but if, for example, we actually had separate peering agreements as separate objects, which collectively made up your aut-num object, you could manage them in little packets of data, and when you wanted to get the RPSL information out we would just present it to you as an aggregate, and it would look exactly the same as it does now. So, it's just an idea.

If anybody has any comments about whether it would be easier to manage things this way, or maybe you have it all scripted and you don't care: you wrote the script ten years ago and everything works fine, until we change the syntax. So it's just a wild idea, if anyone has any comment.

JOAO DAMAS: The pending route authorisation, I think, is a brilliant and simple idea, and I just wonder why none of us thought about it before.

DENNIS WALKER: Every idea has its time.

JOAO DAMAS: And the second one, I really don't understand what you are getting at; it would be helpful if you could explain with an example. Maybe I'm just slow.

DENNIS WALKER: For example, take one peering agreement, with, say, import, export and default between AS10 and AS15. If that was in a separate object called AS10-AS15 and you put your policy in there, and the other guy created an object AS15-AS10 and put his side of the policy there, and you did that for all of your peering agreements, then you can manage it that way. If you have a new peering agreement, you create one little object; if you change a peering agreement, you change one little object. But when you actually want the information back, we can present it to you as the full aut-num object as it is now, which has your full documented policy. Maybe that would be an easier way to manage the actual data, as opposed to the way we present the information to you.
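Denis's example could be sketched as follows. The AS10-AS15 naming comes from his example; the stored fragments and the rendering are illustrative RPSL only, not a proposed format.

```python
# Each peering agreement lives in its own small object; the familiar
# aggregate aut-num view is rendered on demand from the fragments.
peerings = {
    "AS10-AS15": ["import: from AS15 accept AS15",
                  "export: to AS15 announce AS10"],
    "AS10-AS20": ["import: from AS20 accept ANY",
                  "export: to AS20 announce AS10"],
}

def render_autnum(asn, header):
    """Assemble the full aut-num presentation from per-peering objects."""
    lines = list(header)
    for name in sorted(peerings):
        if name.startswith(asn + "-"):     # all agreements for this AS
            lines.extend(peerings[name])
    return "\n".join(lines)

print(render_autnum("AS10", ["aut-num: AS10", "as-name: EXAMPLE"]))
```

Updating one agreement then touches only its own small object, while the rendered output still looks like today's single aut-num.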

JOAO DAMAS: So like a little subheading that only pertains to that relationship.

AUDIENCE SPEAKER: Carl, RIPE NCC. Just an addition to what Denis said. The thing is, in the RIPE Database we have 21 object types; 7 of them are from the registry side of the database, which is maintained properly and all of that, and 15 of them are from the routing side, or the IRR side. The aut-num is the one which is actually in between, and it's the only one. That also causes problems from the user perspective: for example, if an LEA or a user is interested to see who is the holder of a resource and it comes back as an aut-num, they also get a lot of routing data. We have, I think, one object which is 5 megabytes of text, which is really, really hard to understand, when they just want to know who the maintainer or the contact is. So that's one of the other reasons behind this separation: routing people know how to use an IRR and they have tools and all of that, but on the registry side this adds a lot of information which many of the users are not interested in.

RUDIGER VOLK: Are you suggesting to go the ARIN way?

AUDIENCE SPEAKER: No. I think there is a lot of benefit in having these two together, actually, because of the authorisation; the RIPE Database as an IRR has a lot of value because of that link with the authorisation. But as Denis suggested, because internally we can store the data separately, maybe we can have, just as an idea, an object called AS number or whatever that only shows the registration data, and we have the peering set, and we can keep the aut-num as a legacy object which the software internally puts together automatically from the current objects, but...

RUDIGER VOLK: Kind of, you have been so inventive creating additional flags that could filter out the things that you are not interested in... anyway, great, you got exactly my idea of what I was asking about with ARIN.

Denis, have you checked that there is no expression for defining separate peerings in RPSL at the moment? I suspect, though I haven't checked myself, that there is actually a very little-used object type for that.

DENIS WALKER: I must admit I haven't checked that; it is possible.

RUDIGER VOLK: Kind of, that aside, I do not think that it is a good idea to have a system where, well, okay, there is the defined output object type, or class, and you start to create some structurally different input side that you ask your users to use to manage indirectly what is defined as the actual semantic object. I think that's kind of asking for trouble. While I see that the intentions are all good, for convenience and making things easy, I think it's not a good way to go.

DENIS WALKER: Let me just leave you with one last thought, not just about the aut-num object but maybe thinking ahead about the RIPE Database in general. Maybe we should be thinking more about how we manage data and how we present information, and data and information don't necessarily have to be the same thing. But this might be a much bigger question, not just the aut-num object.

JOAO DAMAS: Okay. So a quick check says that RPSL indeed defines a peering-set object. So, I mean, the idea might be good, but we'll have to see what it would actually look like.
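(For reference: the object type being alluded to is the peering-set class defined in RPSL, RFC 2622. Its name must start with "prng-", and it groups peerings that import/export rules can then refer to. The names and addresses below are hypothetical, using documentation AS numbers and addresses.)

```
peering-set: prng-example-peers
descr:       Hypothetical peering-set, for illustration
peering:     AS64496 at 192.0.2.1
peering:     AS64497
mnt-by:      EXAMPLE-MNT
source:      RIPE
```

An aut-num could then reference it with a line such as "import: from prng-example-peers accept ANY", which is the closest existing RPSL mechanism to the per-peering decomposition discussed above.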

DENIS WALKER: It's just an idea for you to think about. So I'll leave it at that.

JOAO DAMAS: Thank you very much, Denis.

So, last agenda item is any other business? Is there any other business?

No. Then I thank everyone for being here and participating, or just listening, and we will see you all at the next RIPE Meeting, in May 2014 in Warsaw.

(Coffee break)