Friday, June 14, 2013

LOPSA-East Class Reviews - IPv6 & DNSSEC

I just received the reviews and attendee feedback for the IPv6 and DNSSEC classes I taught at the recent LOPSA-East conference. So far my recent stint as a technical course instructor at various conferences has been going well. Students are generally very pleased with the courses, and the positive feedback often results in invitations to teach at other venues.

The DNSSEC class is new. At past conferences, I've taught a combined DNS and DNSSEC class, but I've received feedback that many folks would like a course focused solely on DNSSEC, so I created one. I also incorporated some live demos of setting up DNSSEC, which attendees found very useful.
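
In the same spirit as those demos, below is a minimal sketch (in Python with the dnspython library, not the BIND command-line walkthrough from the class) of checking a zone's DNSSEC signatures programmatically. The resolver address and zone name are placeholders; any DNSSEC-aware resolver and signed zone will do.

    # Sketch: fetch a zone's DNSKEY RRset (with the DO bit set) and verify
    # that it validates against its own RRSIG, i.e. the RRset is signed by
    # the zone's own keys. Assumes the dnspython package is installed.
    import dns.dnssec
    import dns.message
    import dns.name
    import dns.query
    import dns.rdatatype

    RESOLVER = "8.8.8.8"                      # placeholder DNSSEC-aware resolver
    zone = dns.name.from_text("example.com")  # placeholder signed zone

    query = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
    response = dns.query.udp(query, RESOLVER, timeout=5)

    # The answer section holds the DNSKEY RRset plus the RRSIGs covering it.
    dnskeys = next(r for r in response.answer if r.rdtype == dns.rdatatype.DNSKEY)
    rrsigs = next(r for r in response.answer if r.rdtype == dns.rdatatype.RRSIG)

    try:
        dns.dnssec.validate(dnskeys, rrsigs, {zone: dnskeys})
        print("DNSKEY RRset for", zone, "validated")
    except dns.dnssec.ValidationFailure as err:
        print("validation failed:", err)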

I'll most likely be teaching these classes again at the USENIX LISA conference in Washington, DC later this year.

The possible responses for each question in the feedback survey are "Unsatisfactory", "Missed Some Expectations", "Met Expectations", "Exceeded Expectations", and "Greatly Exceeded Expectations". Of course, the data below covers only the (small) subset of each class that offered feedback.

IPv6 Course Feedback


==> SA1: Using and Migrating to IPv6 / Huque

Rate this training session: [Description matched the contents of the class]
    * Greatly Exceeded Expectations
    * Exceeded Expectations
    * Greatly Exceeded Expectations
    * Met Expectations
    * Met Expectations
    * Greatly Exceeded Expectations
    * Exceeded Expectations
    * Greatly Exceeded Expectations

Rate this training session: [Class material was useful to my job]
    * Greatly Exceeded Expectations
    * Met Expectations
    * Greatly Exceeded Expectations
    * Met Expectations
    * Met Expectations
    * Exceeded Expectations
    * Exceeded Expectations
    * Greatly Exceeded Expectations

Rate this training session: [Instructor was knowledgeable]
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations
    * Exceeded Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations

Rate this training session: [Instructor was able to answer students' questions]
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations

Rate this training session: [Course material quality]
    * Greatly Exceeded Expectations
    * Exceeded Expectations
    * Greatly Exceeded Expectations
    * Exceeded Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations
    * Exceeded Expectations
    * Greatly Exceeded Expectations

What was the single BEST part of this class?
    * good material, awesome presenter
    * Excellent balance of technical detail with beginner introduction
    * instructor clear experience and ability to communicate topic
    * He has done the work, and could provide many answers from experience
    * Quality of the instructor - good speaker and very knowledgeable.
    *
    * The instructor was able to explain a very complex topic clearly & in terms that are directly applicable to my future use of the material.
    * Shumon's depth of knowledge; he adapted what we covered and how fast we were going, on the fly! AWESOME instructor.

Name one aspect of this class that NEEDS IMPROVEMENT?
    * needs to be longer
    *
    * wants more time to work in.
    * pacing, and which material to emphasize. I thought the first part of the material could have been covered a bit faster, with more time spent on the meatier issues
    * The class seemed to detail a lot of 'differences between ipv6 and ipv4' and protocol internals at the expense of 'how do I actually deal with migration issues'. A better mix would be nice, but I understand that unless you know the differences, it can be hard to concentrate on implementation.
    *
    * Access to a Lab for a demo might add to the session.
    * nothing, run this class again. MAYBE, talk him into making another class focused on migrating your organization from v4 to v6... so people can do 'intro to ipv6' if they need, then another session on migration strategies.

Should LOPSA offer this class in the future?
    * Yes
    * Yes
    * Yes
    * Yes
    * Yes
    * Yes
    * Yes
    * Yes

DNSSEC Course Feedback


==> SA4: DNSSEC (DNS Security Extensions) / Huque

Rate this training session: [Description matched the contents of the class]
    * Met Expectations
    * Exceeded Expectations
    * Greatly Exceeded Expectations
    * Exceeded Expectations
    * Greatly Exceeded Expectations

Rate this training session: [Class material was useful to my job]
    * Met Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations
    * Exceeded Expectations
    * Greatly Exceeded Expectations

Rate this training session: [Instructor was knowledgeable]
    * Met Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations

Rate this training session: [Instructor was able to answer students' questions]
    * Met Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations
    * Greatly Exceeded Expectations

Rate this training session: [Course material quality]
    * Met Expectations
    * Missed Some Expectations
    * Greatly Exceeded Expectations
    * Exceeded Expectations
    * Greatly Exceeded Expectations

What was the single BEST part of this class?
    * DNSSEC appears do-able
    * My impression of DNSSEC went from theoretically possible to practical in a very short time. Something that has eluded me for quite some time.
    *
    * Best part was seeing the live application of theory in an enterprise environment.
    * Shumon's depth of knowledge; he adapted what we covered and how fast we were going, on the fly! AWESOME instructor. He's so good, we had wonky wifi, and he just ran demos through Bind on his Mac. nice.

Name one aspect of this class that NEEDS IMPROVEMENT?
    *
    * Materials were not posted yet, but will be posted to the presenter's site
    *
    * nothing negative to say.
    * nothing, don't change anything, run it again!

Should LOPSA offer this class in the future?
    * Yes
    * Yes
    * Yes
    * Yes
    * Yes

Monday, June 10, 2013

100 Gigabit Ethernet at Penn

This summer, the University of Pennsylvania is upgrading its campus core routing equipment (in fact we're in the midst of this upgrade right now). This is basically an upgrade to the set of large routers that form the center of our network.

The current core topology consists of 5 core routers (and also 2 border routers) interconnected by two independent layer-2 switched 10 Gigabit Ethernet networks. Each of the core routers is located in one of five geographically distributed machine rooms across the campus. A rough diagram is shown below.

This diagram also shows the current external connections to/from the campus network: we have three links (each 10 Gigabit Ethernet) to Internet Service Providers (ISPs), and two connections to MAGPI (the regional Internet2 GigaPoP that we operate), via which we access a 10 Gigabit Ethernet connection to Internet2. The Internet2 connection is shared among Penn and other MAGPI customers, mostly Research & Education institutions in the geographic area.


The core interconnect is being upgraded to 100 Gigabit Ethernet (a ten-fold increase in link bandwidth). It would be cost-prohibitive to fully replicate the current design in 100 Gig (this equipment is still very expensive), so the interconnect design has been adjusted a bit. Instead of two layer-2 switch fabrics interconnecting the routers, we are deploying the core routers in a 100 Gig ring (see the diagram below). When the final design is fully implemented, each core router will have a 10 Gig connection into each of the border routers (this will require some additional upgrades to the border routers, expected later this year). The topology redesign has fewer links, but in the final count (summing the bandwidth of all the core-facing links), the new core will have about 5 times the aggregate bandwidth of the old one. The maximum (shortest-path) edge-to-edge diameter of the network increases by one routing hop.
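
As a back-of-the-envelope check on that "about 5 times" figure, here is the arithmetic, with the caveat that the exact link counts are my simplified reading of the two designs rather than the full bill of materials:

    # Aggregate core bandwidth, old design vs. new (simplified assumptions):
    # old: each of the 5 core routers has one 10 Gbps link into each of the
    # two layer-2 switch fabrics; new: the 5 routers sit on a 100 Gbps ring.
    old_gbps = 5 * 2 * 10   # 10 links x 10 Gbps = 100 Gbps
    new_gbps = 5 * 100      # 5 ring links x 100 Gbps = 500 Gbps
    print(new_gbps / old_gbps)  # -> 5.0, i.e. roughly 5x the old aggregate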

100 Gigabit Ethernet is today's state of the art in transmission speed. The next jump will likely be 400 Gigabit Ethernet, for which the IEEE has already launched a study group and has several preliminary designs under consideration.



Not depicted in this diagram is the rest of the network towards the end systems. Below the layer of core routers are smaller routers located at the 200 or so buildings scattered around campus. Each building router is connected to two of the core routers. The building routers feed wiring closets inside the building, which house the layer-2 switches that network wallplates connect to.

In the process of the upgrade, we are also changing router vendors. The current core routers, Cisco 7609 series routers with Sup720-3BXL supervisor engines, have served us well. They were originally deployed in the summer of 2005 and have been in operation well past their expected lifetime.

As is our practice, we issued an RFI/P (Request for Information/Purchase) detailing our technical requirements for the next generation of routers, solicited responses from the usual suspects, and brought equipment from a few short-listed vendors into the lab for testing before making the final selection.

The product we selected is the Brocade MLXe series router, specifically the MLXe-16. This router can support 16 half-height or 8 full-height line cards (or a mixture of the two), as well as redundant management and switch fabric modules.

A product description of the MLXe series is available at:
http://www.brocade.com/downloads/documents/data_sheets/product_data_sheets/MLX_Series_DS.pdf

The photo below shows one of the routers (prior to deployment) in the Vagelos node room (one of 5 machine rooms distributed around campus where we house critical networking equipment and servers). Going from left to right, this chassis has two management modules, one 2-port 100 Gigabit Ethernet card, six 8-port 10 Gigabit Ethernet cards, four switch fabric modules, two more 8-port 10 Gigabit Ethernet cards, and three 24-port Gigabit Ethernet cards.

One of these routers was deployed in production last week. The rest should be up and running by the end of this month or by early July.

(The full set of photos can be seen here on Google Plus)




Shown below is the 2-port 100 Gigabit Ethernet card, partially inserted into the chassis, showing the CFP optical transceiver modules attached.



Unlike preceding generations of Ethernet, 100 Gigabit Ethernet transmission uses multiple wavelengths in parallel (although there are parallel-fiber implementations as well). The current IEEE specification (802.3ba) defines four lanes of 25 Gbps. However, a number of key vendors in the industry, including Brocade, formed a Multi-Source Agreement (MSA) and designed and built a 10x10 (10 lanes of 10 Gbps) way of doing 100 Gig at much lower cost than 4x25 Gbps, operating over single-mode fiber at distances of 2, 4, or 10 km. This is called LR10 and uses the CFP (C Form-factor Pluggable) media type.

Pictured below (left) is a Brocade LR10 100 Gigabit Ethernet CFP optical module installed in the 100 Gig line card, with a single-mode fiber (LC) connection. On the right is an LR10 CFP module removed from the router.




Below: a close-up of the 8-port 10 Gigabit Ethernet module and several 24-port Gigabit Ethernet modules. To connect cables to them, we need to install small form-factor pluggable transceivers: SFP+ for the 10 Gig ports and SFP for the 1 Gig ports.



Pictured below is one of the five Cisco 7609 routers that will be replaced.




One of the Penn campus border routers, a Juniper M120, is shown below. It is also scheduled to be upgraded in the near future to accommodate 100 Gig and higher-density 10 Gig, although the replacement product has not yet been selected.




Below: a Ciena dense wavelength division multiplexer (DWDM). Penn uses leased metropolitan fiber to reach equipment and carriers at 401 North Broad Street, the major carrier hotel in downtown Philadelphia. We have DWDM equipment at both the campus and the carrier hotel to carry a mixture of 1 and 10 Gig circuits and connections across this fiber for various purposes. This equipment is also scheduled to be upgraded, to allow us to provision 100 Gigabit Ethernet wavelengths between the campus and the carrier hotel.




High Performance Networking for Researchers


Penn is a participant in the National Science Foundation (NSF) funded DYNES (Dynamic Network System) project, which provides high-bandwidth dedicated point-to-point circuits between (typically) research labs for specialized applications. Popular uses of this infrastructure today include high-energy physics researchers obtaining data from the LHC and other particle accelerator labs, and various NSF GENI network research projects.

Earlier this year, we completed a grant application for the NSF "Campus Cyberinfrastructure - Network Infrastructure and Engineering (CC-NIE)" program. I spent a large amount of time in March of this year with several Penn colleagues preparing the application. If Penn does win an award (we'll find out later this year), we will deploy additional dedicated network infrastructure for campus researchers, bypassing the campus core, with 100 Gbps connectivity out to the Internet2 R&E network. A rough diagram of how this will look is below.



Software Defined Networking


There's a huge amount of buzz about Software Defined Networking (SDN) in the networking industry today, and a number of universities are investigating SDN-enabled equipment for deployment in their networks. Of the big router vendors, Brocade appears to have one of the better SDN/OpenFlow stories thus far. The MLXe series already supports an early version of OpenFlow (the portion of SDN that allows the forwarding tables of switches and routers to be programmed by an external SDN controller).
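
To make "programmed by an external SDN controller" concrete, here is a minimal sketch of an OpenFlow 1.0 controller application, written with the open-source Ryu framework (my choice for illustration; it is not tied to Brocade's implementation). On the switch's feature handshake, it installs a single catch-all flow entry that floods traffic out all ports - hub behavior, but it shows a controller writing a switch's forwarding table.

    # Minimal OpenFlow 1.0 app for the Ryu controller framework (a sketch).
    # Run with: ryu-manager floodhub.py, against e.g. an Open vSwitch bridge.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_0

    class FloodHub(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            datapath = ev.msg.datapath
            ofproto = datapath.ofproto
            parser = datapath.ofproto_parser

            match = parser.OFPMatch()  # no fields set: wildcard everything
            actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
            mod = parser.OFPFlowMod(datapath=datapath, match=match,
                                    cookie=0, command=ofproto.OFPFC_ADD,
                                    priority=0, actions=actions)
            datapath.send_msg(mod)  # push the flow entry into the switch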

Penn is building an SDN testbed in our network engineering lab, primarily to investigate its capabilities. For us, SDN is still largely a solution in search of a problem. We run a very simple network by design, whose primary purpose is connectivity and high-performance packet delivery. The most probable use case in our future, virtualization of the network, is likely better achieved first with a proven technology like MPLS. But we'll keep an eye on SDN and its evolution. We do want to support research uses of SDN, though. Several faculty members in the Computer Science department are interested in SDN, and the NSF CC-NIE grant would allow us to build some SDN-enabled network infrastructure, separate from the core production network, to accommodate their work.

-- Shumon Huque

Sunday, June 2, 2013

Former Student to Caltech and CERN

In addition to my full-time job, I sometimes teach a course in Penn's Engineering School - specifically, a Telecom Lab course on network protocols (mostly routing). The material covered in the course includes interior routing protocols like RIP, OSPF, and IS-IS; BGP for exterior routing; Multicast Routing; and MPLS (traffic engineering, Layer-2 and Layer-3 VPNs). I also cover the DNS and DHCP protocols in a fair amount of detail. All the lab assignments use both IPv4 and IPv6 extensively.

Indira is one of my former students, and she served as my teaching assistant in the semester after she took the class. She just graduated (with a Master's degree in Telecommunications) and is moving to Geneva, Switzerland, for a full-time job. She stopped in to see me last week, on the day she was leaving, and we snapped a few photos together.



The job is a full-time network engineer position with Caltech, but based at CERN in Switzerland, working with some Caltech colleagues I know (Artur Barczyk, Azher Mughal, Harvey Newman) from Internet2 and various other R&E conferences. Among other things, Caltech is involved in operating USLHCNet, a very high speed network that provides transatlantic connectivity between computing facilities at CERN's Large Hadron Collider (LHC) and computing facilities at major US particle accelerator labs like Fermilab and Brookhaven.

Harvey was also one of the principal investigators for the NSF-funded DYNES (Dynamic Network System) project, in which Penn is a participant. DYNES, via the Internet2 network infrastructure, provides dynamically allocated dedicated point-to-point circuits between remote research labs (a common endpoint is the LHC). Penn researchers were involved in building the LHC's ATLAS detector, and played a role in the Higgs particle discovery last year.

Congratulations and good luck to Indira - I'm sure she has a bright and productive career ahead of her. She left me a small gift - a box of chocolates from Kazakhstan, pictured below. The box is almost too attractive to open, so I haven't yet.




Another talented former student and teaching assistant, Sangeetha, had been working for me part-time in Penn's Networking department on various projects. She also recently left (due to her advisor leaving Penn), transferring into the CS Ph.D. program at UIUC.

I'm planning to give up my teaching appointment, since I don't really have the time for it, given the demands of my full-time job. I originally decided to teach the course as a favor to my colleague, Roch Guerin, but it's actually been quite a fun and rewarding experience for me (and hopefully for my students). However, now that Roch is leaving Penn to take the CS department chair position at Washington University in St. Louis, it looks like Penn is winding down the TCOM program anyway - a good time for me to make an exit.

I'm planning to write a more detailed blog post about my teaching experience at Penn. More on that later.

--Shumon Huque.