Bob Johnson from Duke is introducing the Global Networking panel – we’re all facing the same networking issues: bandwidth availability, political restrictions, latency, jitter. Asia Pacific activities underway – Duke has a medical school in Singapore and is building a million-square-foot campus in Shanghai; NYU has a presence in Shanghai; Chicago has a presence in Singapore and Beijing. NYU is opening a presence in Sydney.
Working on building an International Network Exchange Point and co-location site – provides a common point of connection for regional R&E networks. Benefits are cost savings, building on R&E networks instead of leased lines to the US. Co-lo space is just cost sharing. Having a neutral location for data storage, hosting computing services (lower latency).
Where to put this? Reviewed three sites. The best ended up being TATA in Singapore.
Dale from Internet2 goes over the process and the support services required. There will be a telepresence and HD video exchange under the Internet2 Commons. Network performance monitoring will be available. Multiple functions for this facility: co-location (initially capable of 10 racks, which can grow); Layer 3 capability; an instance of an Advanced Layer 2 Services exchange – support for OpenFlow, SDN, dynamic Layer 2 circuits; the exchange will operate as a GLIF Open Lightpath Exchange; essentially policy free – if you can get a circuit in and pay the fees, you’re welcome.
Some sites might bring in their own address space and router, others might use shared space. 1 Gb physical link to the commodity Internet. Initially provision 200 Mb/s on the 1 Gb circuit (with some burst capability). 1 Gb link to the global switching building for peering – TEIN3, GLORIAD (which ends up in Seattle). 1 Gb link to Hong Kong light to meet CERNET and CSTNET.
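Provisioning a sustained rate with burst headroom, as in the 200 Mb/s-on-1-Gb commodity link above, is commonly enforced with a token bucket. A toy sketch (the rates and bucket depth here are illustrative, not the actual TATA configuration):

```python
class TokenBucket:
    """Token-bucket rate limiter: sustained `rate` bytes/sec,
    with bursts of up to `capacity` bytes above the sustained rate."""

    def __init__(self, rate, capacity):
        self.rate = rate          # sustained fill rate, bytes/sec
        self.capacity = capacity  # burst depth, bytes
        self.tokens = capacity    # start with a full bucket
        self.last = 0.0

    def allow(self, nbytes, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

# 200 Mb/s sustained = 25 MB/s; allow bursts of up to 5 MB.
bucket = TokenBucket(rate=25_000_000, capacity=5_000_000)
print(bucket.allow(5_000_000, now=0.0))  # True  – burst drains the bucket
print(bucket.allow(1_000_000, now=0.0))  # False – bucket is empty
print(bucket.allow(1_000_000, now=0.1))  # True  – 0.1 s refills 2.5 MB
```

In practice this shaping would live in the provider’s router (e.g. a tc-style token bucket filter), not in application code; the sketch just shows why short bursts above 200 Mb/s can pass while sustained overuse is dropped.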
Timelines: Sept 14 – TATA agreements signed and in place, equipment ordered; Nov 1 – equipment delivered to Singapore; Dec 15 – everything in place to begin testing; Jan 1 – fully operational.
Kitty – what’s worked and what hasn’t?
NYU has been testing, focusing on latency and user experience – acceptable, tolerable, or frustrating.
Common issues – network bandwidth: how much is there, and does it match the contracted bandwidth? Response times are highly variable; some apps aren’t tuned for latency. Latencies range from around 80 ms to over 300 ms depending on the sites. Focused on two forms of testing/monitoring – a latency simulator and actual testing from different locations. Implemented a tool to understand user experience for web-based applications.
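The “actual testing from different locations” idea can be sketched by timing TCP handshakes to estimate round-trip latency. This is a generic probe, not NYU’s tooling; host, port, and sample count are illustrative:

```python
import socket
import statistics
import time

def tcp_rtt_ms(host, port=443, samples=5):
    """Estimate round-trip latency (ms) by timing TCP handshakes.

    Median of several samples smooths out jitter; run the same probe
    from each site to compare the ~80 ms vs ~300 ms locations.
    """
    rtts = []
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=5):
            rtts.append((time.monotonic() - start) * 1000)
    return statistics.median(rtts)

# e.g. tcp_rtt_ms("example.org") from Singapore vs New York
```

A handshake-time probe only measures the network path; it won’t catch apps that are slow because they make many sequential round trips, which is where the latency simulator below comes in.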
Implemented a long-distance performance simulator to create profiles. Implemented a tool called TrueSight, a web-app performance tool – allows clear understanding of what happens in a web app. An appliance connected to the F5 span port captures all the traffic and analyzes it. Performance metrics of HTTP and HTTPS web traffic. Able to track usage over time, then drill down into specific sessions. Service leads get daily or weekly reports. Anonymized data is being moved to a data warehouse for trend analysis.
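The “track usage over time, then drill down” workflow amounts to percentile summaries over per-URL timings. A rough sketch of that reporting step – the URLs, timings, and slow-request threshold are made up, and this is not TrueSight’s API:

```python
import statistics

# Illustrative per-request response times (ms), as a capture
# appliance might export them, keyed by URL.
timings = {
    "/login":  [120, 140, 135, 900, 150],
    "/search": [300, 320, 2800, 310, 330],
}

def report(samples, slow_ms=1_000):
    """Summarize one URL's timings: median, p95, slow-request count."""
    ordered = sorted(samples)
    p95 = ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]
    return {
        "median": statistics.median(ordered),
        "p95": p95,
        "slow": sum(1 for t in ordered if t > slow_ms),
    }

for url, ts in timings.items():
    print(url, report(ts))
```

Aggregates like these feed the daily/weekly reports; drilling into the raw per-session timings behind an outlying p95 is the “specific sessions” step.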
Remediation – optimizing webpages, applications; tuning network; WAN acceleration
It’s hard for app builders and owners to think of applications this way. Network folks haven’t really understood how applications perform on networks. Most app builders assume their users are on the LAN, not across the world.
Aspire to do testing before going live, setting watch points in the end-user app tool to watch how performance is doing. Working with cloud vendors on how they test instances before selecting.