Colloquium

Measurement and Analysis of Internet Congestion

The relentless growth of Internet traffic and the increasing concentration of content among a few providers have led to capacity shortfalls, which in turn have produced high-profile disputes over who should pay for additional capacity at points of interconnection between content providers, transit providers, and access ISPs. These potentially contentious interactions among providers have implications for network stability and performance: until a dispute is resolved, the congested link remains an externality borne by all of its users. This situation has led to recent interest in technical, regulatory, and policy circles in techniques to better understand the nature and location of congestion on the Internet.

In this talk I will present recent work from our Measurement and ANalysis of Internet Congestion (MANIC) project. I will describe a measurement method and system we have developed to map and continuously monitor thousands of interconnection links between networks, with the goal of producing a “congestion heat map” of the Internet. I will present a case study of using our measurements to characterize congestion at African IXPs. Finally, I will briefly discuss our work on using TCP connection statistics to infer whether a TCP flow “self-induced” congestion (e.g., by filling up its access link) or was limited by an already congested link on the end-to-end path (e.g., a congested interconnection link).
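As a rough illustration of the flow-level inference mentioned above, the sketch below classifies a flow from summary statistics; the FlowStats fields, thresholds, and decision rule are illustrative assumptions for a minimal sketch, not the MANIC system's actual method.

    # A minimal sketch (not the MANIC system) of classifying a TCP flow's
    # congestion as "self-induced" or "external" from per-flow statistics.
    # All fields and thresholds below are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class FlowStats:
        mean_throughput_bps: float   # achieved goodput of the flow
        access_capacity_bps: float   # nominal capacity of the user's access link
        min_rtt_ms: float            # baseline RTT observed for the flow
        mean_rtt_ms: float           # average RTT while the flow was active
        retransmit_rate: float       # fraction of segments retransmitted

    def classify_congestion(s: FlowStats,
                            util_threshold: float = 0.9,
                            rtt_inflation: float = 1.5) -> str:
        """Label a flow's congestion episode with a simple decision rule."""
        saturates_access = s.mean_throughput_bps >= util_threshold * s.access_capacity_bps
        queueing_delay = s.mean_rtt_ms >= rtt_inflation * s.min_rtt_ms
        lossy = s.retransmit_rate > 0.01

        if saturates_access and queueing_delay:
            return "self-induced"      # flow filled its own access link
        if not saturates_access and (queueing_delay or lossy):
            return "external"          # limited by an already congested link
        return "not congestion-limited"

    print(classify_congestion(FlowStats(9.2e6, 10e6, 20.0, 55.0, 0.02)))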

Security and Privacy Grand Challenges for the Internet of Things

The growth of the Internet of Things (IoT) is driven by market pressures, and although security is receiving some attention, the security and privacy consequences of billions of such devices connecting to the Internet are difficult to foresee. The possibilities for unintended surveillance through lifestyle analysis, unauthorized access to information, and new attack vectors will continue to grow through 2020, by which time as many as 50 billion devices may be connected. This talk summarizes our recent papers on the kinds of vulnerabilities that can be expected to arise and presents a research agenda for mitigating the worst of their impacts. We aim to explain the potential dangers of IoT and to highlight the research opportunities it presents in security and privacy.

Privacy Preserving User Profiling Using Net2Vec

We present Net2Vec, a flexible, high-performance platform that allows the execution of deep learning algorithms within the communication network. Net2Vec is able to capture data from the network at more than 60 Gbps, transform it into meaningful tuples, and apply predictions over the tuples in real time. The platform can be used for purposes ranging from traffic classification to network performance analysis. Finally, we showcase the use of Net2Vec by implementing and testing a solution that profiles network users at line rate using traces from a real network. We show that deep learning outperforms the baseline method for this task in both accuracy and performance.
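As a rough sketch of the idea suggested by the name, the following example learns word2vec-style embeddings over users' visited domains and averages them into per-user profile vectors. This is an illustration under assumed data and hyperparameters, not the paper's actual pipeline or model; it requires numpy and gensim (version 4 or later).

    # Treat each user's sequence of visited domains as a "sentence", learn
    # embeddings, and average them into a per-user profile vector.
    # The click streams below are hypothetical.

    import numpy as np
    from gensim.models import Word2Vec

    sessions = [
        ["news.example", "sports.example", "news.example"],
        ["shop.example", "pay.example", "shop.example"],
        ["news.example", "weather.example"],
    ]

    model = Word2Vec(sentences=sessions, vector_size=32, window=3,
                     min_count=1, sg=1, epochs=50)

    def profile(session):
        """Average the embeddings of the domains a user visited."""
        return np.mean([model.wv[d] for d in session], axis=0)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Users with similar browsing behavior get similar profile vectors.
    print(cosine(profile(sessions[0]), profile(sessions[2])))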

Transitioning Crowd-Powered Systems to AI

Over the past few years, Dr. Bigham has been developing and deploying interactive crowd-powered systems that solve characteristic "hard" problems in computer science by combining human and machine computation. For instance, VizWiz answers visual questions for blind people in seconds, Legion drives robots in response to natural language commands, Chorus holds helpful general conversations with human partners, and Scribe robustly converts streaming speech to text in less than five seconds.

Interval Graph Completion and Polynomial-Time Preprocessing

This talk will start by arguing that the complexity class FPT can be used to capture the notion of polynomial-time preprocessing to reduce input size. This is followed by an FPT algorithm with runtime $O(k^{2k}n^{3}m)$ for the following NP-complete problem [GT35 in Garey & Johnson]: given an arbitrary graph G on n vertices and m edges, can we obtain an interval graph by adding at most k edges to G? The algorithm answers a question first posed by Kaplan, Shamir, and Tarjan in 1994.
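For intuition about the problem statement (not the talk's algorithm), the following brute-force sketch tests every set of at most k added edges against the classic characterization of interval graphs: the maximal cliques admit a linear order in which the cliques containing each vertex appear consecutively. Its running time is exponential in n, in contrast to the FPT bound above; it needs networkx and is only practical for tiny graphs.

    from itertools import combinations, permutations
    import networkx as nx

    def is_interval(G):
        """Check the consecutive-cliques characterization (tiny graphs only)."""
        cliques = [set(c) for c in nx.find_cliques(G)]
        for order in permutations(cliques):
            if all(_consecutive(order, v) for v in G):
                return True
        return False

    def _consecutive(order, v):
        idx = [i for i, c in enumerate(order) if v in c]
        return idx[-1] - idx[0] == len(idx) - 1

    def interval_completion(G, k):
        """Return a set of <= k edges whose addition makes G interval, or None."""
        non_edges = list(nx.non_edges(G))
        for t in range(k + 1):
            for added in combinations(non_edges, t):
                H = G.copy()
                H.add_edges_from(added)
                if is_interval(H):
                    return added
        return None

    C4 = nx.cycle_graph(4)             # a chordless 4-cycle is not interval
    print(interval_completion(C4, 1))  # one chord makes it interval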

Verification and Validation Techniques for Scientific Simulations

With the advent of high-fidelity simulations of complex physical phenomena made possible by modern high-performance computing platforms, verifying and validating a simulation has become a task whose complexity is comparable to that of the simulation itself. Traditional methods based on the L2-norm and on manual inspection (commonly known as the 'viewgraph norm') have reached the limits of their usefulness. In particular, naive quantitative methods fail to track domain-expert opinion.
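To make the baseline concrete, the following sketch computes a relative L2-norm difference between a simulated field and a reference field; the fields and grid are illustrative. It also hints at why the naive norm diverges from expert judgment: a small phase error produces a sizable norm even when an expert would likely call the two solutions equivalent.

    import numpy as np

    def relative_l2_error(simulated, reference):
        """|| simulated - reference ||_2 / || reference ||_2 over the grid."""
        return np.linalg.norm(simulated - reference) / np.linalg.norm(reference)

    x = np.linspace(0.0, 1.0, 1001)
    reference = np.sin(2 * np.pi * x)
    simulated = np.sin(2 * np.pi * (x - 0.05))   # small phase error

    # Prints roughly 0.31: a ~31% "error" for a solution a domain expert
    # would likely judge to be in good agreement with the reference.
    print(relative_l2_error(simulated, reference))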

Internet Topology Measurements for the OneLab Testbed

The European Commission's OneLab project centers on the PlanetLab testbed, which allows networking and distributed systems researchers to deploy experiments at a global scale with a degree of realism unmatched by other testbeds. OneLab extends, deepens, and federates PlanetLab: extending PlanetLab into environments, such as wireless, beyond the traditional wired Internet; deepening the ability of the system to monitor the underlying network; and creating a PlanetLab Europe that is federated with PlanetLab in the United States.

Experimental Mathematics and High-Performance Computing

The field of high-performance computing has been very successful in enabling an ever-growing number of important scientific applications to be performed on high-end computer systems. Hardware advances, algorithm improvements, parallelization techniques, performance tools and visualization have all played a part. Recently this technology has been applied in novel ways to research problems in mathematics and mathematical physics.

Representing and Reasoning with Modular Ontologies

Semantic Web applications call for the knowledge base counterparts of hyperlinked documents, so that independently created ontology modules can be flexibly linked and reused to meet the needs of specific applications, users, or contexts. We argue that ontology languages and tools should support selective knowledge sharing and collaborative ontology construction, accommodate the reconciliation of multiple points of view about the world, and provide distributed reasoning services based on well-defined semantics and a sound proof theory.
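As a minimal sketch of such module linking, the following example uses rdflib to let one module reuse a class from an independently authored module by IRI, and lets an application merge only the modules it needs. The namespaces and class names are hypothetical, and the merge shown is a plain triple union rather than a full distributed reasoning service.

    from rdflib import Graph, Namespace, RDF, RDFS
    from rdflib.namespace import OWL

    A = Namespace("http://example.org/ontologies/vehicles#")
    B = Namespace("http://example.org/ontologies/logistics#")

    module_a = Graph()                   # independently authored module
    module_a.add((A.Truck, RDF.type, OWL.Class))
    module_a.add((A.Truck, RDFS.subClassOf, A.Vehicle))

    module_b = Graph()                   # reuses module A's terms by IRI
    module_b.add((B.DeliveryTruck, RDF.type, OWL.Class))
    module_b.add((B.DeliveryTruck, RDFS.subClassOf, A.Truck))

    # An application selectively imports just the modules it needs.
    app = Graph()
    for module in (module_a, module_b):
        for triple in module:
            app.add(triple)

    # Naive transitive subclass query across module boundaries.
    print(list(app.transitive_objects(B.DeliveryTruck, RDFS.subClassOf)))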
