Dissertation Defense

Systemwide Power Management Targeting Early Hardware Over-provisioned High Performance Computers

High Performance Computing (HPC) is an important enabling technology for big science, supporting simulation of phenomena and exploration of data sets that would be intractable to complete manually. The computational power of an HPC system is given by the number of floating point operations it completes per second (FLOPS). Current-generation HPC systems are capable of 10^15 FLOPS, and over the past seven years there has been significant effort to design a practical HPC system capable of achieving 10^18 FLOPS.
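As a rough illustration of that scale, one way an exascale machine could be composed is sketched below; the node count and per-node rate are assumptions for the arithmetic, not figures from the talk:

    \[
      10^{18}\,\mathrm{FLOPS}
        \;\approx\;
        \underbrace{10^{4}}_{\text{nodes}}
        \times
        \underbrace{10^{14}\,\mathrm{FLOPS}}_{\text{per accelerated node}}
    \]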

Insightful Performance Analysis of Many-Task Runtimes through Tool-Runtime Integration

Future supercomputers will require application developers to expose far more parallelism than current applications do. To help developers structure their applications accordingly, new programming models and libraries known as many-task runtimes are emerging, which allow the expression of orders of magnitude more parallelism than existing models.
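As a minimal sketch of the style of parallelism such runtimes expose, here is a small Python example using the standard concurrent.futures module as a stand-in; the specific runtimes studied in the talk are not named in this excerpt, so this illustrates only the programming style:

    # Minimal sketch of expressing fine-grained task parallelism,
    # using Python's standard concurrent.futures as a stand-in for a
    # many-task runtime; the runtimes studied in the talk are not
    # named in this excerpt, so this only illustrates the style.
    from concurrent.futures import ThreadPoolExecutor

    def stencil_point(i, data):
        # One tiny unit of work; a many-task runtime schedules millions
        # of such tasks and tracks the dependencies between them.
        left = data[i - 1] if i > 0 else data[i]
        right = data[i + 1] if i < len(data) - 1 else data[i]
        return (left + data[i] + right) / 3.0

    data = [float(i) for i in range(16)]
    with ThreadPoolExecutor() as pool:
        # Each grid point becomes an independent task instead of one bulk loop.
        smoothed = list(pool.map(stencil_point, range(len(data)), [data] * len(data)))
    print(smoothed)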

Robust Large Margin Approaches for Machine Learning in Adversarial Settings

Machine learning algorithms are designed to learn from data and to use data to make predictions and perform analyses. Many agencies now use machine learning algorithms to provide services and to perform tasks that were once done by humans, including making high-stakes decisions. Reaching the right decision depends strongly on the correctness of the input data, which gives criminals a tempting incentive to deceive machine learning algorithms by manipulating the data that is fed to them.
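A minimal sketch of this kind of manipulation, with hypothetical weights and input rather than anything from the talk, shows why margin matters: the distance from an input to a linear classifier's boundary is exactly the smallest perturbation that flips its label, so large-margin models force the adversary to make larger, more detectable changes:

    # Minimal sketch (hypothetical weights and input) of deceiving a
    # linear classifier by nudging the input across its decision
    # boundary; the distance |w.x + b| / ||w|| is the classifier's
    # margin at x, so larger margins force larger manipulations.
    import math

    w, b = [2.0, -1.0], 0.5      # assumed trained linear model
    x = [1.0, 0.5]               # clean input, classified as positive

    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm = math.sqrt(sum(wi * wi for wi in w))
    margin = abs(score) / norm   # distance from x to the boundary

    # Smallest label-flipping perturbation: step just past the boundary
    # along the (negated) weight direction.
    eps = 1e-3
    step = score / norm + math.copysign(eps, score)
    x_adv = [xi - step * wi / norm for wi, xi in zip(w, x)]
    adv_score = sum(wi * xi for wi, xi in zip(w, x_adv)) + b

    print(score, margin, adv_score)  # 2.0, ~0.894, slightly negative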

Sequent Calculus: A Logic and a Language for Computation and Duality

Truth and falsehood, questions and answers, construction and deconstruction; as Alcmaeon (510 BC) once said, most things come in dual pairs. Duality is a guiding force, a mirror that reveals the new from the old via opposition. This idea appears pervasively in logic, where duality is expressed by negation that inverts "true" with "false" and "and" with "or." However, even though the theory of programming languages is closely connected to logic, this kind of strong duality is not so apparent in the practice of programming.
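In propositional logic, the duality described here is captured by the De Morgan laws, under which negation exchanges the connectives and the truth constants:

    \[
      \neg(A \land B) \equiv \neg A \lor \neg B
      \qquad
      \neg(A \lor B) \equiv \neg A \land \neg B
      \qquad
      \neg \top \equiv \bot
      \qquad
      \neg \bot \equiv \top
    \]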

Learning Tractable Graphical Models

Probabilistic graphical models have been successfully applied to a wide variety of fields such as computational biology, computer vision, natural language processing, robotics, and many more. However, in probabilistic models for many real-world domains, exact inference is intractable, and approximate inference may be inaccurate. In this talk, we discuss how we can learn tractable models such as arithmetic circuits (ACs) and sum-product networks (SPNs), in which marginal and conditional queries can be answered efficiently.
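The following minimal sketch shows the kind of efficient query evaluation SPNs permit; the structure and weights are illustrative assumptions, not a model from the talk. A marginal query is answered in a single bottom-up pass by letting the indicator leaves of any summed-out variable evaluate to 1:

    # Minimal sketch of a sum-product network (SPN) over two binary
    # variables X1 and X2; structure and weights are assumptions.
    class Leaf:
        def __init__(self, var, value):
            self.var, self.value = var, value

        def eval(self, evidence):
            # A variable absent from the evidence is marginalized out.
            if self.var not in evidence:
                return 1.0
            return 1.0 if evidence[self.var] == self.value else 0.0

    class Product:
        def __init__(self, children):
            self.children = children

        def eval(self, evidence):
            result = 1.0
            for child in self.children:
                result *= child.eval(evidence)
            return result

    class Sum:
        def __init__(self, weighted_children):
            self.weighted_children = weighted_children  # list of (weight, node)

        def eval(self, evidence):
            return sum(w * c.eval(evidence) for w, c in self.weighted_children)

    # P(X1, X2) as a weighted mixture of two factored components.
    spn = Sum([
        (0.6, Product([Leaf("X1", 1), Leaf("X2", 1)])),
        (0.4, Product([Leaf("X1", 0), Leaf("X2", 0)])),
    ])

    print(spn.eval({"X1": 1, "X2": 1}))  # joint P(X1=1, X2=1) = 0.6
    print(spn.eval({"X2": 1}))           # marginal P(X2=1)    = 0.6

The same bottom-up pass answers conditional queries as a ratio of two such evaluations, which is what makes these models tractable in contrast to general graphical models.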

High Performance Computational Chemistry: Bridging Quantum Mechanics, Molecular Dynamics and Coarse-Grained Models

The past several decades have witnessed tremendous strides in the capabilities of computational chemistry simulations, driven in large part by the extensive parallelism offered by powerful computer clusters and scalable programming methods in high performance computing (HPC). However, such massively parallel simulations increasingly require more complicated software to achieve good performance across the vastly diverse ecosystem of modern heterogeneous computer systems.

Performance Modeling of In Situ Rendering

With the push to exascale, in situ visualization and analysis will play an increasingly important role in high performance computing. Tightly coupling in situ visualization with simulations constrains resources for both, and these constraints force a complex balance of trade-offs. A performance model that provides an a priori answer for the cost of using an in situ approach for a given task would assist in managing the trade-offs between simulation and visualization resources.
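To make the idea concrete, here is a minimal sketch of what such an a priori estimate might look like; the terms and coefficients are hypothetical placeholders, not the model developed in the dissertation:

    # Minimal sketch of the kind of a priori cost model the talk
    # motivates; the terms and coefficients here are hypothetical.
    # Cost is estimated before running, from problem size and image
    # resolution, so resources can be balanced in advance.
    def in_situ_render_cost(n_cells, n_pixels,
                            coef_cells=2e-8, coef_pixels=5e-9, overhead=0.01):
        """Estimated seconds per frame for tightly coupled rendering."""
        return overhead + coef_cells * n_cells + coef_pixels * n_pixels

    sim_step_seconds = 4.0
    render_seconds = in_situ_render_cost(n_cells=50_000_000,
                                         n_pixels=1920 * 1080)
    print(f"render: {render_seconds:.3f}s "
          f"({100 * render_seconds / sim_step_seconds:.1f}% of a step)")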

Measurement-Based Characterization of Large-Scale Networked Systems

As the Internet has grown to represent arguably the largest "engineered" system on earth, network researchers have shown increasing interest in measuring this large-scale networked system. In the process, structures such as the physical Internet or the many different (logical) overlay networks that this physical infrastructure enables (e.g., WWW, Twitter, Facebook) have been the focus of numerous studies. Many of these studies have been fueled by the ease of access to "big data".
