Directed Research Project

Securing ARP From the Bottom Up

The basis for local network communication is the Address Resolution Protocol (ARP), which maps IP addresses to devices' MAC addresses. ARP resolution has long been vulnerable to spoofing and other attacks, and past proposals to secure the protocol have focused on key ownership rather than the identity of the machine itself. This paper introduces arpsec, a secure ARP protocol based on host attestations of their integrity state.
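The spoofing problem described above can be made concrete with a minimal sketch: a monitor that records the first IP-to-MAC binding it sees and flags later replies that contradict it. All names here are illustrative, and this history-based check is only a stand-in; arpsec itself binds replies to host attestations rather than to locally observed history.

```python
# Sketch: detect ARP-spoofing symptoms by tracking IP-to-MAC bindings.
# A reply whose MAC differs from the recorded binding is flagged.

class ArpMonitor:
    """Flags ARP replies whose MAC differs from the first observed binding."""

    def __init__(self):
        self.bindings = {}  # ip -> first MAC observed for that ip

    def observe(self, ip, mac):
        """Return True if the reply is consistent, False if it looks spoofed."""
        known = self.bindings.get(ip)
        if known is None:
            self.bindings[ip] = mac   # first sighting: record and accept
            return True
        return known == mac

monitor = ArpMonitor()
monitor.observe("10.0.0.5", "aa:bb:cc:dd:ee:01")            # first sighting
suspect = monitor.observe("10.0.0.5", "de:ad:be:ef:00:02")  # binding changed
```

The weakness of this heuristic (the first reply might already be the attacker's) is precisely why an attestation-based design grounds trust in the host rather than in observation order.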

Lazy Functions as Processes

CPS transforms have long been important tools in the study of programming languages, especially those related to the λ-calculus. Recently, it has been shown that encodings into process calculi, such as the π-calculus, can also serve as semantics, in the same way as CPS transforms. It is known that common encodings of the call-by-value and call-by-name λ-calculi into the π-calculus can be seen as CPS transforms composed with a naming transform that expresses sharing of values. We review this analysis and extend it to call-by-need.
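The CPS idea underlying these encodings can be sketched operationally: evaluation never returns directly but instead passes every result to an explicit continuation, making control flow first-class, much as a π-calculus encoding makes it a channel. Below is a call-by-value CPS-style evaluator over a tiny λ-calculus AST; the term shapes ("var", "lam", "app") are assumptions of this sketch, not notation from the paper.

```python
# Sketch: a call-by-value evaluator in continuation-passing style.
# Every result is handed to a continuation k instead of being returned up
# the call chain; applications thread the continuation explicitly.

def eval_cps(term, env, k):
    tag = term[0]
    if tag == "var":
        return k(env[term[1]])
    if tag == "lam":
        _, x, body = term
        # a value: a function awaiting an argument and a continuation
        return k(lambda arg, k2: eval_cps(body, {**env, x: arg}, k2))
    if tag == "app":
        _, f, a = term
        # evaluate function, then argument, then apply with the outer k
        return eval_cps(f, env, lambda fv:
               eval_cps(a, env, lambda av: fv(av, k)))
    raise ValueError("unknown term tag: %r" % tag)

# (λx. x) n, with n bound to 42 in the environment
ident = ("lam", "x", ("var", "x"))
result = eval_cps(("app", ident, ("var", "n")), {"n": 42}, lambda v: v)
```

Call-by-name and call-by-need variants differ only in when the argument's continuation is forced, which is the axis along which the paper extends the known analysis.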

Reuse It or Lose It: More Efficient Secure Computation Through Reuse of Encrypted Values

Two-party secure-function evaluation (SFE) has become significantly more feasible, even on resource-constrained devices, because of advances in server-aided computation systems. However, there are still bottlenecks, particularly in the input-validation stage of a computation. Moreover, SFE research has not yet devoted sufficient attention to the important problem of retaining state after a computation has been performed so that expensive processing does not have to be repeated if a similar computation is done again.
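The reuse idea can be sketched at a high level: cache expensively validated encrypted inputs keyed by a commitment, so a later computation over the same value skips re-validation. The hashing and validation below are plain stand-ins for the paper's cryptographic machinery, not its actual protocol.

```python
# Sketch: memoize the expensive input-validation step of an SFE pipeline.
# A commitment (here, just a SHA-256 digest) keys previously validated values.

import hashlib

class EncryptedValueCache:
    def __init__(self, validate):
        self.validate = validate    # the expensive validation step
        self.cache = {}             # commitment -> validated value
        self.validations = 0        # how many times validation actually ran

    def get(self, encrypted_value):
        key = hashlib.sha256(encrypted_value).hexdigest()
        if key not in self.cache:
            self.validations += 1
            self.cache[key] = self.validate(encrypted_value)
        return self.cache[key]

cache = EncryptedValueCache(validate=lambda v: v)  # identity stand-in
cache.get(b"ciphertext-1")
cache.get(b"ciphertext-1")   # cache hit: validation is not repeated
```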

Linux Provenance Modules: Secure Provenance Collection for the Linux Kernel

In spite of a growing interest in provenance-aware systems, mechanisms for automated provenance collection have failed to win acceptance in mainstream operating systems. This is due in part to a lack of consensus within disparate provenance development communities on a single general solution -- provenance collection mechanisms have been proposed at a variety of operational layers within host systems, collecting metadata at a variety of scopes and granularities.

Multi-Target Autotuning for Accelerators

Considerable computational resources are available on GPUs and other accelerator devices, the use of which can offer dramatic increases in performance over traditional CPUs. However, programming such devices can be difficult, especially given the considerable architectural differences between models of accelerators. The OpenCL language provides code portability, but not performance portability: code optimized to run well on one device will run poorly on others.
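The empirical core of an autotuner is a search loop: benchmark a kernel across a space of tuning parameters and keep the fastest configuration per device. The sketch below uses a stand-in cost function in place of actually timing an OpenCL kernel; the parameter names are illustrative only.

```python
# Sketch: exhaustive empirical autotuning over a cartesian parameter space.
# time_kernel would, in practice, compile and time the kernel on a device.

import itertools

def autotune(time_kernel, search_space):
    """Return (best_params, best_time) over the cartesian search space."""
    best_params, best_time = None, float("inf")
    keys = sorted(search_space)
    for values in itertools.product(*(search_space[k] for k in keys)):
        params = dict(zip(keys, values))
        t = time_kernel(params)
        if t < best_time:
            best_params, best_time = params, t
    return best_params, best_time

# Stand-in timer: pretend this device prefers tile_size 16 and deep unrolling.
def fake_timer(p):
    return abs(p["tile_size"] - 16) + 1.0 / p["unroll"]

space = {"tile_size": [4, 8, 16, 32], "unroll": [1, 2, 4]}
best, _ = autotune(fake_timer, space)
```

Running the same loop on a different device (a different `time_kernel`) generally selects different parameters, which is exactly why statically optimized OpenCL code fails to be performance-portable.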

Performance Optimizations of the Tensor Contraction Engine in NWChem

In order to understand the most fundamental properties of chemical reactions, it is necessary to simulate the electronic structure of molecular systems from first principles. These simulations are among the most computation- and memory-intensive of all scientific applications. The parallel performance of quantum chemistry software frameworks is therefore of utmost importance. This talk considers two large-scale optimizations of the NWChem computational chemistry package.
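The kernel at the heart of these workloads is the tensor contraction: summing products of multidimensional arrays over shared indices. A minimal two-index instance, C[i][j] = Σₖ A[i][k]·B[k][j], is sketched below in pure Python; the real Tensor Contraction Engine generates tiled, distributed versions of such loops for many-index tensors.

```python
# Sketch: a two-index tensor contraction (matrix multiply) in pure Python,
# the simplest instance of the operation NWChem's contraction engine emits.

def contract(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for k in range(inner):       # sum over the shared index k
                C[i][j] += A[i][k] * B[k][j]
    return C

C = contract([[1, 2], [3, 4]], [[5, 6], [7, 8]])
```

Loop order, tiling, and the distribution of the i/j/k iteration space across nodes are where the large-scale optimization effort goes.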

Ray Tracing Within A Data Parallel Framework

Current architectural trends in supercomputers show dramatic increases in the number of cores and in available computational power per die, but this power is increasingly difficult for programmers to harness effectively. High-level language constructs can simplify programming many-core devices, but this ease comes with a potential loss of processing power, particularly for cross-platform constructs. Recently, scientific visualization packages have embraced language constructs centered on data parallelism, with familiar operators such as map, reduce, gather, and scatter.
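The four primitives named above can be pinned down with a sequential sketch; real frameworks execute them in parallel across many cores, but the semantics are the same. The function names here are this sketch's own, not any particular package's API.

```python
# Sketch: sequential reference semantics for the data-parallel primitives.

from functools import reduce as _reduce

def dmap(f, xs):                       # apply f to every element
    return [f(x) for x in xs]

def dreduce(f, xs, init):              # combine all elements into one value
    return _reduce(f, xs, init)

def gather(xs, idx):                   # read xs at the given indices
    return [xs[i] for i in idx]

def scatter(xs, idx, n):               # write xs into positions idx of a new array
    out = [0] * n
    for x, i in zip(xs, idx):
        out[i] = x
    return out

squares = dmap(lambda x: x * x, [1, 2, 3, 4])
total = dreduce(lambda a, b: a + b, squares, 0)
picked = gather(squares, [3, 0])
```

Expressing a ray tracer in these operators is what lets a data-parallel framework retarget it across CPUs and accelerators without per-platform code.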

Evaluating the Efficacy of Wavelet Compression for Turbulent-Flow Data

We explore the ramifications of using wavelet compression on turbulent-flow data from scientific simulations. As upcoming I/O constraints may significantly hamper the ability of scientific simulations to write full-resolution data to disk, we feel this study enhances the understanding of exascale scientists with respect to potentially applying wavelets in situ. Our approach repeats existing analyses with wavelet-compressed data, using quantitative evaluations. The data sets we select are large, including one with a 4,096-cubed grid.
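The compression scheme under study follows a standard pattern: transform the data into wavelet coefficients, then discard the small detail coefficients. A minimal one-level Haar sketch on a 1-D signal is shown below; production studies use higher-order wavelets in 3-D, but the compress-by-discarding-coefficients idea is the same, and the threshold value here is arbitrary.

```python
# Sketch: one Haar level (pairwise averages/differences, orthonormal scaling
# omitted) plus thresholding of small detail coefficients.

def haar_forward(signal):
    pairs = list(zip(signal[0::2], signal[1::2]))
    averages = [(a + b) / 2 for a, b in pairs]
    details  = [(a - b) / 2 for a, b in pairs]
    return averages, details

def haar_inverse(averages, details):
    out = []
    for avg, det in zip(averages, details):
        out += [avg + det, avg - det]
    return out

def compress(signal, threshold):
    avgs, dets = haar_forward(signal)
    dets = [d if abs(d) >= threshold else 0.0 for d in dets]  # drop small details
    return haar_inverse(avgs, dets)

# Smooth region (4.0, 4.1) is flattened; the sharp feature (8.0, 2.0) survives.
recon = compress([4.0, 4.1, 8.0, 2.0], threshold=0.5)
```

The quantitative question the study asks is precisely how much such flattening perturbs downstream turbulence analyses relative to the I/O savings.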
