Probabilistic graphical models have been successfully applied to a wide variety of fields, such as computational biology and computer vision. However, for large-scale problems represented with unrestricted, complex models, exact probabilistic inference is often intractable, which has motivated much research into addressing this problem. One direction is to learn tractable models that offer efficient inference. Tractable models benefit from graph-theoretic properties, such as bounded treewidth, or from other structural properties such as context-specific independence and determinism.
Artificial intelligence and machine learning research is dedicated to building intelligent artifacts that can imitate, or even transcend, the cognitive abilities of human beings. To emulate human cognitive abilities with intelligent artifacts, one must first render machines capable of capturing critical aspects of sensory data with adequate data representations, and of performing reasoning and inference with formal knowledge representations.
Routing is a key component of building an interconnected network architecture. There are intra-domain and inter-domain routing protocols. The de facto inter-domain routing protocol, the Border Gateway Protocol (BGP), has experienced increasingly frequent anomalous incidents, such as IP prefix hijackings, route leaks, and large-scale disruptive routing events. Recent intra-domain routing schemes, namely software-defined networking (SDN) and the OpenFlow protocol, also show numerous security weaknesses.
There are three primary theoretical models used to simulate atoms and molecules with computers. In decreasing order of computational cost, these models are quantum mechanics (QM), molecular mechanics (MM), and coarse-grained (CG) models. The computational chemistry literature is rife with examples that use high-performance computing (HPC) to scale these models to large, relevant problems at each individual scale. However, the grand challenge lies in effectively bridging these scales, both spatially and temporally, to study richer chemical models that go beyond single-scale physics.
Compilers use many different intermediate languages (ILs) for optimizing code. The best IL depends on the source and target languages, the compilation phase, and which optimizations are to be performed. In this talk, we will explore several common ILs, including static single-assignment (SSA) form and continuation-passing style (CPS), and present a new language being implemented for the Glasgow Haskell Compiler (GHC) that is both lightweight and powerful.
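To make the idea of SSA concrete, here is a toy sketch (not GHC's actual intermediate language, and handling straight-line code only, with no phi nodes) of the renaming step of SSA conversion: each assignment to a variable introduces a fresh subscripted version, and every use refers to the latest version.

```python
# Toy SSA renaming for straight-line code. A statement is a pair
# (target, expr), where expr is a list of tokens. Each assignment
# creates a new version of its target; uses are renamed to the most
# recent version. Control flow (and hence phi nodes) is omitted.

def to_ssa(stmts):
    version = {}
    out = []
    for target, expr in stmts:
        # Rename uses to the latest version; free variables are left alone.
        renamed = [f"{v}{version[v]}" if v in version else v for v in expr]
        version[target] = version.get(target, 0) + 1
        out.append((f"{target}{version[target]}", renamed))
    return out

# x = a + b; x = x + c; y = x   becomes   x1 = a + b; x2 = x1 + c; y1 = x2
prog = [("x", ["a", "+", "b"]), ("x", ["x", "+", "c"]), ("y", ["x"])]
print(to_ssa(prog))
```

The key property this illustrates is that after renaming, every variable has exactly one definition, which is what makes SSA-based optimizations such as constant propagation simple to state.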
We examine techniques for scalable engineering of specifications. Refinement builds more detailed specifications out of more abstract ones. Composition combines component specifications to specify complete systems. We study how these ideas manifest in various settings, including temporal logic, and also consider a framework for thinking about refinement and composition based on category theory.
A diverse set of goals drives rendering algorithm research. The gaming industry, hardware manufacturers, and the film industry all contribute to rendering research. The field of scientific visualization uses methods from both, as well as developing its own. However, a portion of these algorithms may no longer be applicable as we move to exascale. The I/O gap is forcing scientific visualization toward in situ systems, and the in situ environment at scale is one of limited resources (i.e., time and memory).
Parallel programming is difficult. Switching from sequential to parallel programming introduces entire new classes of errors for the programmer to make, such as deadlock and race conditions, which are difficult to debug and complicate testing and correctness proofs. Yet there are entire classes of programs with computational demands so great that sequential solutions are infeasible. We do parallel programming because we care about performance.
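As a minimal illustration of one of these error classes, the sketch below shows the classic deadlock scenario, two threads transferring between two lock-protected accounts, and the standard fix: acquiring the locks in a fixed global order so no cycle of waiting can form. The `Account` class and ordering-by-`id` convention here are illustrative choices, not drawn from any particular library.

```python
import threading

# Two accounts, each protected by its own lock. If one thread locked
# a-then-b while another locked b-then-a, the two could deadlock.
# Acquiring locks in a fixed global order (here, by object id) makes a
# waiting cycle impossible.

class Account:
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

def transfer(a, b, amount):
    first, second = (a, b) if id(a) < id(b) else (b, a)
    with first.lock:
        with second.lock:
            a.balance -= amount
            b.balance += amount

x, y = Account(100), Account(100)
threads = [threading.Thread(target=transfer, args=(x, y, 1)) for _ in range(50)]
threads += [threading.Thread(target=transfer, args=(y, x, 1)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(x.balance + y.balance)  # money is conserved: 200
```

Note that removing the ordering (each thread locking its "from" account first) would reintroduce the potential deadlock without changing any single-threaded test, which is exactly why such bugs are hard to find.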
Large-scale computing platforms are increasingly important to society. Operating these systems can be extremely expensive for the owning organization, and the spending must be justified in terms of the organization's objectives. This presentation looks at three CS topic areas that are important for building future automated system management solutions to aid in meeting organizational objectives within imposed constraints. The discussion is motivated by power constraints for future HPC systems.
Many real-world problems are represented using graphs. For example, given a graph of a chemical compound, we want to determine whether or not it causes a gene mutation. As another example, given the graph of a social network, we want to predict a potential friendship that does not yet exist but is likely to appear soon. Many of these questions can be answered with machine learning methods if we have vector representations of the inputs, which are either graphs or vertices, depending on the problem.
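To make "vector representation of a graph" concrete, here is a deliberately naive sketch: mapping a graph to a fixed-length vector via a histogram of vertex degrees. Real methods (graph kernels, node2vec, graph neural networks) are far richer; this only illustrates the basic idea of turning a variable-size graph into a vector a standard classifier can consume. The function name and edge-list format are illustrative assumptions.

```python
# Map a graph (given as an undirected edge list) to a fixed-length
# vector: hist[d] counts the vertices of degree d, with degrees above
# max_degree clamped into the last bucket so the vector length is fixed
# regardless of graph size.

def degree_histogram(edges, num_vertices, max_degree):
    degree = [0] * num_vertices
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    hist = [0] * (max_degree + 1)
    for d in degree:
        hist[min(d, max_degree)] += 1
    return hist

# A triangle on vertices 0, 1, 2 with a pendant vertex 3 attached to 0.
edges = [(0, 1), (1, 2), (0, 2), (0, 3)]
print(degree_histogram(edges, num_vertices=4, max_degree=3))  # [0, 1, 2, 1]
```

Two graphs of different sizes now yield vectors of the same length, which is the property that lets off-the-shelf learning methods be applied; the cost is that this particular representation discards almost all structural information.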