Area Exam

Scientific Visualization on Supercomputers: A Survey

Supercomputers increase both computing power and available memory, allowing scientists to generate high-resolution, physics-based simulations. These simulations produce massive amounts of data, with meshes that can contain trillions of cells. Scientific visualization is an essential method for understanding this simulation data, and visualization algorithms are usually run on supercomputers to leverage their additional memory and computational power.

Understanding the Performance of HPC Applications

High performance computing is an important asset to scientific research, enabling the study of phenomena, such as nuclear physics or climate change, that are difficult or impossible to study in traditional experiments, and allowing researchers to utilize large amounts of data from experiments such as the Large Hadron Collider. No matter the use of HPC, the need for performance is always present; however, the fast-changing nature of computer systems means that software must be continually updated to run efficiently on the newest machines. In this paper, we discuss methods for understanding the performance of HPC applications.

The Applications of Machine Learning Techniques in Networking

The growing complexity and scale of Internet systems, coupled with the improved capabilities of machine learning (ML) methods in recent years, have motivated researchers and practitioners to increasingly rely on these methods for the data-driven design and analysis of a wide range of problems in network systems, such as detecting network attacks, performing resource management, and improving quality of service (QoS).

Verification Techniques for Low-Level Programs

We explore the application of highly expressive logical and automated reasoning techniques to the analysis of computer programs. We begin with an introduction to formal methods by describing different approaches and the strength of the properties they can guarantee. These approaches include static analyzers, SMT solvers, deductive program verifiers, and proof assistants. We then explore applications of formal methods to the analysis of intermediate representations, verification of floating-point arithmetic, and fine-grained parallelism such as vectorization.

Methods for Accelerating Machine Learning in High Performance Computing

Driven by massive datasets and advances in the programmability of accelerator architectures, such as GPUs and FPGAs, machine learning (ML) has delivered remarkable, human-like accuracy in tasks such as image recognition, machine translation, and speech processing. Although ML has improved accuracy on selected human tasks, the time to train models can range from hours to weeks. Thus, accelerating model training is an important research challenge facing the ML field.

Program Performance Modeling Techniques for High Performance Computing

The performance model of an application can provide insight into its runtime behavior on particular hardware. Such information can be analyzed by both developers and researchers for performance tuning. In this talk, we explore different performance modeling techniques and categorize them into three groups: analytical modeling, empirical modeling, and simulation-based modeling.
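As an illustration of the first category, analytical modeling, a minimal sketch of a roofline-style model is shown below. The hardware parameters and kernel counts are hypothetical values chosen for illustration, not measurements from any particular machine.

```python
# Minimal sketch of an analytical (roofline-style) performance model.
# The hardware parameters below are hypothetical, for illustration only.
PEAK_FLOPS = 1.0e12   # assumed peak floating-point rate (FLOP/s)
PEAK_BW = 100.0e9     # assumed peak memory bandwidth (bytes/s)

def predicted_runtime(flops, bytes_moved):
    """Predict runtime as the max of compute-bound and memory-bound time."""
    compute_time = flops / PEAK_FLOPS
    memory_time = bytes_moved / PEAK_BW
    return max(compute_time, memory_time)

# A kernel performing 1e9 FLOPs while moving 8e9 bytes is predicted to be
# memory-bound on this hypothetical machine (0.08 s vs. 0.001 s).
runtime = predicted_runtime(1.0e9, 8.0e9)
```

Empirical models would instead fit such parameters from measured runs, and simulation-based models would step through the program's execution on a modeled architecture.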

Automated Statistical Methods for Parallel Performance Analysis

This paper explores the use of process automation to guide a parallel performance analyst through the knowledge discovery process, while providing the ability to customize the analysis process. For example, the input data can be evaluated to determine the distribution of the data, the standard deviation, and/or the prevalence of outliers. Different analytical methods assume or require particular distributions or other data characteristics, and process automation would help prevent the application of inappropriate analytical methods.
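The kind of automated gating described above can be sketched with standard-library tools. The IQR-based outlier rule and the 5% prevalence threshold below are common heuristics chosen for illustration, not the paper's specific criteria.

```python
import statistics

def characterize(samples):
    """Summarize a performance metric's distribution to guide method selection.
    Uses the 1.5*IQR rule, a common outlier heuristic, for illustration."""
    q1, _, q3 = statistics.quantiles(samples, n=4)
    iqr = q3 - q1
    outliers = [x for x in samples
                if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]
    return {
        "mean": statistics.mean(samples),
        "stdev": statistics.stdev(samples),
        "outlier_fraction": len(outliers) / len(samples),
    }

def choose_method(summary, threshold=0.05):
    """Fall back to a robust (median-based) statistic when outliers are
    prevalent; the threshold is an assumed tuning parameter."""
    if summary["outlier_fraction"] > threshold:
        return "median-based"
    return "mean-based"

# Ten identical timings plus one straggler: the straggler is flagged,
# so the automation steers the analyst toward a robust statistic.
summary = characterize([10.0] * 10 + [100.0])
method = choose_method(summary)
```

In a full pipeline, each analytical method would declare the data characteristics it requires, and checks like these would run before the method is applied.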

