Flow visualization is a vital component in the workflow of studying computational fluid dynamics simulations. Streamlines, a class of integral curves, are among the most commonly used techniques for visualizing flow fields, and selecting a good set of streamlines is a recognized challenge. Identifying a representative set of streamlines that captures the flow behavior can be achieved either by strategically placing seed points or by selecting a subset of precomputed streamlines that exhibit desired properties.
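As a minimal sketch of the streamline idea above: a streamline is an integral curve of the velocity field, traced from a seed point by numerical integration. The analytic field, step size, and step count below are illustrative assumptions; a real simulation would interpolate velocities from a mesh.

```python
def velocity(x, y):
    # Illustrative analytic 2D field (a simple rotational flow);
    # a hypothetical stand-in for simulation data.
    return -y, x

def trace_streamline(seed, h=0.01, steps=500):
    """Advect a seed point with fixed-step fourth-order Runge-Kutta,
    returning the streamline as a polyline of (x, y) points."""
    x, y = seed
    points = [(x, y)]
    for _ in range(steps):
        k1x, k1y = velocity(x, y)
        k2x, k2y = velocity(x + 0.5 * h * k1x, y + 0.5 * h * k1y)
        k3x, k3y = velocity(x + 0.5 * h * k2x, y + 0.5 * h * k2y)
        k4x, k4y = velocity(x + h * k3x, y + h * k3y)
        x += h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        points.append((x, y))
    return points

curve = trace_streamline((1.0, 0.0))
```

Seed placement then amounts to choosing the starting points passed to `trace_streamline`; for this rotational field, each seed traces a circle around the origin.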
Supercomputers continue to grow in both computing power and available memory, allowing scientists to generate high-resolution, physics-based simulations. Most of these simulations produce a massive amount of data, resulting in potentially trillions of cells. Scientific visualization is an essential method for understanding this simulation data, and visualization algorithms are usually run on supercomputers to leverage their additional memory and computational power.
High performance computing is an important asset to scientific research, enabling the study of phenomena, such as nuclear physics or climate change, that are difficult or impossible to study in traditional experiments, and allowing researchers to utilize large amounts of data from experiments such as the Large Hadron Collider. Whatever the use of HPC, the need for performance is always present; however, the fast-changing nature of computer systems means that software must be continually updated to run efficiently on the newest machines. In this paper, we discuss methods for keeping software performant on these evolving systems.
Given the importance of the Internet, it is crucial to assess its key characteristics (e.g., performance, stability, and resiliency) through measurement as it expands and evolves over time. Measuring different characteristics of the Internet is challenging mainly due to its scale and heterogeneity.
The growing complexity and scale of Internet systems, coupled with the improved capabilities of Machine Learning (ML) methods in recent years, have motivated researchers and practitioners to increasingly rely on these methods for the data-driven design and analysis of a wide range of problems in network systems, such as detecting network attacks, performing resource management, or improving quality of service (QoS).
We explore the application of highly expressive logical and automated reasoning techniques to the analysis of computer programs. We begin with an introduction to formal methods, describing different approaches and the strength of the properties they can guarantee. These approaches include static analyzers, SMT solvers, deductive program verifiers, and proof assistants. We then explore applications of formal methods to the analysis of intermediate representations, the verification of floating-point arithmetic, and fine-grained parallelism such as vectorization.
Driven by massive datasets and by advances in the programmability of accelerator architectures, such as GPUs and FPGAs, machine learning (ML) has delivered remarkable, human-like accuracy in tasks such as image recognition, machine translation, and speech processing. Although ML has improved accuracy in selected human tasks, the time to train models can range from hours to weeks. Thus, accelerating model training is an important research challenge facing the ML field.
The performance model of an application can provide insight into its runtime behavior on particular hardware. Such information can be analyzed by both developers and researchers for performance tuning. In this talk, we explore different performance modeling techniques and categorize them into three groups: analytical modeling, empirical modeling, and simulation-based modeling.
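As a minimal sketch of the empirical-modeling category above: fit a power-law model T(n) = c * n^k to measured runtimes by linear least squares in log-log space. The (problem size, runtime) pairs below are synthetic, hypothetical measurements, not data from any real application.

```python
import math

# Hypothetical measured (problem size, runtime in seconds) pairs.
samples = [(1000, 0.012), (2000, 0.049), (4000, 0.198), (8000, 0.801)]

# Linearize T(n) = c * n^k: log T = log c + k * log n, then fit a line.
logs = [(math.log(n), math.log(t)) for n, t in samples]
mean_x = sum(x for x, _ in logs) / len(logs)
mean_y = sum(y for _, y in logs) / len(logs)
k = (sum((x - mean_x) * (y - mean_y) for x, y in logs)
     / sum((x - mean_x) ** 2 for x, _ in logs))
c = math.exp(mean_y - k * mean_x)

def predict(n):
    """Predicted runtime from the fitted empirical model T(n) = c * n^k."""
    return c * n ** k
```

For these synthetic timings the fitted exponent comes out close to 2, i.e., the model recovers roughly quadratic scaling; analytical modeling would instead derive such an exponent from the algorithm, and simulation-based modeling would replay the application on a modeled machine.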
This study surveys the state-of-the-art research on data-parallel hashing techniques for emerging massively parallel, many-core GPU architectures.
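As a minimal serial sketch of one scheme common in GPU hash tables, open addressing with linear probing: each insert probes forward from the hashed slot until it claims an empty entry. This Python stand-in is a hypothetical illustration; on a GPU, thousands of threads would run the probe loop concurrently and claim a slot with an atomic compare-and-swap rather than a plain write.

```python
EMPTY = None

def make_table(capacity):
    """Flat open-addressing table, as GPU hash tables typically use."""
    return [EMPTY] * capacity

def insert(table, key, value):
    """Linear-probing insert; returns False if the table is full."""
    cap = len(table)
    start = hash(key) % cap
    for i in range(cap):
        idx = (start + i) % cap
        # On a GPU this check-and-claim would be a single atomicCAS.
        if table[idx] is EMPTY or table[idx][0] == key:
            table[idx] = (key, value)
            return True
    return False

def lookup(table, key):
    """Probe the same sequence as insert; None means absent."""
    cap = len(table)
    start = hash(key) % cap
    for i in range(cap):
        idx = (start + i) % cap
        if table[idx] is EMPTY:
            return None
        if table[idx][0] == key:
            return table[idx][1]
    return None

table = make_table(8)
insert(table, "alpha", 1)
insert(table, "beta", 2)
```

Linear probing's contiguous probe sequence is one reason it maps well to GPUs: consecutive probes touch adjacent memory, which coalesces across threads in a warp.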