Computational simulations frequently save only a subset of their time slices, e.g., running for one thousand cycles but saving only fifty time slices. In this work we consider the problem of temporal upscaling, i.e., inferring visualizations at time slices that were not saved, as applied to ensemble simulations. We contribute a new algorithm, DATUM, which incorporates machine learning techniques, specifically dot-product attention and convolutional networks. To evaluate our approach, we conduct 1,327 experiments on 32x32-pixel renderings of two-dimensional data sets.
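The attention mechanism named above can be illustrated with a minimal, dependency-free sketch of scaled dot-product attention. This is a generic illustration of the mechanism, not the DATUM implementation; the tiny query/key/value matrices are invented for the example:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot_product_attention(queries, keys, values):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(keys[0])  # key dimension used for the 1/sqrt(d) scaling
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Output is the attention-weighted sum of the value vectors.
        out = [sum(w * v[j] for w, v in zip(weights, values))
               for j in range(len(values[0]))]
        outputs.append(out)
    return outputs

# Toy example: one query attending over two key/value pairs.
out = dot_product_attention([[1.0, 0.0]],
                            [[1.0, 0.0], [0.0, 1.0]],
                            [[1.0, 0.0], [0.0, 1.0]])
```

Because the values here form an identity matrix, the output equals the attention weights themselves, making it easy to see that the query attends more strongly to the key it is aligned with.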
Directed Research Project
Network alignment (NA) is the task of finding the optimal node correspondence between distinct networks (graphs). Previous works in this field have achieved varying degrees of success, but they rely on strong assumptions of topological and/or attribute consistency among the aligned networks. Meanwhile, Generative Adversarial Networks (GANs), generative models that have achieved remarkable results on continuous data such as images and audio, have recently been applied successfully to tasks with discrete domains, such as text generation.
We present performance results for a new hybridized finite difference method for the spatial discretization of partial differential equations. The method is based on the standard Summation-By-Parts (SBP) method with weak enforcement of boundary and interface conditions through Simultaneous-Approximation-Terms (SAT). We analyze the performance of the hybrid method applied to Poisson's equation, which arises in many steady-state physical problems, focusing on an application in Earth science.
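The full SBP-SAT machinery is beyond the scope of a short snippet, but the underlying idea of finite-difference discretization of Poisson's equation can be sketched with a standard second-order centered scheme in one dimension. This is a generic illustration, not the hybridized method from the abstract; the grid size and manufactured solution are chosen for the example:

```python
import math

def solve_poisson_1d(n):
    """Solve u'' = f on [0, 1] with u(0) = u(1) = 0 using second-order
    centered differences and the Thomas (tridiagonal) algorithm.
    Manufactured solution: u(x) = sin(pi x), so f(x) = -pi^2 sin(pi x)."""
    h = 1.0 / n
    xs = [i * h for i in range(n + 1)]
    f = [-(math.pi ** 2) * math.sin(math.pi * x) for x in xs]
    m = n - 1  # number of interior unknowns
    # Discrete equations: (u[i-1] - 2 u[i] + u[i+1]) / h^2 = f_i,
    # a tridiagonal system with diagonal -2 and off-diagonals 1.
    d = [h * h * f[i + 1] for i in range(m)]
    cp = [0.0] * m  # modified superdiagonal (forward sweep)
    dp = [0.0] * m  # modified right-hand side
    cp[0] = 1.0 / -2.0
    dp[0] = d[0] / -2.0
    for i in range(1, m):
        denom = -2.0 - cp[i - 1]
        cp[i] = 1.0 / denom
        dp[i] = (d[i] - dp[i - 1]) / denom
    u = [0.0] * m
    u[-1] = dp[-1]
    for i in range(m - 2, -1, -1):  # back substitution
        u[i] = dp[i] - cp[i] * u[i + 1]
    return xs, u

xs, u = solve_poisson_1d(50)
max_err = max(abs(ui - math.sin(math.pi * x))
              for x, ui in zip(xs[1:-1], u))
```

Here the boundary conditions are imposed strongly by eliminating the boundary unknowns; the SBP-SAT approach instead enforces them weakly through penalty terms, which is what makes hybridization across interfaces tractable.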
Researchers in the humanities are among the many who are now exploring the world of big data. They have begun to use programming languages like Python or R, together with their corresponding libraries, to manipulate large data sets and uncover new insights. One major hurdle that remains is incorporating visualizations of these data into their projects: visualization libraries can be difficult to learn, even for those with formal training.
The success of deep neural networks is clouded by two issues: (1) a vulnerability to adversarial examples and (2) a tendency to be uninterpretable. Interestingly, recent empirical evidence in the literature as well as theoretical analysis on simple models suggest these two seemingly disparate issues are actually connected. In particular, robust models tend to be more interpretable than non-robust models. In this paper, we provide evidence for the claim that this relationship is bidirectional.
Unplanned intensive care unit (ICU) readmissions and in-hospital mortality are two important metrics for evaluating the quality of hospital care. Identifying patients at higher risk of ICU readmission or mortality not only protects those patients from potential harm but also reduces the high costs of healthcare. In this work, we propose a new method that incorporates information from patients' Electronic Health Records (EHRs) and utilizes hyperbolic embeddings of a medical ontology (i.e., ICD-9) in the prediction model.
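Hyperbolic embeddings place concepts in a hyperbolic space such as the Poincaré ball, where distances grow rapidly toward the boundary, a geometry well suited to hierarchical structures like the ICD-9 ontology. A minimal sketch of the standard Poincaré-ball distance (a textbook formula, not the paper's full prediction model; the 2-D points are invented for the example):

```python
import math

def poincare_distance(u, v):
    """Distance between two points strictly inside the unit Poincare ball:
    d(u, v) = arcosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2) (1 - ||v||^2)))."""
    diff_sq = sum((a - b) ** 2 for a, b in zip(u, v))
    norm_u_sq = sum(a * a for a in u)
    norm_v_sq = sum(b * b for b in v)
    return math.acosh(1.0 + 2.0 * diff_sq
                      / ((1.0 - norm_u_sq) * (1.0 - norm_v_sq)))

# Points near the boundary are "far" even when Euclidean-close, which
# lets broad concepts sit near the origin and specific codes near the
# boundary of the ball.
d_center = poincare_distance((0.0, 0.0), (0.5, 0.0))    # equals ln(3)
d_boundary = poincare_distance((0.0, 0.0), (0.99, 0.0))  # much larger
```

The exponential growth of distance near the boundary is what allows a tree-like ontology to embed in few dimensions with low distortion, compared to Euclidean space.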
Modern network telemetry systems rely on programmable switches to perform the required operations within the data plane in order to scale with the rate of network traffic. These systems create a stream processing pipeline for all telemetry operations and statically map a subset of operations to switch resources. There are two inherent restrictions in this approach: first, the fraction of operations in the data plane decreases with the number and complexity of telemetry tasks.
While both Internet Service Providers (ISPs) and third-party Security Service Providers (SSPs) offer Distributed Denial-of-Service (DDoS) mitigation services through cloud-based scrubbing centers, it is often beneficial for ISPs to outsource part of the traffic scrubbing to SSPs to reduce economic cost and improve network performance. To explore this potential, we design an online auction mechanism that addresses the challenge of the switching cost incurred when different winning bids are used over time.
Existing event classification (EC) work primarily focuses on the traditional supervised learning setting, in which models are unable to extract event mentions of new/unseen event types. Few-shot learning has not been investigated in this area, although it would enable EC models to extend their operation to unobserved event types. To fill this gap, we investigate event classification under the few-shot learning setting and propose a novel training method for this problem that extensively exploits the support set during the training of a few-shot learning model.
Relation Extraction (RE) is one of the fundamental tasks in Information Extraction and Natural Language Processing. Dependency trees have been shown to be a very useful source of information for this task, and current deep learning models for relation extraction have mainly exploited this dependency information by guiding their computation along the structures of the dependency trees. One potential problem with this approach is that it might prevent the models from capturing important context information beyond the syntactic structures, causing poor cross-domain generalization.