Question Answering (QA) requires understanding a natural language query together with relevant information content in order to produce an answer. In this survey, we focus on question answering methods based on neural network frameworks, which are the state of the art for many QA datasets. The crux of a neural network model lies in representing both the question and the answer, along with auxiliary knowledge, as continuous real-valued vectors, called embeddings.
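The idea of scoring answers by embedding similarity can be sketched as follows. This is a toy illustration, not any specific model from the literature: the `embed` function below is a hypothetical stand-in (a deterministic character-hashing trick) for a learned neural encoder, used only to make the sketch runnable.

```python
import math

def embed(text, dim=8):
    # Hypothetical stand-in for a learned encoder: a deterministic
    # bag-of-characters hash embedding mapped to a unit vector.
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[(ord(ch) + i) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(u, v):
    # Inputs are unit-normalized, so the dot product is the cosine similarity.
    return sum(a * b for a, b in zip(u, v))

def rank_answers(question, candidates):
    # Embed question and candidates, rank candidates by similarity.
    q = embed(question)
    return sorted(candidates, key=lambda a: cosine(q, embed(a)), reverse=True)
```

In a real neural QA system, `embed` would be a trained network (e.g. a recurrent or transformer encoder) and the scoring function itself may be learned; only the overall shape of the pipeline is shown here.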
Scientific visualization for exascale computing is very likely to require in situ processing. If current trends continue, traditional simulation checkpointing and post hoc visualization will be unsustainable on future systems because of the growing gap between I/O bandwidth and FLOPS. As a result, the majority of simulation data may be lost if in situ visualization techniques are not deployed.
The classic paradigm for scientific visualization and data analysis is post hoc: simulation codes write results to the file system, and visualization routines later read them back for processing. In this paradigm, file I/O is an increasing bottleneck, in terms of both transfer rates and storage capacity. Worse, the I/O bottleneck also jeopardizes the turnaround times of visualization and data analysis routines, and fast turnaround is a precondition for successful exploration.
Since the early days of electronic computing, Monte Carlo neutron transport has been a fundamental approach to solving nuclear physics problems. Over the past few decades, Monte Carlo transport applications have seen a significant increase in their capabilities and a reduction in time to solution. Research efforts have focused on areas such as scalability with distributed-memory parallelism, load balancing with domain decomposition, and variance reduction techniques. In the last few years, however, the landscape has been changing.
Linear systems are a form of representation for problems in a variety of domains, including but not limited to statistics, thermodynamics, electric circuits, quantum mechanics, the nuclear power industry, the petroleum industry, robotics, computational fluid dynamics, numerical optimization, and PDE-based simulations. Because linear systems are widespread across different areas of research, solving them plays a critical role for researchers and scientists in these fields. In particular, solving large sparse linear systems has been a necessity in many fields for quite some time.
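As a minimal sketch of sparse linear solving, the snippet below implements the conjugate gradient method for a symmetric positive-definite system Ax = b, with A stored as a dictionary of nonzero entries. This is illustrative only (production codes use optimized sparse formats and preconditioning); the tridiagonal example system is a standard 1-D Poisson-style discretization chosen here for demonstration.

```python
def spmv(A, x):
    """Sparse matrix-vector product; A maps (row, col) -> nonzero value."""
    y = [0.0] * len(x)
    for (i, j), v in A.items():
        y[i] += v * x[j]
    return y

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    # Classic unpreconditioned CG for symmetric positive-definite A.
    x = [0.0] * len(b)
    r = [bi - yi for bi, yi in zip(b, spmv(A, x))]  # initial residual
    p = r[:]
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = spmv(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Example: sparse tridiagonal SPD system (a 1-D Poisson discretization).
n = 5
A = {}
for i in range(n):
    A[(i, i)] = 2.0
    if i + 1 < n:
        A[(i, i + 1)] = -1.0
        A[(i + 1, i)] = -1.0
b = [1.0] * n
x = conjugate_gradient(A, b)
```

The key property exploited here is that the solver touches A only through matrix-vector products, so only the nonzero entries ever need to be stored, which is what makes such methods practical for very large sparse systems.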
Linguistic and structural constraints are ubiquitous in NLP. Such constraints were originally applied in semi-supervised settings, where dictionaries were incorporated to constrain Expectation Maximization. Constrained models have also been explored in supervised settings through MAP inference via integer linear programming and loopy belief propagation. Constraints can be classified into local and non-local categories.
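A concrete instance of a local constraint is BIO tagging, where an "I-X" tag may only follow "B-X" or "I-X". The sketch below enforces this as a hard transition mask inside Viterbi decoding; the tag set and emission scores are made up for illustration, and in practice the scores would come from a trained model.

```python
TAGS = ["O", "B-PER", "I-PER"]

def allowed(prev, curr):
    # Local BIO constraint: "I-X" must follow "B-X" or "I-X".
    if curr.startswith("I-"):
        return prev in ("B-" + curr[2:], curr)
    return True

def viterbi(emissions):
    """emissions: list of {tag: score} per position; returns best valid path."""
    # Start-of-sequence constraint: treat the virtual previous tag as "O",
    # so a sequence cannot begin with "I-X".
    best = [{t: (emissions[0][t], [t]) for t in TAGS if allowed("O", t)}]
    for step in range(1, len(emissions)):
        layer = {}
        for t in TAGS:
            cands = [
                (score + emissions[step][t], path + [t])
                for p, (score, path) in best[-1].items()
                if allowed(p, t)  # constraint: skip invalid transitions
            ]
            if cands:
                layer[t] = max(cands)
        best.append(layer)
    return max(best[-1].values())[1]
```

Because the constraint only inspects adjacent tag pairs, it folds directly into dynamic programming; non-local constraints (e.g. "at most one entity per sentence") couple distant positions and typically require ILP or approximate inference instead.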
The future of computing will be driven by constraints on power consumption. An exaflop system will be limited to a power budget of no more than 20 MW, forcing co-design innovations in both hardware and software to improve overall efficiency. On the hardware side, processor designs are shifting to many-core architectures to increase the ratio of computational performance to power consumption. Research and development efforts on other hardware components, such as the memory and interconnect, further enhance energy efficiency and overall reliability.