Life has been evolving on Earth for about four billion years. Evolutionary biologists are concerned with the underlying processes (mutation, drift, and natural selection) that give rise to biological diversity. It can be challenging to directly observe these processes, given the relative brevity of a human lifetime. Although natural historians traditionally used fossils to infer evolutionary history, the fossil record is incomplete and not easily searchable.
Today's Internet has several weaknesses. Chief among them is a lack of security: distributed denial-of-service (DDoS) attacks plague the Internet, and spam comprises a large portion of email messages. In addition, the entanglement of the identity and location of end hosts hinders host mobility. The rigidity of the network layer is another problem: the network layer of today's Internet cannot evolve because any architectural change requires global agreement among ISPs. Furthermore, today's Internet lacks management functions that can diagnose faults and performance problems.
Compilers are central to software development; they isolate the programmer from the details of the physical hardware while still producing efficient, optimized executables. To produce a valid executable, the compiler must generate valid instruction schedules and register assignments. Together, scheduling and assignment comprise an instance of the resource-constrained scheduling problem. The general form of this problem is well studied, with many successful search-based solutions.
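The scheduling half of the problem can be illustrated with a toy greedy list scheduler. This is only a sketch of the general technique, not any particular compiler's algorithm; the `Instr` type, the issue width, and the instruction names are all hypothetical.

```haskell
import Data.List (partition)

-- Hypothetical instruction record: a name and the names of the
-- instructions whose results it depends on.
data Instr = Instr { iName :: String, iDeps :: [String] }

-- Greedy list scheduling under a single resource constraint (issue width):
-- each cycle, issue up to `width` instructions whose dependencies have
-- all completed in an earlier cycle.
listSchedule :: Int -> [Instr] -> [[String]]
listSchedule width = go []
  where
    go _ [] = []
    go done pending =
      let (ready, blocked) = partition (all (`elem` done) . iDeps) pending
          issued           = take width ready
      in case issued of
           [] -> error "dependency cycle: no instruction is ready"
           _  -> map iName issued
                   : go (done ++ map iName issued) (drop width ready ++ blocked)
```

On a 2-wide machine, independent instructions pair up in one cycle while dependent ones wait, which is exactly the resource/precedence tension that search-based schedulers optimize over.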
Humans routinely perform multiple tasks simultaneously. Understanding the capabilities and limits of human multitasking not only helps design efficient devices to enhance human performance, but also helps uncover the nature of human cognition. In this talk, I will review the results of past research on multitasking performance and stress the increasingly important role that cognitive modeling has played in this research endeavor. I will introduce multiple resource theory and discuss how two major cognitive architectures, ACT-R and EPIC, implement this theory computationally.
In parallel with the growth of the Internet's infrastructure, the peer-to-peer (P2P) class of applications emerged, in which participating users (peers) connect to one another and form an overlay network to assist each other toward a common goal. The P2P concept brings about a self-scalable, low-cost, user-centric structure for a variety of applications, mainly content distribution. P2P applications quickly became popular enough to be responsible for up to 70\% of Internet traffic, according to some reports.
Capturing an accurate view of the Internet topology is of great interest to the networking research community as it has many uses ranging from the design and evaluation of new protocols to the vulnerability analysis of Internet infrastructure. The scale of the Internet topology coupled with its distributed and heterogeneous nature makes it very challenging to capture a complete and accurate snapshot of the topology.
Knowledge is a valuable but scarce resource today. A serious problem in knowledge acquisition and management is that knowledge in the same domain often appears in different representations across applications. We propose the framework of knowledge translation (KT) and knowledge integration (KI) to assimilate knowledge produced by different sources and used for different tasks. Our goal can be achieved by exploiting mappings among the representations of multiple knowledge bases.
The correspondence between proving theorems and writing programs, known as the Curry-Howard isomorphism or the proofs-as-programs paradigm, has spurred the development of new tools like proof assistants, new fields like certified programming, and new advanced type features like dependent types. The isomorphism gives us a correspondence between Gentzen's natural deduction, a system for formalizing common mathematical reasoning, and Church's lambda-calculus, one of the first models of computation and the foundation for functional programming languages.
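The correspondence can be made concrete in a few lines of Haskell, where types play the role of propositions and well-typed terms are their proofs; the function names below are illustrative, not standard library definitions.

```haskell
-- Under Curry-Howard, a type is a proposition and a well-typed term is a proof.

-- A => A: the identity function proves that any proposition implies itself.
refl :: a -> a
refl x = x

-- (A => B) => (B => C) => (A => C): function composition proves that
-- implication is transitive.
trans :: (a -> b) -> (b -> c) -> (a -> c)
trans f g = g . f

-- A /\ B => A: pairs encode conjunction; projection is the elimination rule.
andElimL :: (a, b) -> a
andElimL (x, _) = x

-- A => A \/ B: Either encodes disjunction; injection is the introduction rule.
orIntroL :: a -> Either a b
orIntroL = Left
```

Each definition is simultaneously a natural-deduction proof and an executable program, which is the isomorphism in miniature.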
In general, research related to text analysis assumes that the information contained in text form, although ambiguous, is correct with respect to the domain to which the text belongs. This assumption stems in part from the fact that text analysis has historically been performed over scientific documents. As text understanding extends to broader domains, such as the Internet, we must consider the presence of incorrect text in our data sets. By incorrect text, we refer to a natural language statement that is either false or contradicts the knowledge of the domain.
Structured learning is the problem of finding a predictive model that maps input data to complex outputs with internal structure. Structured output prediction is a challenging task by itself, but the problem becomes even more difficult when the input data is adversarially manipulated to deceive the predictive model. The problem of adversarial structured output prediction is relatively new in the field of machine learning, yet many real-world applications can be abstracted as instances of it.