Structured output prediction with posterior constraints in NLP

Date and time: Friday, October 28, 2016 - 10:00
Location: 200 Deschutes
Speaker: Javid Ebrahimi, University of Oregon
Committee:
  • Dejing Dou (chair)
  • Daniel Lowd
  • Reza Rejaie


Linguistic and structural constraints are ubiquitous in NLP. They were originally applied in semi-supervised settings, where dictionaries were incorporated to constrain Expectation Maximization. Constrained models have also been explored in supervised settings, using MAP inference via integer linear programming or loopy belief propagation. Constraints can be classified as local or non-local. Local constraints apply to individual features, either directly, by penalizing the score of an instance that violates a constraint, or indirectly, by adding a regularization term to the objective function that penalizes violations in expectation over all instances. Non-local constraints relate either the structured outputs within a single instance or the outputs of multiple instances across the dataset. Inter-instance non-local constraints are usually harder to handle and require approximate learning or inference. This survey covers both kinds of constraints in supervised and semi-supervised settings in the context of structured output prediction tasks, though everything discussed applies to classification problems as well.
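As a minimal illustration of the direct approach to local constraints, the sketch below penalizes the score of a BIO tag sequence whenever an I tag follows an O tag (a standard sequence-labeling constraint). The tag set, emission scores, penalty weight, and exhaustive search are illustrative assumptions, not details from the survey; real systems would decode with Viterbi or ILP inference rather than enumeration.

```python
from itertools import product

TAGS = ("B", "I", "O")

def violations(seq):
    # Count local-constraint violations: "I" may not follow "O"
    # (a BIO tagging constraint, used here purely for illustration).
    return sum(1 for prev, cur in zip(seq, seq[1:])
               if prev == "O" and cur == "I")

def constrained_score(seq, emission, penalty=10.0):
    # Base score: sum of per-position emission scores (a toy local model).
    base = sum(emission[i][tag] for i, tag in enumerate(seq))
    # Direct penalty: subtract a fixed cost for each constraint violation.
    return base - penalty * violations(seq)

def decode(emission, penalty=10.0):
    # MAP inference by brute force over all tag sequences; fine at toy
    # lengths, stand-in for Viterbi/ILP in a real system.
    n = len(emission)
    return max(product(TAGS, repeat=n),
               key=lambda seq: constrained_score(seq, emission, penalty))
```

With toy emission scores such as `[{"B": 1.2, "I": 0.0, "O": 2.0}, {"B": 0.5, "I": 1.5, "O": 1.0}, {"B": 0.0, "I": 2.0, "O": 0.5}]`, the unconstrained argmax (`penalty=0.0`) is the invalid sequence `("O", "I", "I")`, while the penalized decoder returns the valid `("B", "I", "I")`.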