Tuesday, December 21, 2010

ACL HLT workshop


   Comparable Corpora and the Web

   Co-located with ACL-HLT 2011
   Portland, Oregon
   24 June 2011

   DEADLINE FOR PAPERS: 1st April 2011
   Submission: https://www.softconf.com/acl2011/comparable/


Btw, here is a definition of comparable corpora from the CFP: "documents in one or several languages that are comparable in content and form in various degrees and dimensions".

Thursday, December 9, 2010

Paper notes - Dual Decomposition for Parsing with Non-Projective Head Automata

Koo, Rush, Collins, Jaakkola, Sontag. EMNLP 2010.

Problem: Parsing with non-projective head automata (I don't know this yet) - building a parse tree for a sentence, using annotated data (sentences and their parse trees).

Contribution: A dual decomposition formulation of the problem. Leads to efficient algorithms based on minimum spanning tree (MST) algorithms and dynamic programming.
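To remind myself how the dual decomposition / Lagrangian relaxation idea works, here is a toy sketch (my own made-up example with linear scores over binary vectors, not the paper's actual parsing algorithm): two subproblems are each solved exactly, and Lagrange multipliers push them toward agreement via subgradient steps.

```python
def dual_decompose(f, g, iters=100, alpha=1.0):
    """Toy dual decomposition: maximize f.y + g.z subject to y == z,
    where y, z are binary vectors and f, g are per-coordinate scores.
    Each subproblem is solved exactly (componentwise argmax); the
    multipliers u enforce agreement via subgradient steps."""
    n = len(f)
    u = [0.0] * n  # Lagrange multipliers on the agreement constraint
    for t in range(iters):
        y = [1 if f[i] + u[i] > 0 else 0 for i in range(n)]  # argmax subproblem 1
        z = [1 if g[i] - u[i] > 0 else 0 for i in range(n)]  # argmax subproblem 2
        if y == z:
            return y, True  # agreement certifies an exact optimum
        step = alpha / (t + 1)  # decaying subgradient step size
        u = [u[i] - step * (y[i] - z[i]) for i in range(n)]
    return y, False  # no agreement: only an approximate solution
```

In the paper, the two subproblems are (roughly) an MST-style arc-factored parser and a head-automaton dynamic program instead of these toy componentwise maximizations.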

TODO: Got stuck understanding the basics of the optimization and the algorithms involved. Will get back to this later.

Some of the things to look up -
  • Head automata; parsing
  • Dual decomposition, Lagrangian relaxation - in combinatorial optimization
  • LP/ILP solvers
  • MST based optimization
  • NP-hard/complete - practice judging hardness of problems

Paper notes - Beyond NomBank: A Study of Implicit Arguments for Nominal Predicates

Gerber and Chai. ACL 2010

Problem: (Unclear to me) Identify arguments for the predicates in a sentence, even when they are implicit (having been mentioned earlier, in preceding sentences).

  • They have introduced a new problem; earlier, only explicit arguments were considered and their recognition was measured.
  • They have annotated data for the new problem and made it available as a resource for the community. Also, they established a baseline for future work.
Learning points
  • [IMP] Start introduction with a motivating example, that immediately elucidates the problem cleanly, and clarifies differences with existing problems/methods.
  • My ShoBha work also introduces a new problem; this might be a good model paper to emulate for my draft.
  • Introduction is short, crisp and has 3 parts - motivating example, "what is the problem" and "what is our contribution".
  • A whole section devoted to the manual efforts of annotation and resource building. We should do this for our work on the Wikipedia data.
  • Insightful comments and statistics about the data (especially those pertinent to the problem at hand).
  • Wherever annotation was used, Cohen's kappa coefficient (Cohen, 1960) was reported.
  • A careful design of features and detailed analysis using - (1) Floating forward feature selection (2) Grid search (3) Feature classes
  • Some packages used - (1) LibLinear's logistic regression solver (for analyzing feature classes) (2) OpenNLP coreference identification
  • Choose a reasonable baseline (possibly based on some heuristic).
  • Look at old papers for the evaluation methodology that is prevalent for this/similar problem/s.
  • Dice coefficient for measuring performance. All results were reported with statistical significance, using the two-tailed bootstrap method (Efron and Tibshirani, 1993).
  • Ablation Study (in the 'Discussion' section) w.r.t. the features.
  • Error analysis - an example of when it failed, and why it failed.
  • Success analysis - an example of when it worked, and why.
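The two measures noted above (Cohen's kappa for inter-annotator agreement, Dice for scoring predictions) are easy to sketch; here is a minimal toy implementation of my own, not the authors' code:

```python
def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two annotators, corrected for
    the agreement expected by chance from each annotator's label rates."""
    n = len(a)
    labels = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance
    return (po - pe) / (1 - pe)

def dice(pred, gold):
    """Dice coefficient between predicted and gold sets:
    2|P ∩ G| / (|P| + |G|)."""
    pred, gold = set(pred), set(gold)
    return 2 * len(pred & gold) / (len(pred) + len(gold))
```

For example, two annotators agreeing on 3 of 4 binary labels with skewed label rates get kappa = 0.5, noticeably below the raw 0.75 agreement.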

Paper notes - Reinforcement Learning for Mapping Instructions to Actions

Branavan, Chen, Zettlemoyer, Barzilay. ACL 2009

Problem: Given a set of text instructions, choose a sequence of actions that carries out the instructions. For example, "Click Start, point to search, and then click for files and folders. In the search results dialog box, on the tools menu, click folder options..."

Contribution: The problem had already been attacked in a supervised setting. They introduce a reinforcement learning setting (assuming it can be known whether the actions were successful or not).

  • Problem old, reinforcement setting old, algorithm (policy gradient) old; results worse than the supervised case.
  • Neat and comprehensive representation of instruction, action, environment (where actions are taken).
  • Simple log-linear model for distributions - p(x) = \exp(\theta \cdot \phi(x))/Z. (Estimate \theta.)
  • Lot of work in designing features (4,400 features for one application, 8,000+ for the other).
  • Lot of work in designing the reward function for reinforcement.
  • Policy gradient (Sutton et al., 2000) algorithm for learning; niceties there too.
  • Testing the fundamental hypothesis - analysis of the real impact of text on the task (checking whether performance comes from information other than the text, by measuring certain statistics and by cleverly removing the text information).
  • Baseline analysis - (a) Show that naive baselines do badly (so the task is non-trivial) (b) Show that the supervised baseline is not great (a 'hard' task) (c) Finer analysis - cause of hardness (d) Mixing methods to measure the partial impact of each method (a different kind of 'ablation' study)
  • Always report statistical significance (sign test)
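The log-linear model and a REINFORCE-style policy-gradient update can be sketched in a few lines. This is a toy illustration with made-up two-dimensional features, not the paper's feature set or reward function:

```python
import math

def log_linear_probs(theta, feats):
    """p(a) proportional to exp(theta . phi(a)), over a small discrete
    action set; feats[a] is the feature vector phi(a)."""
    scores = [sum(t * f for t, f in zip(theta, phi)) for phi in feats]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    Z = sum(exps)
    return [e / Z for e in exps]

def policy_gradient_step(theta, feats, action, reward, lr=0.1):
    """One policy-gradient update for the log-linear policy:
    grad log p(a) = phi(a) - E_p[phi], scaled by the observed reward."""
    p = log_linear_probs(theta, feats)
    expected = [sum(p[a] * feats[a][j] for a in range(len(feats)))
                for j in range(len(theta))]
    grad = [feats[action][j] - expected[j] for j in range(len(theta))]
    return [theta[j] + lr * reward * grad[j] for j in range(len(theta))]
```

After rewarding an action once, its probability under the updated parameters rises above the others - the basic mechanism behind learning from action success/failure signals.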