Our group will work to improve the ability of artificial neural network models to represent and understand sentence meaning. Candidate systems will be evaluated on their ability to solve a suite of language understanding tasks using a single shared sentence encoder component combined with a set of ten lightweight task-specific attention models. We expect to explore new methods for unsupervised learning, new objectives and techniques for multi-task and transfer learning, and new methods for linguistically sophisticated error analysis.
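The architecture described above, one shared sentence encoder feeding several lightweight task-specific components, can be sketched as follows. This is a minimal illustrative sketch, not the actual evaluation system: the class names, the toy encoding function, and the linear scoring heads are all hypothetical stand-ins for learned models.

```python
class SharedEncoder:
    """Toy stand-in for a learned sentence encoder: maps a sentence
    to a fixed-size vector (here, a simple character-code hash)."""

    def __init__(self, dim=4):
        self.dim = dim

    def encode(self, sentence):
        vec = [0.0] * self.dim
        for i, ch in enumerate(sentence):
            vec[i % self.dim] += ord(ch) / 1000.0
        return vec


class TaskHead:
    """Hypothetical lightweight per-task component; here a fixed
    linear scorer over the shared encoding."""

    def __init__(self, weights):
        self.weights = weights

    def predict(self, encoding):
        return sum(w * x for w, x in zip(self.weights, encoding))


# One shared encoder is reused across all tasks; only the small
# task-specific heads differ (task names here are illustrative).
encoder = SharedEncoder()
heads = {
    "nli": TaskHead([1.0, 0.0, 0.0, 0.0]),
    "sentiment": TaskHead([0.0, 1.0, 0.0, 0.0]),
}

encoding = encoder.encode("a test sentence")
scores = {task: head.predict(encoding) for task, head in heads.items()}
```

In a real system the shared encoder would carry most of the parameters and be trained across all tasks at once, while each head stays small enough that adding a new task is cheap.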
June and July 2018.
Graduate Student Researchers
Undergraduate Student Researchers
Advisors and Consulting Participants