I am mainly interested in problems in natural language understanding (NLU) and natural language generation (NLG).
In NLU, my research has focused on memory-efficient methods for deep semantic understanding of long text inputs (long documents or collections of documents).
In NLG, I have focused on controlling properties of text (e.g., writing style) during generation. I draw ideas from psycholinguistics, by looking at how human memory works, and from linguistic models of information organization.
My colleagues and I have investigated the benefits of incorporating 'human-like' retention into challenging generation tasks such as style transfer, summarization of highly technical texts, and text simplification.
Another area I am highly interested in is human-in-the-loop systems, specifically how we can develop interactive systems that boost human comprehension and production (i.e., generation) of complex text.
Previously, I worked on developing universal language tools, i.e., tools that require little to no language-specific tuning. These tools were tested on Quechua and Shipibo-Konibo, native languages of Peru, and have been used ever since to facilitate the annotation of linguistic resources in these languages.