I'm interested in a variety of topics in machine learning, NLP, and cognitive science. My research focuses on efficient, interpretable, and controllable reasoning for complex, long-horizon tasks. Along this path, my colleagues and I have investigated case studies in latent reasoning, preference alignment, and agents.
During my PhD, I focused on controlling properties of text (e.g., writing style) during generation. I drew ideas from psycholinguistics, looking at how human memory works, as well as from linguistic models of information organization.
My collaborators and I explored the benefits of incorporating 'human-like' retention into challenging generation tasks such as style transfer, summarization of highly technical texts, and text simplification.
Previously, I worked on developing universal language tools, i.e., tools that require little to no language-specific tuning. These tools were tested on Quechua and Shipibo-Konibo, native languages of Peru, and have since been used to facilitate the annotation of linguistic resources in these languages.