Bodhisattwa Prasad Majumder
bodhisattwam[at]allenai.org
Office @ Emerald Landing 110
Allen Institute for AI

I am a Research Scientist at the Allen Institute for AI, where I work on the Aristo project. My research focuses on Interactive Agents, Machine Reasoning, Multi-agent Systems, User-centric ML, and Social Science.

I received my Ph.D. in Computer Science from UC San Diego, advised by Julian McAuley. I was fortunate to receive the UCSD CSE Doctoral Award for Excellence in Research (2022), the Adobe Research Fellowship (2022), and the Qualcomm Innovation Fellowship (2020), and to lead UCSD (Team Bernard) in the Amazon Alexa Prize, 2019.


CV  |  Google Scholar  |  Github  |  LinkedIn





Previously, I spent wonderful summers at the Allen Institute for AI, Facebook AI Research, Microsoft Research, and Google AI. I worked as a technical editor for The Batch, a weekly newsletter by deeplearning.ai. I also co-authored a best-selling book on NLP with O'Reilly Media.

Research · Awards · Book · Talks · Experiences · Education

Research Summary

Machine understanding often suffers from a lack of reasoning with world knowledge, making models less reliable and potentially risky. My research goal is to develop communicative reasoners that can learn, adapt, and reason by interacting with the world, producing effective, explainable, and equitable outcomes.

  • Interactive and Communicative Agents: Grounded communicative agents for sequential decision making that improve engagingness, trust, and the ability to achieve goals;
  • Reasoning: Generating faithful natural language proofs and explanations, improving outcomes with interactive feedback, and producing controllable, fair, and interpretable outcomes;
  • User-centric Machine Learning: Mixed-initiative clarifications to estimate uncertainty, and acquiring user-specific knowledge and injecting it ante- and post-hoc into NLP models.

Highlights
2023
  • [June] I joined Allen Institute for AI (team Aristo) as a Research Scientist. 🔬
  • [May] I successfully defended my PhD thesis!! 🎓 Access my defense talk here.
  • [May] Paper w/ Oxford collaborators on natural language explanation consistency got in ACL 2023.
  • [April] New work on Self-Refining LLMs w/ collaborators from AI2, CMU, Google, UW, and NVIDIA.
  • [Mar] Proposal on Persona-grounded Dialog Generation received a Sony Research Award.
  • [Feb] Served as a Session Chair for Language Generation and Conversational AI tracks at AAAI, 2023.
2022
  • [Nov] Paper w/ Zhouhang, Sammer, and Julian on factual explanations got in AAAI, 2023.
  • [Nov] Our RecSys paper got recognized in the Highlights of ACM RecSys '22.
  • [Nov] Upcoming talk at Harvard University and Harvard Business School. Thanks Hima!
  • [Nov] Upcoming talk at University of Southern California and USC-ISI. Thanks Xiang!
  • [Nov] Upcoming talk at University College London. Thanks Oana!
  • [Oct] Upcoming talk at University of California, Irvine. Thanks Sameer!
  • [Oct] Upcoming talk at University of British Columbia. Thanks Vered!
  • [Jul] Delighted to receive TrustNLP travel grant for NAACL 2022 and volunteer award for ICML 2022!
  • [Jun] Paper w/ Shuyang and Julian on conversational rationale critiquing in recsys got in ACM RecSys, 2022.
  • [Jun] Honored to receive the Doctoral Award for Excellence in Research from CSE, UC San Diego, 2022!
  • [May] Traveling to Dublin at ACL 2022 for an oral presentation of our dialog paper. Come, say hi!
  • [May] Paper w/ Oana, Thomas, and Julian on natural language explanations got in ICML, 2022.
  • [Mar] Glad to be featured in CSE news by UC San Diego, for my Adobe Research Fellowship!
  • [Feb] Paper w/ Harsh, Taylor, and Julian on unsupervised knowledge injection in dialog got in ACL (main), 2022.
  • [Jan] Excited to join as a technical editor for The Batch, a weekly newsletter by deeplearning.ai.
  • [Jan] I am fortunate to be named as an Adobe Research Fellow 2022!
  • [Jan] Joining Allen Institute for AI (AI2) in Summer 2022 to work with Peter Clark on interactive explanations.
2021
  • [Nov] Delighted to co-organize two workshops at ACL, 2022! Follow the spaces for more:
    Representation Learning for NLP and Commonsense Reasoning and Representation.
  • [Nov] Invited talk (tweet) at AI2 on Producing Explanations with Commonsense and Interactions.
  • [Aug] Paper w/ Zexue and Julian on debiasing sensitive texts got accepted in Findings of EMNLP, 2021.
  • [Aug] Proposed my thesis on Language Generation with Interactions, Explanations, and Commonsense.
  • [Jul] Invited talk on Explainable Language Generation with Commonsense at Facebook AI Research.
  • [Jul] Invited talk on Explaining ML models with Commonsense at Oxford ML group, University of Oxford.
  • [Jun] New work on Language Explanations for ML models with Commonsense w/ Oana, Thomas, and Julian.
  • [May] Paper ReZero on making deeper networks faster got accepted in Uncertainty in AI (UAI), 2021.
  • [May] Honored to receive Friends of the International Center Fellowship, 2021 from UC San Diego.
  • [May] Paper w/ Harsh, Taylor, and Julian on enriching dialog with background stories got accepted in ACL, 2021.
  • [Mar] Work at Microsoft Research got accepted as a long paper in NAACL, 2021 w/ Sudha, Michel, and Julian.
  • [Mar] Invited talks on Grounding Language Generation with World Knowledge at Microsoft Research, IIT Kharagpur.
  • [Mar] Invited talks on Clarification Question Generation with Global Knowledge at Microsoft Research, UC San Diego.
  • [Feb] Excited to be featured by Jacobs School of Engineering, UC San Diego, for our QIF Fellowship!
  • [Jan] Launched GEM Benchmark (shared task in ACL, 2021) for evaluation in Natural Language Generation tasks!
  • [Jan] We are organizing SoCal ML & NLP symposium 2021 virtually! Please consider submitting by Feb 16, 2021.
  • [Jan] Joining Facebook AI Research for Summer 2021 to work with Y-Lan Boureau on Language Generation.
2020
  • [Oct] Invited talk on Achieving Commonsense in Text Generation at NC State. See slides here.
  • [Sep] Two long papers (#1, #2) w/ Harsh, Taylor, Shuyang, Jianmo, and Julian got accepted in EMNLP (main), 2020.
  • [Aug] Received Qualcomm Innovation Fellowship 2020 for our proposal on Conversational Recommender Systems.
  • [Jul] Our book Practical Natural Language Processing has become a #1 best seller on Amazon! Know more here.
  • [Jun] Excited that my internship work at Google got featured on the Google AI blog! Check it out for more.
  • [April] Work at Google AI got accepted in ACL, 2020 as a long paper w/ Navneet, Sandeep, James, Qi and Marc.
  • [Mar] New work on making deeper networks faster (ReZero) w/ Thomas, Henry, Gary and Julian.
  • [Feb] Organizing SoCal Machine Learning Symposium, 2020 w/ Julian, Jingbo and Hao at UC San Diego.
  • [Jan] Invited talk on Personalized NLG in the AI/ML track at CSE Research Open House, UC San Diego.
2018
  • [Sept] Joined the NLP group at CSE, UC San Diego in Fall 2018.
  • [Jul] Paper w/ Amrith Krishna, Rajesh Bhat and Pawan Goyal got published in CoNLL, 2018.

Publications

The complete list of publications can be found on my Google Scholar page.
(* denotes equal contribution)

Data-driven Discovery with Large Generative Models
Bodhisattwa P. Majumder*, Harshit Surana*, Dhruv Agarwal, Sanchaita Hazra, Ashish Sabharwal, Peter Clark
arXiv, 2024
pdf

A practical first step toward end-to-end automation of scientific discovery. We posit that Large Generative Models (LGMs) present incredible potential for automating hypothesis discovery; however, LGMs alone are not enough.

Skill Set Optimization: Reinforcing Language Model Behavior via Transferable Skills
Kolby Nottingham, Bodhisattwa P. Majumder, Bhavana Dalvi Mishra, Sameer Singh, Peter Clark, Roy Fox
arXiv, 2024
pdf | website

Skill Set Optimization improves LLM actors by constructing and refining sets of transferable skills. Leveraging environment reward signals, these generalizable skills enable significant continual improvement for frozen LLM actors.

To Tell The Truth: Language of Deception and Language Models
Sanchaita Hazra, Bodhisattwa P. Majumder
North American Chapter of the Association for Computational Linguistics (NAACL), 2024
pdf

We show that there exist algorithmic predictors that can detect novel but accurate language cues in many cases where humans fail to detect deception, opening up the possibility of human-AI collaboration to improve humans' ability to detect lies.

CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization
Bodhisattwa P. Majumder, Bhavana Dalvi Mishra, Peter Jansen, Oyvind Tafjord, Niket Tandon, Li Zhang, Chris Callison-Burch, Peter Clark
arXiv, 2023
pdf | website

A novel non-parametric continual learning paradigm for rapid adaptation and generalization to unseen tasks and environments for language agents. We show that a dynamic, persistent, semantic memory centered on causal abstractions significantly amplifies transfer and learning without any additional parameter updates.

InterFair: Debiasing with Natural Language Feedback for Fair Interpretable Predictions
Bodhisattwa P. Majumder, Zexue He, Julian McAuley
Empirical Methods in Natural Language Processing (EMNLP), 2023
Oral presentation
pdf | code

Fairness in debiasing (i.e., balancing task performance and bias mitigation) is subjective and difficult to learn from data. In an interactive setup, we enable users to provide feedback and achieve a better balance, supported by controllable explanations.

Aligning Language Models to User Opinions
EunJeong Hwang, Bodhisattwa P. Majumder, Niket Tandon
Findings of Empirical Methods in Natural Language Processing (EMNLP), 2023
pdf | code

We discover that, in addition to the typical approach of prompting LLMs with demographics and ideology for personalization, utilizing the most relevant past opinions from individual users enables the model to predict user opinions more accurately.

Self-Refine: Iterative Refinement with Self-Feedback
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Sean Welleck, Bodhisattwa P. Majumder, Shashank Gupta, Amir Yazdanbakhsh, Peter Clark
Conference on Neural Information Processing Systems (NeurIPS), 2023
pdf | website

Empirical evidence on a broad array of tasks points to a promising research direction: LLMs can iteratively refine their own outputs for better results without any supervised training, RL, or human feedback.

Large Language Models as Zero-shot Conversational Recommenders
Zhankui He*, Zhouhang Xie*, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa P. Majumder, Nathan Kallus, Julian McAuley
The Conference on Information and Knowledge Management, (CIKM), 2023
pdf | code & datasets

The largest-scale dataset to date for conversational recommender systems, revealing new trends in context sensitivity when LLMs are used as recommenders.

Adversarially Detecting and Remedying Inconsistencies in Natural Language Explanations
Myeongjun Jang, Bodhisattwa P. Majumder, Julian McAuley, Thomas Lukasiewicz, Oana-Maria Camburu
Association for Computational Linguistics, Main (ACL), 2023
pdf | code

An adversarial framework shows that even high-quality natural language explanations do not necessarily have a low level of inconsistency. We propose a remedy showing that additional knowledge grounding improves robustness.

Towards Factual and Informative Review Generation for Explainable Recommendation
Zhouhang Xie, Sameer Singh, Julian McAuley, Bodhisattwa P. Majumder
AAAI Conference on Artificial Intelligence (AAAI), 2023
pdf | code

A personalized, self-rationalizing retrieve-and-generate framework for factually grounded reviews that explain rating and recommendation predictions with high attribution to past reviews and informative keywords.

On Faithfulness and Coherence of Language Explanations for Recommendation Systems
Zhouhang Xie, Julian McAuley, Bodhisattwa P. Majumder
arXiv, 2022
pdf | code

A concerning trend emerges from review-generation models: generated reviews are semantically incoherent with the ground truth and exhibit a very low degree of faithfulness to the recommendation prediction.

Controlling Bias Exposure for Fair Interpretable Predictions
Zexue He, Yu Wang, Julian McAuley, Bodhisattwa P. Majumder
Findings of Empirical Methods in Natural Language Processing (EMNLP), 2022
pdf | code

Current debiasing models may over-debias. With local explanations and interventional training, we establish a fair balance between debiasing and predictability for several classification and generation tasks.

Self-Supervised Bot Play for Transcript-Free Conversational Recommendation with Rationales
Shuyang Li, Bodhisattwa P. Majumder, Julian McAuley
ACM Conference on Recommender Systems (RecSys), 2022
Highlights of ACM RecSys '22; invited for ACM Transactions on Recommender Systems
pdf | code | slides

A conversational critiquing framework that lets users provide feedback on the rationales behind a recommendation and iteratively updates the underlying recommendation model for faster convergence to target predictions.

Knowledge-grounded Self-rationalization via Extractive and Natural Language Explanations
Bodhisattwa P. Majumder, Oana-Maria Camburu, Thomas Lukasiewicz, Julian McAuley
International Conference on Machine Learning (ICML), 2022
Spotlight presentation
pdf | code | talk

A unified framework that maps extractive rationales to abstractive natural language explanations (NLEs) of ML models using commonsense. We establish a new state of the art in NLE generation, rationale extraction, and predictive task performance.

Achieving Conversational Goals with Unsupervised Post-hoc Knowledge Injection
Bodhisattwa P. Majumder, Harsh Jhamtani, Taylor Berg-Kirkpatrick, Julian McAuley
Association for Computational Linguistics, Main (ACL), 2022
Oral presentation
pdf | code | talk

A post-hoc knowledge-injection technique that first retrieves and selects a diverse set of relevant knowledge snippets and then injects them into an initial response from an existing dialog model. Enriching dialog responses at decoding time with external knowledge (without re-training the existing models) helps achieve conversational goals.

Detect and Perturb: Neutral Rewriting of Biased and Sensitive Text via Gradient-based Decoding
Zexue He, Bodhisattwa P. Majumder, Julian McAuley
Findings of Empirical Methods in Natural Language Processing (EMNLP), 2021
pdf | code

A rewriting framework that first detects sensitive components from input text and then perturbs the generation model at decoding time under a neutralizing constraint. No parallel corpus of sensitive-neutral texts is needed for training.

ReZero is All You Need: Fast Convergence at Large Depth
Thomas Bachlechner*, Bodhisattwa P. Majumder*, Henry Mao*, Gary Cottrell, Julian McAuley
Uncertainty in Artificial Intelligence (UAI), 2021
Oral presentation
pdf | code | slides

A novel deep neural network architecture that initializes an arbitrary layer as the identity map (ReZero), using a single additional learned parameter per layer to facilitate very deep signal propagation.
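For intuition, here is a minimal PyTorch sketch of the ReZero idea; the inner feed-forward transformation is an illustrative stand-in (the paper gates arbitrary deep layers, e.g., transformer sublayers):

    import torch
    import torch.nn as nn

    class ReZeroBlock(nn.Module):
        """Residual block computing x + alpha * F(x), with alpha initialized
        to zero so the block starts as the identity map."""
        def __init__(self, dim):
            super().__init__()
            # Illustrative inner transformation F; any deep layer works here.
            self.layer = nn.Sequential(
                nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
            )
            # The single additional learned parameter per layer.
            self.alpha = nn.Parameter(torch.zeros(1))

        def forward(self, x):
            # At initialization alpha = 0, so forward(x) == x (identity),
            # letting signal propagate cleanly through very deep stacks.
            return x + self.alpha * self.layer(x)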

Unsupervised Enrichment of Persona-grounded Dialog with Background Stories
Bodhisattwa P. Majumder, Taylor Berg-Kirkpatrick, Julian McAuley, Harsh Jhamtani
Association for Computational Linguistics, Main (ACL), 2021
Oral presentation
pdf | code | slides

An unsupervised gradient-based rewriting framework to adapt potential background stories to an existing persona-grounded dialog. We constrain the generation for self-consistency with persona and promote its adherence to the story.

The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
Bodhisattwa P. Majumder, as a part of the GEM team
GEM workshop, Association for Computational Linguistics (ACL), 2021
pdf | website

GEM is a community-driven effort to improve how progress in natural language generation is measured. As a shared task at ACL 2021, we invite challenge-set submissions for 11 datasets and 7 languages across various NLG challenges.

Ask what's missing and what's useful: Improving Clarification Question Generation using Global Knowledge
Bodhisattwa P. Majumder, Sudha Rao, Michel Galley, Julian McAuley
North American Chapter of the Association for Computational Linguistics (NAACL), 2021
Oral presentation
pdf | code | talk

A two-stage framework that 1) estimates missing information from the global knowledge of similar contexts, and 2) conditionally generates useful questions using gradient-based decoding with a usefulness scorer at inference time. This work was done during an internship at Microsoft Research.

Like hiking? You probably enjoy nature: Persona-grounded Dialog with Commonsense Expansions
Bodhisattwa P. Majumder, Harsh Jhamtani, Taylor Berg-Kirkpatrick, Julian McAuley
Empirical Methods in Natural Language Processing (EMNLP), 2020
Oral presentation
pdf | code | slides

A variational learning framework to capture commonsense implications of input persona in a persona-grounded dialog agent using richer expansions obtained from existing commonsense knowledge bases.

Interview: Large-scale Modeling of Media Dialog with Discourse Patterns and Knowledge Grounding
Bodhisattwa P. Majumder*, Shuyang Li*, Jianmo Ni, Julian McAuley
Empirical Methods in Natural Language Processing (EMNLP), 2020
Oral presentation
pdf | code | data

The first large-scale analysis of discourse in media dialog ("Interview" - 105K conversations) and its impact on generative modeling of dialog turns, with a focus on interrogative patterns and use of external knowledge.

Bernard: A Stateful Neural Open-domain Socialbot
Bodhisattwa P. Majumder, Shuyang Li, Jianmo Ni, Henry Mao, Sophia Sun, Julian McAuley
Proceedings of Alexa Prize, Amazon, 2019-20
pdf

A framework for an engaging open-domain socialbot with a stateful autonomous dialog manager using non-deterministic finite automata to control multi-turn conversations. This work was done for Alexa Prize 2019.
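As a toy illustration of such a state-machine dialog manager (the state names, responses, and random transition policy below are hypothetical, not from the actual Bernard system):

    import random

    # Each state lists candidate responses and allowed successor states;
    # multiple successors make the automaton non-deterministic.
    DIALOG_STATES = {
        "greet":  {"responses": ["Hi! What would you like to talk about?"],
                   "next": ["chat"]},
        "chat":   {"responses": ["Tell me more!", "Interesting, why do you think so?"],
                   "next": ["chat", "wrapup"]},
        "wrapup": {"responses": ["This was fun, talk soon!"], "next": []},
    }

    def step(state):
        """Emit a response and pick an allowed next state (randomly here;
        a real manager would score transitions using dialog context)."""
        node = DIALOG_STATES[state]
        response = random.choice(node["responses"])
        next_state = random.choice(node["next"]) if node["next"] else None
        return response, next_state

    state = "greet"
    while state is not None:
        utterance, state = step(state)
        print(utterance)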

Representation Learning for Information Extraction from Form-like Documents
Bodhisattwa P. Majumder, Navneet Potti, Sandeep Tata, James Wendt, Qi Zhao, Marc Najork
Association for Computational Linguistics (ACL), 2020
Oral presentation
pdf | blog | slides

A novel approach to learning interpretable representations for target fields using spatial and contextual knowledge, for extracting structured information from form-like document images, even with unseen templates. This work was done at Google AI as part of a 2019 summer internship.

Generating Personalized Recipes from Historical User Preferences
Bodhisattwa P. Majumder*, Shuyang Li*, Jianmo Ni, Julian McAuley
Empirical Methods in Natural Language Processing (EMNLP), 2019
pdf | code | data | poster

Media coverage: Science Node, UCSD CSE News, UCSD JSOE News

A new task of personalized recipe generation: expanding a recipe name and incomplete ingredient details into complete natural-text instructions aligned with a user's historical preferences.

Improving Neural Story Generation by Targeted Common Sense Grounding
Henry Mao, Bodhisattwa P. Majumder, Julian McAuley, Gary Cottrell
Empirical Methods in Natural Language Processing (EMNLP), 2019
pdf | code

A multi-task learning scheme to achieve quantitatively better common sense reasoning in language models by leveraging auxiliary training signals from datasets designed to provide common sense grounding.

Upcycle Your OCR: Reusing OCRs for Post-OCR Text Correction in Romanised Sanskrit
Amrith Krishna, Bodhisattwa P. Majumder, Rajesh S. Bhat, Pawan Goyal
Conference on Computational Natural Language Learning (CoNLL), 2018
pdf | code+data | supplementary

A state-of-the-art approach towards post-OCR text correction for digitising texts in Romanised Sanskrit. This work was done in collaboration with CNeRG.

An 'Eklavya' approach to learning Context Free Grammar rules for Sanskrit using Adaptor Grammar
Amrith Krishna, Bodhisattwa P. Majumder, Anil K. Boga, Pawan Goyal
World Sanskrit Conference, 2018
pdf

A non-parametric Bayesian approach for learning (Probabilistic) Context-Free Grammar productions for Sanskrit, applied to word-level supervised tasks such as compound-type identification and the identification of source and derived words for derivational nouns, as well as sentence-level structured prediction. This work was done at CNeRG.

Deep Recurrent Neural Networks for Product Attribute Extraction in eCommerce
Bodhisattwa P. Majumder*, Aditya Subramanian*, Abhinandan Krishnan, Shreyansh Gandhi, Ajinkya More
Preprint, arXiv, 2017
pdf | system description | video

We demonstrate the potential of recurrent neural structures for product attribute extraction, improving overall F1 scores over previous benchmarks. This helped Walmart e-commerce achieve significant coverage of important facets or attributes of products. This work from Walmart Labs was later followed by a US patent.

Distributed Semantic Representations of Retail Products based on Large-scale Transaction Logs
Bodhisattwa P. Majumder*, Sumanth S Prabhu*, Julian McAuley
2018
report

We processed 18 million transactions consisting of 325,548 unique products from 1,551 categories to obtain vector representations that preserve product analogies. These representations were effective in identifying substitutes and complements. This work was done at Walmart Labs.
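The standard way to realize such basket-based embeddings is skip-gram training over transactions treated as sentences; a hedged gensim sketch (the product IDs, toy data, and hyperparameters are illustrative assumptions, not the report's exact pipeline):

    from gensim.models import Word2Vec

    # Each transaction is a "sentence" of product IDs, so co-purchased
    # products end up with nearby embeddings (illustrative toy data).
    transactions = [
        ["prod_101", "prod_202", "prod_303"],
        ["prod_101", "prod_404", "prod_202"],
        ["prod_303", "prod_404"],
    ]

    model = Word2Vec(
        sentences=transactions,
        vector_size=100,  # embedding dimension
        window=5,         # basket context window
        min_count=1,
        sg=1,             # skip-gram objective
    )

    # Nearest neighbors of a product can surface substitutes/complements.
    print(model.wv.most_similar("prod_101", topn=2))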

Lolcats meet Philosoraptors - What's in a 'meme'? Understanding the Dynamics of Image Macros in Social Media
Bodhisattwa P. Majumder, Amrith Krishna, Unni Krishnan, Anil K. Boga, Animesh Mukherjee
Preprint, arXiv, 2018
pdf | slides

How similar are the dynamics of meme-based communities to those of text-based communities? We try to explain community dynamics by categorising each day based on temporal variations in user engagement. This work was done at CNeRG.

Patents
  • A System for Information Extraction From Form-Like Documents, Google, 2020
  • REDCLAN - RElative Density based CLustering and Anomaly Detection, Walmart, 2018
  • Automated Extraction of Product Attributes from Images, Walmart, 2018
  • System and Method for Product Attribute Extraction Using a Deep Recurrent System, Walmart, 2017
  • Analytical Determination of Competitive Interrelationship between Item Pairs, Walmart, 2017
Awards
  • [2022] Work recognized in Highlights of ACM RecSys '22; invited for ACM Transactions on Recommender Systems
  • [2022] Recipient of the TrustNLP Travel Grant award for NAACL 2022
  • [2022] Recipient of the UCSD CSE Doctoral Award for Excellence in Research, 2022
  • [2022] Recipient of the Adobe Research Fellowship, 2022
  • [2021] Recipient of the Friends of the International Center Fellowship, UC San Diego
  • [2020] Recipient of the Qualcomm Innovation Fellowship, 2020 (North America)
  • [2019] Intern Spotlight in Google-wide Engineering Newsletter for summer internship project with the Juicer Team
  • [2019] Awarded $250,000 for leading UC San Diego (Team Bernard) in the finals of Alexa Prize 2019
  • [2018] Department Fellowship, 1st-year of PhD, Dept. of CSE, UC San Diego
  • [2017] Gold medal and Endowment for the highest academic performance (Rank-1) in Masters, IIT Kharagpur
  • [2016] Finalist, Data Science Game '16, Paris; Represented India (1 out of 3 teams), International Rank 14
  • [2015] Scholarship for academic excellence (obtaining CGPA > 9.5), Indian Statistical Institute
  • [2011] 4-year scholarship for academic excellence, Ministry of Human Resource Development, India
Book: Practical Natural Language Processing by O'Reilly

Practical Natural Language Processing
O'Reilly Media, 2020
Sowmya Vajjala, Bodhisattwa P. Majumder, Anuj Gupta, Harshit Surana
amazon | safari online | website

Practical Natural Language Processing distills our collective wisdom on building real-world applications: data collection, working with noisy data and signals, incremental development of solutions, and issues involved in deploying solutions as part of a larger application, bridging a gap between current textbooks and online offerings.

Highlights:

  • Endorsed by Zach Lipton, Sebastian Ruder, Marc Najork et al.
  • #1 Best seller on Amazon.com in the Data Mining category
  • #1 New release on Amazon.com in the Natural Language Processing category
  • Read and adapted by 20+ AI companies and 6 academic courses internationally

Talks

Continual Learning with Language Agents | slides

    [2024] at Interactive Fiction course, University of Pennsylvania

User-centric Natural Language Processing | video

    [2023] PhD Defense, CSE, UC San Diego

Effective, Explainable, and Equitable NLP with Knowledge and Interactions | slides

  • [2022] at Stanford University
  • [2022] at Allen Institute for AI
  • [2022] at University of Southern California/USC-ISI
  • [2022] at Harvard University/Harvard Business School
  • [2022] at University College London
  • [2022] at UC Irvine
  • [2022] at University of British Columbia
  • [2022] at UC San Diego

Producing Explanations with Commonsense and Interactions | slides

  • [2022] at AI Research Seminar, UC San Diego
  • [2021] at Allen Institute for AI

Explainable Language Generation with Commonsense | slides

  • [2021] at Facebook AI Research
  • [2021] at Machine Learning Group, Oxford University

Grounding Language Generation with World Knowledge | slides

  • [2021] at Microsoft Research, India
  • [2021] at IIT Kharagpur
  • [2020] at NC State, AI Club
  • [2020] at INFORMS 2020, Mining and Learning on Graphs session, Washington, DC

Clarification Question Generation using Global Knowledge | slides

  • [2021] at Microsoft Research, Redmond
  • [2021] at AI Research Seminar, UC San Diego

Personalization, NLP and others

  • [2020] at UC San Diego, CSE Research Open House, on Personalization in Natural Language Generation
  • [2018] at Indian Institute of Management Calcutta, Industry Conclave & Graduate Orientation, on NLP - a primer
  • [2017] at Walmart Labs, on Information Extraction from Images - Application in e-Commerce
  • [2017] at Indian Statistical Institute, on Deep Neural Network: in light of Optimization and Regularization

Experiences

The Allen Institute for Artificial Intelligence (AI2), Seattle
2023 - Present
Research Scientist at Team Aristo.
Prev: Research Intern with Peter Clark, Bhavana Dalvi, and Oyvind Tafjord.

Developing communicative and interactive language agents.


Facebook AI Research, New York
Summer, 2021
Research Intern with Y-Lan Boureau and Asli Celikyilmaz on the NLP and Conversational AI team.

Developing personalized, commonsensical, and empathetic dialog systems.


Microsoft Research, Redmond
Summer, 2020
Research Intern with Sudha Rao and Michel Galley at Natural Language Processing Group.

Developed a novel framework that estimates missing 'local' information from closed-world knowledge to generate useful clarification questions. Our work got accepted as a long paper in NAACL '21.


Amazon Alexa Prize
2019-2020
Team Leader of Bernard, UC San Diego
Media Coverage: cnet

Built a free-form social conversational agent as a finalist in the Amazon Alexa Prize Challenge 2019-20, along with 9 other finalist universities. We were awarded $250,000 to research dialog systems.


Google AI, Mountain View
Summer, 2019
Research Intern with Sandeep Tata and Navneet Potti from Team Juicer.
Media Coverage: Google AI blog, Google Engineering Newsletter (Intern Spotlight)

Developed an information extraction framework for form-like documents using representation learning. The work was published as an Intern Spotlight article in the Google-wide newsletter and is being integrated into Google Cloud's Document AI. Our work got accepted as a long paper in ACL '20.


Walmart Labs
2017-2018
Research Engineer

Developed a neural multimodal attribute-tagging framework to improve faceted product search using both product descriptions and product images. The work produced 2 US patents and a technical report published on arXiv. Other work on user modeling and product embeddings has also been patented.

Services
Interns & Mentees
  • Kolby Nottingham, PhD CSE @ UCI (Intern, AI2)
  • Zexue He, PhD CSE @ UCSD
  • Zhouhang Xie, MS CSE @ UCSD
  • Manish Borthakur, Math @ IIT Delhi
  • Shivam Lakhotia, MS, CSE @ UCSD
  • Kunal Jain, MS, CSE @ UCSD
  • Astuti Sharma, MS, CSE @ UCSD
  • Maximilian Halvax and Tatum Maston, Undergraduate, HDSI @ UCSD, as a part of HDSI scholar program
External Collaborators
Education

PhD, Computer Science and Engineering
University of California, San Diego
2018-2023

Thesis: User-centric Natural Language Processing
PhD Committee: Julian McAuley (Chair, UC San Diego), Taylor Berg-Kirkpatrick (UC San Diego), Gary Cottrell (UC San Diego), Lawrence Saul (UC San Diego), Sameer Singh (UC Irvine), and Arya Mazumdar (UC San Diego).


MS, Computer Science and Engineering
University of California, San Diego
2018-2020

CGPA: 4.0; Courses: Intro to NLP, Data Mining, Program Synthesis, Deep Learning for Sequences, Probabilistic Reasoning, Intro to Computer Vision, Convex Optimization, Human-centered Programming


MS, Data Science and Machine Learning
Indian Institute of Technology, Kharagpur
2015-2017

Summa cum laude (Gold Medalist); advised by Prof. Animesh Mukherjee as part of the CNeRG lab. Courses: Algorithms, Intro to ML, Multivariate Analysis, Complex Networks, Information Retrieval


Thanks to Jon Barron for this nice template!
Gorgeous Geisel Library cover art from here.