Bodhisattwa Prasad Majumder
bodhisattwa[at]ucsd.edu
Office @ 4146, CSE (EBU3B)
UC San Diego

I am a 3rd-year ML/NLP Ph.D. student in Computer Science at UC San Diego. My advisor is Prof. Julian McAuley in the Artificial Intelligence Group. I am interested in Language Generation, Conversational AI, Commonsense Knowledge, and Interpretability, with the goal of building empathetic, sensible, and personalized interactive systems.

I am a Qualcomm Innovation Fellow from North America for 2020-21.
I led UC San Diego (Team Bernard) in the finals of Amazon Alexa Prize, 2019.

So far, I have spent wonderful summers at:

[2020] Microsoft Research, NLP Group with Sudha Rao and Michel Galley working on clarifying question generation;
[2019] Google AI Research with Sandeep Tata and Marc Najork working on information extraction from templatic documents.

Previously, I graduated (2017) summa cum laude from IIT Kharagpur with an MS in Machine Learning. I wrote a best-selling book on NLP with O'Reilly Media.


CV  |  Google Scholar  |  Github
LinkedIn  |  Twitter

Publications·Experiences·Awards·Education·Book·Talks
Highlights

2021
  • [May] Honored to receive Friends of the International Center Fellowship, 2021 from UC San Diego.
  • [May] Paper w/ Harsh, Taylor, and Julian on enriching dialog with background stories got accepted in ACL, 2021.
  • [Mar] Work at Microsoft Research got accepted as a long paper in NAACL, 2021 w/ Sudha, Michel, and Julian.
  • [Mar] Invited talks on Grounding Language Generation with World Knowledge at Microsoft Research, IIT Kharagpur.
  • [Mar] Invited talks on Clarification Question Generation with Global Knowledge at Microsoft Research, UC San Diego.
  • [Feb] Excited to be featured by Jacobs School of Engineering, UC San Diego, for our QIF Fellowship!
  • [Jan] Launched GEM Benchmark (shared task in ACL, 2021) for evaluation in Natural Language Generation tasks!
  • [Jan] We are organizing SoCal ML & NLP symposium 2021 virtually! Please consider submitting by Feb 16, 2021.
  • [Jan] Joining Facebook AI Research for Summer 2021 to work with Y-Lan Boureau on Language Generation.
2020
  • [Oct] Invited talk on Achieving Commonsense in Text Generation at NC State. See slides here.
  • [Sep] Two long papers (#1, #2) w/ Harsh, Taylor, Shuyang, Jianmo, and Julian got accepted in EMNLP (main), 2020.
  • [Aug] Received Qualcomm Innovation Fellowship 2020 for our proposal on Conversational Recommender Systems.
  • [July] Our book Practical Natural Language Processing became a #1 best seller on Amazon! Learn more here.
  • [June] Excited that my internship work at Google got featured in the Google AI blog! Check it out for more.
  • [April] Work at Google AI got accepted in ACL, 2020 as a long paper w/ Navneet, Sandeep, James, Qi and Marc.
  • [Mar] New work on making deeper networks faster (ReZero) w/ Thomas, Henry, Gary and Julian.
  • [Feb] Organizing SoCal Machine Learning Symposium, 2020 w/ Julian, Jingbo and Hao at UC San Diego.
  • [Jan] Invited talk on Personalized NLG in the AI/ML track at CSE Research Open House, UC San Diego.
2018
  • [Sept] Joined the NLP group at CSE, UC San Diego in Fall 2018.
  • [July] Paper w/ Amrith Krishna, Rajesh Bhat and Pawan Goyal got published in CoNLL, 2018.

Here in xkcd.

Research

Machine understanding often suffers from a lack of world knowledge (see "Can machines think?" by Richard Feynman), which makes such machines risky and less reliable for practical human assistance. The fundamental challenge for a machine is to draw contextual inferences grounded in world knowledge, so that it appears more acceptable and trustworthy to human users.

I particularly focus on natural language generation models and investigate ways to generate text grounded in world-knowledge inferences that are 1) objective (commonsense, facts) or 2) subjective (personalization, empathy). I develop parametric and non-parametric methods of inferential knowledge selection for grounded text generation, in both supervised and unsupervised regimes, with successful applications in dialog systems, creative text generation, and explainable AI.

My previous research on NLP includes information extraction, sequence labeling, sequence generation, and natural language parsers. I also worked on statistical modeling, game theory, and machine learning applications.

My selected publications are listed here. The complete list of publications can be seen from my Google Scholar page.

Selected Publications
(* denotes equal contribution)

Unsupervised Enrichment of Persona-grounded Dialog with Background Stories
Bodhisattwa P. Majumder, Taylor Berg-Kirkpatrick, Julian McAuley, Harsh Jhamtani
Association for Computational Linguistics (ACL), 2021

An unsupervised gradient-based rewriting framework to adapt potential background stories to an existing persona-grounded dialog. We constrain the generation to be self-consistent with the persona and to adhere to the story.

The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
Bodhisattwa P. Majumder, as part of the GEM team
GEM workshop, Association for Computational Linguistics (ACL), 2021
pdf | website

GEM is a community-driven effort with the goal of improving how progress in natural language generation is measured. As a shared task at ACL 2021, we invite challenge set submissions for 11 datasets and 7 languages across various NLG challenges.

Ask what's missing and what's useful: Improving Clarification Question Generation using Global Knowledge
Bodhisattwa P. Majumder, Sudha Rao, Michel Galley, Julian McAuley
North American Chapter of the Association for Computational Linguistics (NAACL), 2021
pdf | code | slides

A two-stage framework that 1) estimates missing information from the global knowledge of similar contexts, and 2) conditionally generates useful questions using gradient-based decoding with a usefulness scorer at inference time. This work was done during an internship at Microsoft Research.

Like hiking? You probably enjoy nature: Persona-grounded Dialog with Commonsense Expansions
Bodhisattwa P. Majumder, Harsh Jhamtani, Taylor Berg-Kirkpatrick, Julian McAuley
Empirical Methods in Natural Language Processing (EMNLP), 2020
pdf | code | slides

A variational learning framework to capture commonsense implications of input persona in a persona-grounded dialog agent using richer expansions obtained from existing commonsense knowledge bases.

Interview: Large-scale Modeling of Media Dialog with Discourse Patterns and Knowledge Grounding
Bodhisattwa P. Majumder*, Shuyang Li*, Jianmo Ni, Julian McAuley
Empirical Methods in Natural Language Processing (EMNLP), 2020
pdf | code | data

The first large-scale analysis of discourse in media dialog ("Interview" - 105K conversations) and its impact on generative modeling of dialog turns, with a focus on interrogative patterns and use of external knowledge.

Bernard: A Stateful Neural Open-domain Socialbot
Bodhisattwa P. Majumder, Shuyang Li, Jianmo Ni, Henry Mao, Sophia Sun, Julian McAuley
Proceedings of Alexa Prize, Amazon, 2019-20
pdf

A framework for an engaging open-domain socialbot with a stateful autonomous dialog manager using non-deterministic finite automata to control multi-turn conversations. This work was done for Alexa Prize 2019.
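The automaton-driven control can be sketched at toy scale; the states, intents, and transition table below are hypothetical stand-ins for illustration, not Bernard's actual design:

```python
import random

# Hypothetical transition table for a stateful dialog manager:
# (current state, user intent) -> set of candidate next states.
TRANSITIONS = {
    ("greet", "smalltalk"): {"movies", "music"},
    ("movies", "opinion"): {"movies", "wrapup"},
    ("music", "opinion"): {"music", "wrapup"},
}

def step(state, intent, rng=random):
    # Non-determinism: several next states may be valid for the same
    # (state, intent) pair; the dialog manager picks one of them.
    # Unknown intents fall back to staying in the current state.
    candidates = TRANSITIONS.get((state, intent), {state})
    return rng.choice(sorted(candidates))

state = "greet"
state = step(state, "smalltalk")
assert state in {"movies", "music"}
```

Keeping the conversation state explicit in an automaton like this makes multi-turn behavior inspectable and controllable, in contrast to a purely end-to-end neural policy.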

Representation Learning for Information Extraction from Form-like Documents
Bodhisattwa P. Majumder, Navneet Potti, Sandeep Tata, James Wendt, Qi Zhao, Marc Najork
Association for Computational Linguistics (ACL), 2020
pdf | blog | slides

A novel approach to learning interpretable representations of target fields using spatial and contextual knowledge, for extracting structured information from form-like document images, even with unseen templates. This work was done at Google AI as part of a 2019 summer internship.

ReZero is All You Need: Fast Convergence at Large Depth
Thomas Bachlechner*, Bodhisattwa P. Majumder*, Henry Mao*, Gary Cottrell, Julian McAuley
Preprint. Work In Progress. arXiv, 2020
pdf | code

A novel deep neural network architecture that initializes an arbitrary layer as the identity map (ReZero), using a single additional learned parameter per layer to facilitate signal propagation through very deep networks.
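The core idea is small enough to sketch in NumPy. In this minimal illustration (the layer F, dimensions, and class name are toy choices, not the paper's Transformer setup), each block computes x + alpha * F(x) with the scalar alpha initialized to zero, so every block starts as the exact identity:

```python
import numpy as np

class ReZeroBlock:
    """Toy residual block with a ReZero-style gate (illustrative only)."""

    def __init__(self, dim, rng):
        self.W = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.alpha = 0.0  # single learned residual weight, initialized to zero

    def forward(self, x):
        f = np.tanh(x @ self.W)   # an arbitrary layer F(x)
        return x + self.alpha * f # identity map at initialization

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
block = ReZeroBlock(8, rng)

# With alpha = 0 the block is exactly the identity, so signals propagate
# unchanged through arbitrarily many stacked layers at the start of training.
assert np.allclose(block.forward(x), x)
```

Because each block is the identity at initialization, gradients flow through deep stacks without vanishing or exploding, and training can gradually turn layers "on" by moving alpha away from zero.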

Generating Personalized Recipes from Historical User Preferences
Bodhisattwa P. Majumder*, Shuyang Li*, Jianmo Ni, Julian McAuley
Empirical Methods in Natural Language Processing (EMNLP), 2019
pdf | code | data | poster

Media coverage: Science Node, UCSD CSE News, UCSD JSOE News

A new task of personalized recipe generation: expanding a recipe name and incomplete ingredient details into complete natural-text instructions aligned with the user's historical preferences.

Improving Neural Story Generation by Targeted Common Sense Grounding
Henry Mao, Bodhisattwa P. Majumder, Julian McAuley, Gary Cottrell
Empirical Methods in Natural Language Processing (EMNLP), 2019
pdf | code

A multi-task learning scheme to achieve quantitatively better common sense reasoning in language models by leveraging auxiliary training signals from datasets designed to provide common sense grounding.

Upcycle Your OCR: Reusing OCRs for Post-OCR Text Correction in Romanised Sanskrit
Amrith Krishna, Bodhisattwa P. Majumder, Rajesh S. Bhat, Pawan Goyal
Conference on Computational Natural Language Learning (CoNLL), 2018
pdf | code+data | supplementary

A state-of-the-art approach to post-OCR text correction for digitising texts in Romanised Sanskrit. This work was done in collaboration with CNeRG.

An 'Eklavya' approach to learning Context Free Grammar rules for Sanskrit using Adaptor Grammar
Amrith Krishna, Bodhisattwa P. Majumder, Anil K. Boga, Pawan Goyal
World Sanskrit Conference, 2018
pdf

A non-parametric Bayesian approach for learning (Probabilistic) Context Free Grammar productions for Sanskrit, applied to word-level supervised tasks such as compound type identification and identification of source and derived words for derivational nouns, as well as sentence-level structured prediction. This work was done at CNeRG.

Deep Recurrent Neural Networks for Product Attribute Extraction in eCommerce
Bodhisattwa P. Majumder*, Aditya Subramanian*, Abhinandan Krishnan, Shreyansh Gandhi, Ajinkya More
Preprint, arXiv, 2017
pdf | system description | video

We demonstrate the potential of recurrent neural structures for product attribute extraction by improving overall F1 scores over previous benchmarks. This helped Walmart e-commerce achieve significant coverage of important facets and attributes of products. This work from Walmart Labs was later followed by a US patent.

Distributed Semantic Representations of Retail Products based on Large-scale Transaction Logs
Bodhisattwa P. Majumder*, Sumanth S Prabhu*, Julian McAuley
2018
report

We processed 18 million transactions covering 325,548 unique products from 1,551 categories to obtain vector representations that preserve product analogies. These representations were effective in identifying substitutes and complements. This work was done at Walmart Labs.
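The flavor of this approach can be illustrated at toy scale. The baskets below are invented, and the report's actual method may differ; this sketch uses a simple product co-occurrence matrix plus truncated SVD, a standard way to obtain distributional product vectors from transaction logs:

```python
import numpy as np

# Hypothetical transaction logs: each basket is one shopping trip.
baskets = [
    ["bread", "butter", "jam"],
    ["bread", "butter"],
    ["chips", "salsa"],
    ["chips", "salsa", "soda"],
]
products = sorted({p for b in baskets for p in b})
idx = {p: i for i, p in enumerate(products)}

# Count how often two products appear in the same basket.
C = np.zeros((len(products), len(products)))
for basket in baskets:
    for a in basket:
        for b in basket:
            if a != b:
                C[idx[a], idx[b]] += 1

# A low-rank factorization of the co-occurrence matrix gives each
# product a dense vector.
U, S, _ = np.linalg.svd(C)
vecs = U[:, :2] * S[:2]

def sim(a, b):
    va, vb = vecs[idx[a]], vecs[idx[b]]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9)

# Products bought together (complements) end up with closer vectors
# than products from unrelated baskets.
assert sim("chips", "salsa") > sim("chips", "butter")
```

At the scale of 18 million transactions one would use a streaming or negative-sampling method rather than a dense SVD, but the underlying intuition (products are characterized by what they co-occur with) is the same.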

Lolcats meet Philosoraptors - What's in a 'meme'? Understanding the Dynamics of Image Macros in Social Media
Bodhisattwa P. Majumder, Amrith Krishna, Unni Krishnan, Anil K. Boga, Animesh Mukherjee
Preprint, arXiv, 2018
pdf | slides

How similar are the dynamics of meme-based communities to those of text-based communities? We try to explain community dynamics by categorising each day based on temporal variations in user engagement. Work done at CNeRG.

Patents
  • A System for Information Extraction From Form-Like Documents, Google, 2020
  • REDCLAN - RElative Density based CLustering and Anomaly Detection, Wal-mart, 2018
  • Automated Extraction of Product Attributes from Images, Wal-mart, 2018
  • System and Method for Product Attribute Extraction Using a Deep Recurrent System, Wal-mart, 2017
  • Analytical Determination of Competitive Interrelationship between Item Pairs, Wal-mart, 2017
Experiences

Microsoft Research, Redmond
Summer, 2020
Research Intern with Sudha Rao and Michel Galley in the Natural Language Processing Group.

Developed a novel framework that estimates missing information using the global knowledge of similar contexts to generate useful clarification questions. Our work was accepted as a long paper at NAACL '21.


Amazon Alexa Prize
2019-2020
Team Leader of Bernard, UC San Diego

Media Coverage: cnet
Built a free-form social conversational agent as a finalist in the Amazon Alexa Prize Challenge 2019-2020, along with 9 other finalist universities. We were awarded $250,000 to support our research on dialog systems.


Google AI, Mountain View
Summer, 2019
Research Intern with Sandeep Tata and Navneet Potti from Team Juicer.

Media Coverage: Google AI blog, Google Engineering Newsletter (Intern Spotlight)
Developed an information extraction framework for form-like documents using representation learning. The work was published as an Intern Spotlight article in the Google-wide newsletter and is being integrated with Google Cloud's Document AI. Our work was accepted as a long paper at ACL '20.


Walmart Labs
2017-2018
Research Engineer

Developed a neural multimodal attribute-tagging framework to improve faceted product search using both product descriptions and product images. The work produced 2 US patents and a technical report published on arXiv. Other work on user modeling and product embeddings has also been patented.

Services
Mentees
  • Shivam Lakhotia, MS, CSE @ UCSD
  • Maximilian Halvax and Tatum Maston, Undergraduate, HDSI @ UCSD, as a part of HDSI scholar program
Awards
  • [2021] Recipient of Friends of the International Center Fellowship, UC San Diego
  • [2020] Recipient of Qualcomm Innovation Fellowship, 2020 from North America
  • [2020] Nominated by UC San Diego (one of three from Dept. of CSE) for Microsoft Research PhD Fellowship 2021
  • [2019] Nominated by UC San Diego (one of two from Dept. of CSE) for Google PhD Fellowship 2020
  • [2019] Intern Spotlight in Google-wide Engineering Newsletter for summer internship project with the Juicer Team
  • [2019] Team Leader for Team Bernard representing UC San Diego, a finalist in Alexa Prize 2019; awarded $250,000
  • [2018] Department Fellowship, 1st-year of PhD, Dept. of CSE, UC San Diego
  • [2017] Gold medal and Endowment for the highest academic performance (Rank-1) in Masters, IIT Kharagpur
  • [2016] Finalist, Data Science Game '16, Paris; Represented India (1 out of 3 teams), International Rank 14
  • [2015] Scholarship for academic excellence (obtaining CGPA > 9.5), Indian Statistical Institute
  • [2014] Officially recognized as a contributor to NSF-CPS project (CNS-1136040) by the PIs, Kansas State University
  • [2011] 4-year scholarship for academic excellence, Ministry of Human Resource Development, India
Education

PhD, Computer Science and Engineering
University of California, San Diego
2018-Present

Advised by Prof. Julian McAuley on Achieving Conversational Interpretability for Machine Predictions.


MS, Computer Science and Engineering
University of California, San Diego
2018-2020

CGPA: 4.0; Courses: Intro to NLP, Data Mining, Program Synthesis, Deep Learning for Sequences, Probabilistic Reasoning, Intro to Computer Vision, Convex Optimization, Human-centered Programming


MS, Data Science and Machine Learning
Indian Institute of Technology, Kharagpur
2015-2017

Summa cum laude (Gold Medalist); Advised by Prof. Animesh Mukherjee as a part of CNeRG lab. Courses: Algorithms, Intro to ML, Multivariate Analysis, Complex Networks, Information Retrieval

Book: Practical Natural Language Processing by O'Reilly

Practical Natural Language Processing
O'Reilly Media, 2020
Sowmya Vajjala, Bodhisattwa P. Majumder, Anuj Gupta, Harshit Surana
amazon | safari online | website

Practical Natural Language Processing distills our collective wisdom on building real-world applications: data collection, working with noisy data and signals, incremental development of solutions, and the issues involved in deploying solutions as part of a larger application - bridging the gap between current textbooks and online offerings.

Highlights:

  • Endorsed by Zach Lipton, Sebastian Ruder, Marc Najork et al.
  • #1 Best seller on Amazon.com in the Data Mining category
  • #1 New release on Amazon.com in the Natural Language Processing category
  • Read and adapted by 20+ AI companies and 6 academic courses internationally

Talks

Grounding Language Generation with World Knowledge | slides

  • [2021] at Microsoft Research, India
  • [2021] at IIT Kharagpur
  • [2020] at NC State, AI Club
  • [2020] at INFORMS 2020, Mining and Learning on Graphs session, Washington, DC

Clarification Question Generation using Global Knowledge | slides

  • [2021] at Microsoft Research, Redmond
  • [2021] at UC San Diego, AI Research Seminar

Personalization, NLP and others

  • [2020] at UC San Diego, CSE Research Open House, on Personalization in Natural Language Generation
  • [2018] at Indian Institute of Management Calcutta, Industry Conclave & Graduate Orientation, on NLP - a primer
  • [2017] at Walmart Labs, on Information Extraction from Images - Application in e-Commerce
  • [2017] at Indian Statistical Institute, on Deep Neural Network: in light of Optimization and Regularization


Thanks to Jon Barron for this nice template! Art by Bekin M ~