Research Engineer
Meta AI
prajj at meta dot com

Bio

I’m Praj. I work as an AI Researcher on the Generative AI team at Meta AI, building foundation models, the next generation of LLaMA models. I am a core contributor to LLaMA 3, LLaMA 2, and LLaMA 2 Long, which power Meta’s flagship AI assistant, meta.ai. Previously, I was an AI Resident within Reality Labs and Fundamental AI Research (FAIR), working on offline reinforcement learning. My Google Scholar profile can be found here.

Prior to Meta, I was a CS graduate student at the University of Texas at Dallas, where I worked on commonsense reasoning under the supervision of Prof. Vincent Ng. My thesis focused on improving commonsense reasoning through adversarial learning.

Publications


The Llama 3 Herd of Models

Generative AI, Meta
Paper


Effective Long-Context Scaling of Foundation Models

W. Xiong, J. Liu, I. Molybog, H. Zhang, P. Bhargava, R. Hou, L. Martin, R. Rungta, K. Sankararaman, B. Oguz, M. Khabsa, H. Fang, Y. Mehdad, S. Narang, K. Malik, A. Fan, S. Bhosale, S. Edunov, M. Lewis, S. Wang, H. Ma
Paper


Llama 2: Open Foundation and Fine-Tuned Chat Models

H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, D. Bikel, L. Blecher, C. Ferrer, M. Chen, G. Cucurull, D. Esiobu, J. Fernandes, J. Fu, W. Fu, B. Fuller, C. Gao, V. Goswami, N. Goyal, A. Hartshorn, S. Hosseini, R. Hou, H. Inan, M. Kardas, V. Kerkez, M. Khabsa, I. Kloumann, A. Korenev, P. Koura, M. Lachaux, T. Lavril, J. Lee, D. Liskovich, Y. Lu, Y. Mao, X. Martinet, T. Mihaylov, P. Mishra, I. Molybog, Y. Nie, A. Poulton, J. Reizenstein, R. Rungta, K. Saladi, A. Schelten, R. Silva, E. Smith, R. Subramanian, X. Tan, B. Tang, R. Taylor, A. Williams, J. Kuan, P. Xu, Z. Yan, I. Zarov, Y. Zhang, A. Fan, M. Kambadur, S. Narang, A. Rodriguez, R. Stojnic, S. Edunov, T. Scialom
Paper Official Announcement Code


Sequence Modeling is a Robust Contender for Offline Reinforcement Learning

Prajjwal Bhargava, Rohan Chitnis, Alborz Geramifard, Shagun Sodhani, Amy Zhang
International Conference on Learning Representations (ICLR) 2024 arXiv Code Bibtex


AUTODIAL: Efficient Asynchronous Task-Oriented Dialogue Model

Prajjwal Bhargava, Pooyan Amini, Shahin Shayandeh, Chinnadhurai Sankar
arXiv Code Bibtex


DiscoSense: Commonsense Reasoning with Discourse Relations

Prajjwal Bhargava and Vincent Ng
EMNLP 2022 arXiv Code Bibtex


Commonsense Knowledge Reasoning and Generation with Pre-trained Language Models: A Survey

Prajjwal Bhargava and Vincent Ng
AAAI 2022 Paper Poster Bibtex


Generalization in NLI: Ways to [Not] Go Beyond Simple Heuristics

Prajjwal Bhargava, Aleksander Drozd, Anna Rogers
EMNLP Workshop on Insights from Negative Results 2022 Paper Code (Huggingface) Code (Pytorch Lightning) Bibtex Presentation video Poster Slides


Adaptive Transformers for Learning Multimodal Representations

Prajjwal Bhargava
ACL SRW 2022 Paper Code Bibtex Presentation Video


On Generalization of Detection Models for Unconstrained Environments

Prajjwal Bhargava
ICCV AutoNUE Workshop 2022 Paper Code Bibtex Poster


Incremental Learning in Person Re-Identification

Prajjwal Bhargava
arXiv preprint Paper Code Bibtex Poster


Side projects

fluence

Winner of the PyTorch Global Hackathon 2020. A PyTorch deep learning library focused on compute-efficient and debiasing algorithms for transformer-based models in NLP research. Contains implementations of adaptive attention, sparsity, LayerDrop, debiasing, and pruning utilities.
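
As a rough illustration of one of these techniques, here is a minimal LayerDrop sketch in plain PyTorch. This is not fluence's actual API; the module, layer count, and drop probability are assumptions made for the example.

```python
# Illustrative sketch (not fluence's API): LayerDrop randomly skips whole
# transformer layers during training, making the model more robust to
# pruning layers at inference time.
import torch
import torch.nn as nn


class LayerDropEncoder(nn.Module):
    def __init__(self, num_layers=6, d_model=256, nhead=8, layerdrop=0.2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            for _ in range(num_layers)
        )
        self.layerdrop = layerdrop  # probability of skipping each layer

    def forward(self, x):
        for layer in self.layers:
            # At train time, skip this layer entirely with probability `layerdrop`.
            if self.training and torch.rand(1).item() < self.layerdrop:
                continue
            x = layer(x)
        return x


# Usage: encode a batch of 4 sequences of length 16 with hidden size 256.
encoder = LayerDropEncoder()
out = encoder(torch.randn(4, 16, 256))
```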

Open source contributions

Contributions to the PyTorch ecosystem

Autonomous Object Detection

This project focuses on 2D object detection with PyTorch. Users can leverage models from `torchvision` and the datasets provided in this project (`idd`, `cityscapes`, `bdd`) to train and evaluate models. Support for incremental learning was also added.
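
As a rough illustration of this workflow, here is a minimal sketch that adapts a standard `torchvision` detection model to a custom class count and runs one training step. It does not use this project's dataset wrappers or API; the number of classes, tensor shapes, and targets are assumptions made for the example.

```python
# Illustrative sketch (not this project's API): fine-tuning a torchvision
# Faster R-CNN on a driving-scene dataset such as IDD, Cityscapes, or BDD.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 11  # assumed: 10 object categories + background

# Start from a COCO-pretrained Faster R-CNN and swap in a new box predictor.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# In train mode the model takes a list of images and a list of target dicts
# and returns a dict of losses.
model.train()
images = [torch.rand(3, 512, 512)]
targets = [{
    "boxes": torch.tensor([[50.0, 60.0, 200.0, 220.0]]),  # [x1, y1, x2, y2]
    "labels": torch.tensor([1]),
}]
losses = model(images, targets)
loss = sum(losses.values())
loss.backward()
```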