Google is excited to be a Diamond Sponsor of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), a prestigious annual conference taking place this week in Sentosa, Singapore. Google has a strong presence at the conference, with over 65 accepted papers and active involvement in 11 workshops and tutorials. We are also proud to be a Major Sponsor of the Widening NLP (WiNLP) workshop, which focuses on promoting diversity and inclusivity in AI and ML research.
We invite you to visit the Google booth at the conference to meet researchers who are at the forefront of NLP innovation and to take part in activities such as demos and Q&A sessions. For updates on booth activities, follow our @GoogleAI accounts on X (Twitter) and LinkedIn.
Below is a list of the research being presented by Google at EMNLP 2023. Please note that the schedule is subject to change, so we recommend visiting the Google booth for the most up-to-date information.
– “Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs” by Jiefeng Chen*, Jinsung Yoon, Sayna Ebrahimi, Sercan O Arik, Tomas Pfister, Somesh Jha
– “A Comprehensive Evaluation of Tool-Assisted Generation Strategies” by Alon Jacovi*, Avi Caciularu, Jonathan Herzig, Roee Aharoni, Bernd Bohnet, Mor Geva
– “1-PAGER: One Pass Answer Generation and Evidence Retrieval” by Palak Jain, Livio Baldini Soares, Tom Kwiatkowski
– “MaXM: Towards Multilingual Visual Question Answering” by Soravit Changpinyo, Linting Xue, Michal Yarom, Ashish V. Thapliyal, Idan Szpektor, Julien Amelot, Xi Chen, Radu Soricut
– “SDOH-NLI: A Dataset for Inferring Social Determinants of Health from Clinical Notes” by Adam D. Lelkes, Eric Loreaux*, Tal Schuster, Ming-Jun Chen, Alvin Rajkomar
– “Machine Reading Comprehension Using Case-based Reasoning” by Dung Ngoc Thai, Dhruv Agarwal, Mudit Chaudhary, Wenlong Zhao, Rajarshi Das, Jay-Yoon Lee, Hannaneh Hajishirzi, Manzil Zaheer, Andrew McCallum
– “Cross-lingual Open-Retrieval Question Answering for African Languages” by Odunayo Ogundepo, Tajuddeen Gwadabe, Clara E. Rivera, Jonathan H. Clark, Sebastian Ruder, David Ifeoluwa Adelani, Bonaventure F. P. Dossou, Abdou Aziz Diop, Claytone Sikasote, Gilles Hacheme, Happy Buzaaba, Ignatius Ezeani, Rooweither Mabuya, Salomey Osei, Chris Chinenye Emezue, Albert Kahira, Shamsuddeen Hassan Muhammad, Akintunde Oladipo, Abraham Toluwase Owodunni, Atnafu Lambebo Tonja, Iyanuoluwa Shode, Akari Asai, Anuoluwapo Aremu, Ayodele Awokoya, Bernard Opoku, Chiamaka Ijeoma Chukwuneke, Christine Mwase, Clemencia Siro, Stephen Arthur, Tunde Oluwaseyi Ajayi, Verrah Akinyi Otiende, Andre Niyongabo Rubungo, Boyd Sinkala, Daniel Ajisafe, Emeka Felix Onwuegbuzia, Falalu Ibrahim Lawan, Ibrahim Said Ahmad, Jesujoba Oluwadara Alabi, Chinedu Emmanuel Mbonu, Mofetoluwa Adeyemi, Mofya Phiri, Orevaoghene Ahia, Ruqayya Nasir Iro, Sonia Adhiambo
– “On Uncertainty Calibration and Selective Generation in Probabilistic Neural Summarization: A Benchmark Study” by Polina Zablotskaia, Du Phan, Joshua Maynez, Shashi Narayan, Jie Ren, Jeremiah Zhe Liu
– “Epsilon Sampling Rocks: Investigating Sampling Strategies for Minimum Bayes Risk Decoding for Machine Translation” by Markus Freitag, Behrooz Ghorbani*, Patrick Fernandes*
– “Sources of Hallucination by Large Language Models on Inference Tasks” by Nick McKenna, Tianyi Li, Liang Cheng, Mohammad Javad Hosseini, Mark Johnson, Mark Steedman
– “Don’t Add, Don’t Miss: Effective Content Preserving Generation from Pre-selected Text Spans” by Aviv Slobodkin, Avi Caciularu, Eran Hirsch, Ido Dagan
– “What Makes Chain-of-Thought Prompting Effective? A Counterfactual Study” by Aman Madaan*, Katherine Hermann, Amir Yazdanbakhsh
– “Understanding HTML with Large Language Models” by Izzeddin Gur, Ofir Nachum, Yingjie Miao, Mustafa Safdari, Austin Huang, Aakanksha Chowdhery, Sharan Narang, Noah Fiedel, Aleksandra Faust
– “Improving the Robustness of Summarization Models by Detecting and Removing Input Noise” by Kundan Krishna*, Yao Zhao, Jie Ren, Balaji Lakshminarayanan, Jiaming Luo, Mohammad Saleh, Peter J. Liu
– “In-Context Learning Creates Task Vectors” by Roee Hendel, Mor Geva, Amir Globerson
– “Pre-training Without Attention” by Junxiong Wang, Jing Nathan Yan, Albert Gu, Alexander M Rush
– “MUX-PLMs: Data Multiplexing for High-Throughput Language Models” by Vishvak Murahari, Ameet Deshpande, Carlos E Jimenez, Izhak Shafran, Mingqiu Wang, Yuan Cao