2023
Training data for Large Language Models: how can we collect it ethically and study it?
May 2 2023: Invited talk at the Linguistic Circle of Copenhagen
Writing Assistance or PlagAIrism? How Language Models Are Changing Our View of Knowledge
April 29 2023: Invited talk at the Complexity of Knowledge symposium (Santa Fe Institute)
Machine Reading, Fast and Slow: When Do Models “Understand” Language?
April 24 2023: Invited talk at the AI and the Barrier of Meaning seminar (Santa Fe Institute)
Data governance and transparency for Large Language Models: lessons from 🌸 BigScience Workshop
March 30 2023: Invited talk at the AI UK Fringe (Queen’s University Belfast, online)
Data governance and transparency for Large Language Models: lessons from 🌸 BigScience Workshop
February 16 2023: Invited talk at the Institute for Advanced Sociology (Linköping University) [SLIDES]
2021
The Peer Review Process and Widening NLP
11 November 2021: Panel with Bahar Mehmani and Cecilia Superchi at Widening NLP (co-located with EMNLP 2021)
Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics
10 November 2021: presentation at the Workshop on Insights from Negative Results (co-located with EMNLP 2021)
Just what do you think you’re doing, Dave?
10 November 2021: presentation at the NLLP Workshop (co-located with EMNLP 2021)
Changing the world by changing the data
09 November 2021: invited talk at Data-centric AI Day (France is AI) [SLIDES]
Panel Discussion on Trustworthy NLP (Google)
13 October 2021: Panel with Kellie Webster and Hanna Wallach at Google’s Trustworthy NLP Workshop
Changing the world by changing the data
21 September 2021: invited talk at Machine Learning for NLP (Toronto ML Series) [SLIDES]
Changing the world by changing the data
3 August 2021: oral presentation at ACL 2021 [SLIDES]
Reviewing Natural Language Processing research.
29 June 2021: tutorial at TALN 2021 (with Kevin Cohen, Karën Fort, Margot Mieskes and Aurélie Névéol)
A primer in BERTology: what we know about how BERT works
June 17 2021: invited talk at L3-AI [SLIDES]
A primer in BERTology: what we know about how BERT works
June 8 2021: presentation at NAACL 2021 [SLIDES]
The quest for difficult benchmarks in question answering and reading comprehension.
7 May 2021: invited talk at LTI Colloquium at Carnegie Mellon University [URL] [SLIDES]
Reviewing Natural Language Processing research.
20 April 2021: tutorial at EACL 2021 (with Kevin Cohen, Karën Fort, Margot Mieskes and Aurélie Névéol)
2020
A guide to the dataset explosion in QA, NLI, and commonsense reasoning. 13 Dec 2020: Tutorial at COLING 2020 (online). [URL] [SLIDES]
When BERT plays the lottery, all tickets are winning. 20 Nov 2020: invited talk at BlackBox NLP (online). [URL]
How Much Should Conversational AI Developers know about ML and Linguistics? 16 Jun 2020: The Level 3 AI Assistant Conference, panel discussion with Emily M. Bender, Thomas Wolf, and Vladimir Vlasov (online). [URL]
2019
Towards AI Complete Question Answering: Combining Text-based, Unanswerable and World Knowledge Questions
11 December 2019: Allen Institute for Artificial Intelligence (Seattle, USA).
Text Representation Learning and Compositional Semantics (ACML 2019 tutorial)
November 17 2019: Nagoya, Japan [URL]
The dark secrets of BERT
11 November 2019: RIKEN Center for Computational Science (Tokyo, Japan).
Word embeddings: 6 years later
22 May 2019: UMass Amherst (USA). [SLIDES]
2018
What’s in your embedding, and how it predicts task performance. 27 September 2018: UMass Amherst (USA). [SLIDES], [VIDEO]. A version of this talk was also presented on August 30 2018 at IT University of Copenhagen (Denmark).
Distributional compositional semantics in the age of word embeddings.
7 May 2018: Tutorial at LREC 2018, Miyazaki, Japan. [URL]
2016
Detecting linguistic relations with analogies: what works and what doesn’t. July 15 2016: Google Tokyo seminar, Tokyo, Japan. [SLIDES]