Cognitive and computational building blocks for more human-like language in machines

Duration: 1 hour 23 mins

About this item
Description: Online talk at the 2020 Cambridge Language Sciences Annual Symposium by Professor Josh Tenenbaum (Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology)

Chair: Dr Andrew Caines (Dept. of Computer Science & Technology, University of Cambridge)

Respondent: Dr Guy Emerson (Executive Director, Cambridge Language Sciences)
 
Created: 2020-12-08 19:27
Collection: Cambridge Language Sciences
Language Sciences Annual Symposium 2020 - What Next? Future Directions in Language Research
Publisher: University of Cambridge
Copyright: Joshua B. Tenenbaum
Language: eng (English)
Distribution: World (downloadable)
Keywords: language learning; machine learning; cognitive science; AI
Credits: Producer: Venue AV
Categories: iTunes - Language; iTunes - Engineering - Computer Science; iTunes - Language - Linguistics; iTunes - Psychology & Social Science
Explicit content: No
Aspect Ratio: 4:3
Screencast: No
 
Abstract: Speaker abstract: Humans learn language, building on more basic conceptual and computational resources whose precursors we can already see in infancy. These include capacities for causal reasoning, symbolic rule formation, rapid abstraction, and commonsense representations of events in terms of objects, agents and their interactions. I will talk about steps towards capturing these abilities in engineering terms, using tools from hierarchical Bayesian models, probabilistic programs, program induction, and neuro-symbolic architectures. I will show examples of how these tools have been applied in both cognitive science and AI contexts, and point to ways they might be useful in building more human-like language, learning and reasoning in machines.
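For readers unfamiliar with the tools named above, the following is a minimal, illustrative sketch (not taken from the talk) of Bayesian concept learning in the spirit of Tenenbaum's "number game": a learner rapidly abstracts a concept from a few positive examples by scoring candidate hypotheses with a size-principle likelihood. The hypothesis set, domain size and uniform prior here are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch (not from the talk) of Bayesian concept learning
# over integers: a few positive examples are enough to favour a small, specific
# hypothesis via the size principle. The hypothesis set, domain size (1..100)
# and uniform prior are illustrative assumptions.

def build_hypotheses(limit=100):
    """Candidate concepts, each represented by its extension (a set of integers)."""
    return {
        "even numbers": {n for n in range(1, limit + 1) if n % 2 == 0},
        "odd numbers": {n for n in range(1, limit + 1) if n % 2 == 1},
        "multiples of 10": {n for n in range(1, limit + 1) if n % 10 == 0},
        "powers of 2": {2 ** k for k in range(1, 8) if 2 ** k <= limit},
        "all numbers 1-100": set(range(1, limit + 1)),
    }

def posterior(examples, hypotheses):
    """P(h | examples) with a uniform prior and the size-principle likelihood:
    each example is assumed to be drawn uniformly from the concept's extension,
    so smaller concepts that still cover the data are preferred."""
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in examples):
            scores[name] = (1.0 / len(extension)) ** len(examples)
        else:
            scores[name] = 0.0
    total = sum(scores.values())
    return {name: (s / total if total else 0.0) for name, s in scores.items()}

if __name__ == "__main__":
    # Observing 16, 8 and 2 strongly favours "powers of 2" over "even numbers".
    for name, p in sorted(posterior([16, 8, 2], build_hypotheses()).items(),
                          key=lambda item: -item[1]):
        print(f"{name:>17}: {p:.3f}")
```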

Respondent: I find this work exciting because it shows how three disciplines (cognitive science, machine learning, and linguistics) can mutually support each other. While the talk seemed primarily motivated in terms of how machine learning and linguistics can be used to build better cognitive models, I also see the potential for building better machine learning models and better linguistic models. In the spirit of furthering this three-way conversation, I ask three questions, with one focusing on each discipline. From a cognitive point of view, I ask how we might model intuitive physics when it is at odds with real physics. From a linguistic point of view, I ask how we might generalise the proposed approach to learning grounded lexical semantics. From a machine learning point of view, I ask when we might expect human-like solutions for a task to be general solutions.
Available Formats
Format        Quality   Bitrate           Size
MPEG-4 Video  640x360   1.57 Mbits/sec    981.11 MB
WebM          640x360   440.15 kbits/sec  267.58 MB
iPod Video    480x360   485.37 kbits/sec  295.07 MB
MP3           44100 Hz  250.93 kbits/sec  152.55 MB
Auto * (allows the browser to choose a format it supports)