Artificial Intelligence

Yann LeCun: Human-Level Artificial Intelligence | AI Podcast Clips

This is a clip from a conversation with Yann LeCun from Aug 2019. New full episodes every Mon & Thu and 1-2 new clips or a new non-podcast video on all other days. You can watch the full conversation here: https://www.youtube.com/watch?v=SGSOCuByo24
(more links below)

Podcast full episodes playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4

Podcast clips playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

Podcast website:
https://lexfridman.com/ai

Podcast on iTunes:
https://apple.co/2lwqZIr

Podcast on Spotify:
https://spoti.fi/2nEwCF8

Podcast RSS:
https://lexfridman.com/category/ai/feed/

Note: I select clips with insights from these much longer conversations with the hope of making these ideas more accessible and discoverable. Ultimately, this podcast is a small side hobby for me with the goal of sharing and discussing ideas. For now, I post a few clips every Tue & Fri. I did a poll, and 92% of people either liked or loved the posting of daily clips, 2% were indifferent, and 6% hated it, some suggesting that I post them on a separate YouTube channel. I hear the 6% and partially agree, so I am torn about the whole thing. I tried creating a separate clips channel, but the YouTube algorithm makes it very difficult for that channel to grow unless the main channel is already very popular. So for a little while, I'll keep posting clips on the main channel. I ask for your patience, and that you see these clips as supporting the dissemination of knowledge contained in nuanced discussion. If you enjoy them, consider subscribing, sharing, and commenting.

Yann LeCun is one of the fathers of deep learning, the recent revolution in AI that has captivated the world with the possibility of what machines can learn from data. He is a professor at New York University, Vice President and Chief AI Scientist at Facebook, and a co-recipient of the Turing Award for his work on deep learning. He is probably best known for his pioneering work on convolutional neural networks, in particular their early application to optical character recognition.

Subscribe to this YouTube channel or connect on:
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
– Support on Patreon: https://www.patreon.com/lexfridman


Comments (11)

  1. This is a clip from a conversation with Yann LeCun from Aug 2019. New full episodes every Mon & Thu and 1-2 new clips or a new non-podcast video on all other days. If you enjoy it, subscribe, comment, and share. You can watch the full conversation here: https://www.youtube.com/watch?v=SGSOCuByo24

  2. Now we are getting somewhere. This is the most practical talk I have seen on your channel. Good work!

  3. What a fantastic conversation.

  4. We currently have a President who is stupid 3 ways.

  5. I know of human AI. These guys are shills.

  6. A human child is born with a preprogrammed system that is designed to go through a series of stages of maturation to achieve adult status. Different stages of this process depend on programming that causes the child to spend time training different parts of its intelligent system. The program for this process has evolved over hundreds of millions of years, from the first metazoan organisms with a bilateral body plan. Humans are distinguished by a particularly long period of time devoted to this process, and by more potential for experience to modulate the outcome of the process. But the process is still rooted in heritable, complex biological goals. It is easy to see reinforcement learning as a model for this kind of process. But that kind of learning starts with some kind of model of what is to be trained. In humans it depends on an information-processing system that still appears to be much more powerful than current electronic computers for the kinds of tasks it is particularly good at.
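The "reinforcement learning as a model for this kind of process" idea in the comment above can be made concrete with a tabular Q-learning sketch. Everything here is a hypothetical toy illustration (a 5-state chain world, invented constants), not anything from the discussion:

```python
import random

# Tabular Q-learning on a hypothetical 5-state chain: the agent starts at
# state 0 and earns a reward of 1 for reaching state 4. Actions move one
# step left (-1) or right (+1), clipped at the ends of the chain.
random.seed(0)
n_states, actions = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
lr, gamma, eps = 0.5, 0.9, 0.3

for _ in range(300):                                  # training episodes
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)         # clipped transition
        r = 1.0 if s2 == n_states - 1 else 0.0
        # the core update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[(s, a)] += lr * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

print(round(Q[(3, +1)], 2))
```

After training, the greedy policy (argmax over Q at each state) moves right along the chain, and the value of the rewarded action from state 3 converges toward 1.0. This matches the comment's point that such learning presupposes a fixed model of what is to be trained: the states, actions, and reward are all given in advance.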

  7. Is something wrong with audio?

  8. It's not a sequence of peaks; it's more like a big mountain with many crevices and vertical walls of different sizes as obstacles along the way to the summit. Initially, when you are far away, you can only see the big mountain and plan a general route to the summit; then, as you approach, you bump into the smaller obstacles. In that respect, Allen Newell's General Problem Solver was the big mountain: it formalized the basic principle for solving problems – graph search. That principle is the basis of AI; it tells you that it is possible for a computer program to solve any problem without having a prior explicit algorithm for that particular problem, which is the essence of intelligence in general, and it gives the approach for doing so – graph search. That's why, I guess, they got so excited back then when they discovered it and thought they had solved AI and that we would have general-purpose AI in 10 years. Then they bumped into the first crevice – graph search is exponential – and 50 years later we are still trying to cross over that crevice. Throwing tons of data or petaflops of computing power at it won't solve it; that's the nature of exponential problems – brute-force approaches don't work there.
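The exponential blow-up the comment describes is easy to demonstrate: a brute-force breadth-first graph search over a space with branching factor b must expand on the order of b^d states to find a goal at depth d. A small Python sketch, where the state space (strings over a 3-letter alphabet) and the goal are invented purely for illustration:

```python
from collections import deque

def brute_force_search(start, successors, is_goal, max_depth):
    """Breadth-first graph search; expands on the order of b**d states at depth d."""
    frontier = deque([(start, 0)])
    visited = {start}
    expanded = 0
    while frontier:
        state, depth = frontier.popleft()
        expanded += 1
        if is_goal(state):
            return state, expanded
        if depth < max_depth:
            for nxt in successors(state):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, depth + 1))
    return None, expanded

# Toy problem: states are strings over {'a', 'b', 'c'}, so the branching
# factor is b = 3; the goal is a specific string at depth 8.
successors = lambda s: [s + c for c in "abc"]
found, n = brute_force_search("", successors, lambda s: s == "cccccccc", 10)
print(found, n)  # the goal is the last depth-8 state expanded: (3**9 - 1) // 2 = 9841
```

Going from depth 8 to depth 16 here would multiply the work by another 3**8 ≈ 6561 – the crevice the comment is talking about. Heuristics and learned evaluation functions prune the search, but the underlying worst case stays exponential.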

  9. ๐Ÿ‘๐Ÿ‘ Lex and Yann Thank You both for this illuminating discussion. ๐Ÿ‘Œ๐Ÿ‘Œ

    My thoughts: present-day ML is mostly derived from calculus methods closely related to partial derivatives for gradient descent, which seems to consume crazy amounts of examples and even then achieves nearly zero transfer learning to future novel configurations, i.e. it does not actually retain learning, which is not at all like human babies'/children's YOLO;
    .. and after viewing this video I'm motivated to add that almost all ML models (except maybe Hinton's capsule networks) so far almost completely ignore abstraction/encapsulation/chunking of data, at least at the "correct", that is to say "human-recognizable/verifiable", symbolic granularity.

    So, similar to Hinton, I believe AGI is more likely to evolve from neuroscience ..(contact me for the missing ingredients of the secret sauce 😘).. if the UofT neurotech guys would just listen to me 👍😁

    .. If it works when we get to it, either I'll let you know what I find out, or I'll let Her tell you herself 😊
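The "partial derivatives for gradient descent" that the comment above refers to can be sketched in a few lines: a hypothetical one-parameter least-squares fit, using the analytic derivative directly rather than any ML framework.

```python
# Gradient descent on a one-parameter least-squares loss:
#   L(w) = mean((w * x - y)**2),  dL/dw = mean(2 * x * (w * x - y))
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # generated by y = 2x, so the optimum is w = 2

w, lr = 0.0, 0.01
for _ in range(500):
    grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad           # step against the gradient
print(round(w, 3))           # converges to 2.0
```

Each step only nudges one parameter down the local slope of the loss; scaled up to millions of parameters, this is the workhorse behind the data-hungry training the comment criticizes.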

  10. A related article:
    Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181-204.

  11. Will human-level AI, if it is ever built to be socialized, have inherent restrictions like humans do?
