Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
Yahoo Finance: https://yahoofinance.com
MasterClass: https://masterclass.com/lexpod to get 15% off
NetSuite: http://netsuite.com/lex to get a free product tour
LMNT: https://drinkLMNT.com/lex to get a free sample pack
Eight Sleep: https://eightsleep.com/lex to get $350 off
TRANSCRIPT:
https://lexfridman.com/romanyampolsk...
EPISODE LINKS:
Roman's X: / romanyam
Roman's Website: http://cecs.louisville.edu/ry
Roman's AI book: https://amzn.to/4aFZuPb
PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: • Lex Fridman Podcast
Clips playlist: • Lex Fridman Podcast Clips
OUTLINE:
0:00 Introduction
2:20 Existential risk of AGI
8:32 Ikigai risk
16:44 Suffering risk
20:19 Timeline to AGI
24:51 AGI Turing test
30:14 Yann LeCun and open source AI
43:06 AI control
45:33 Social engineering
48:06 Fearmongering
57:57 AI deception
1:04:30 Verification
1:11:29 Self-improving AI
1:23:42 Pausing AI development
1:29:59 AI Safety
1:39:43 Current AI
1:45:05 Simulation
1:52:24 Aliens
1:53:57 Human mind
2:00:17 Neuralink
2:09:23 Hope for the future
2:13:18 Meaning of life
SOCIAL:
Twitter: / lexfridman
LinkedIn: / lexfridman
Facebook: / lexfridman
Instagram: / lexfridman
Medium: / lexfridman
Reddit: / lexfridman
Support on Patreon: / lexfridman