Preparing for an AI Economy (with Daniel Susskind)
On this episode, Daniel Susskind joins me to discuss disagreements between AI researchers and economists, how we can best measure AI's economic impact, how human values can influence economic outcomes, what meaningful work will remain for humans in the future, the role of commercial incentives in AI development, and the future of education.

You can learn more about Daniel's work here: https://www.danielsusskind.com

Timestamps:
00:00:00 Preview and intro
00:03:19 AI researchers versus economists
00:10:39 Measuring AI's economic effects
00:16:19 Can AI be steered in positive directions?
00:22:10 Human values and economic outcomes
00:28:21 What will remain for people to do?
00:44:58 Commercial incentives in AI
00:50:38 Will education move towards general skills?
00:58:46 Lessons for parents
--------
1:03:37
Will AI Companies Respect Creators' Rights? (with Ed Newton-Rex)
Ed Newton-Rex joins me to discuss the issue of AI models trained on copyrighted data, and how we might develop fairer approaches that respect human creators. We talk about AI-generated music, Ed's decision to resign from Stability AI, the industry's attitude towards rights, authenticity in AI-generated art, and what the future holds for creators, society, and living standards in an increasingly AI-driven world.

Learn more about Ed's work here: https://ed.newtonrex.com

Timestamps:
00:00:00 Preview and intro
00:04:18 AI-generated music
00:12:15 Resigning from Stability AI
00:16:20 AI industry attitudes towards rights
00:26:22 Fairly Trained
00:37:16 Special kinds of training data
00:50:42 The longer-term future of AI
00:56:09 Will AI improve living standards?
01:03:10 AI versions of artists
01:13:28 Authenticity and art
01:18:45 Competitive pressures in AI
01:24:06 Priorities going forward
--------
1:27:14
AI Timelines and Human Psychology (with Sarah Hastings-Woodhouse)
On this episode, Sarah Hastings-Woodhouse joins me to discuss what benchmarks actually measure, AI's development trajectory in comparison to other technologies, tasks that AI systems can and cannot handle, capability profiles of present and future AIs, the notion of alignment by default, and the leading AI companies' vague AGI plans. We also discuss the human psychology of AI, including the feelings of living in the "fast world" versus the "slow world", and navigating long-term projects given short timelines.

Timestamps:
00:00:00 Preview and intro
00:00:46 What do benchmarks measure?
00:08:08 Will AI develop like other tech?
00:14:13 Which tasks can AIs do?
00:23:00 Capability profiles of AIs
00:34:04 Timelines and social effects
00:42:01 Alignment by default?
00:50:36 Can vague AGI plans be useful?
00:54:36 The fast world and the slow world
01:08:02 Long-term projects and short timelines
--------
1:15:49
Could Powerful AI Break Our Fragile World? (with Michael Nielsen)
On this episode, Michael Nielsen joins me to discuss how humanity's growing understanding of nature poses dual-use challenges, whether existing institutions and governance frameworks can adapt to handle advanced AI safely, and how we might recognize signs of dangerous AI. We explore the distinction between AI as agents and tools, how power is latent in the world, implications of widespread powerful hardware, and finally touch upon the philosophical perspectives of deep atheism and optimistic cosmism.

Timestamps:
00:00:00 Preview and intro
00:01:05 Understanding is dual-use
00:05:17 Can we handle AI like other tech?
00:12:08 Can institutions adapt to AI?
00:16:50 Recognizing signs of dangerous AI
00:22:45 Agents versus tools
00:25:43 Power is latent in the world
00:35:45 Widespread powerful hardware
00:42:09 Governance mechanisms for AI
00:53:55 Deep atheism and optimistic cosmism
--------
1:01:28
Facing Superintelligence (with Ben Goertzel)
On this episode, Ben Goertzel joins me to discuss what distinguishes the current AI boom from previous ones, important but overlooked AI research, simplicity versus complexity in the first AGI, the feasibility of alignment, benchmarks and economic impact, potential bottlenecks to superintelligence, and what humanity should do moving forward.

Timestamps:
00:00:00 Preview and intro
00:01:59 Thinking about AGI in the 1970s
00:07:28 What's different about this AI boom?
00:16:10 Former taboos about AGI
00:19:53 AI research worth revisiting
00:35:53 Will the first AGI be simple?
00:48:49 Is alignment achievable?
01:02:40 Benchmarks and economic impact
01:15:23 Bottlenecks to superintelligence
01:23:09 What should we do?
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change.
The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions.
FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.