Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world.
This presentation is from a private gathering in San Francisco on March 9th, 2023, attended by leading technologists and decision-makers who can influence the future of large-language-model A.I.s. It was given before the launch of GPT-4.
We encourage viewers to consider calling their political representatives to advocate for holding hearings on AI risk and creating adequate guardrails.
For the podcast version, please visit:
——
Citations:
2022 Expert Survey on Progress in AI:
Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding:
High-resolution image reconstruction with latent diffusion models from human brain activity:
Semantic reconstruction of continuous language from non-invasive brain recordings:
Sit Up Straight: Wi-Fi Signals Can Be Used to Detect Your Body Position:
They thought loved ones were calling for help. It was an AI scam:
Theory of Mind Emerges in Artificial Intelligence:
Emergent Abilities of Large Language Models:
Is GPT-3 all you need for low-data discovery in chemistry?
Paper:
Forecasting: AI solving competition-level mathematics with 80%+ accuracy:
ChatGPT reaching 100M users compared with other major tech companies:
Snap:
Percent of large-scale AI results coming from academia:
How Satya Nadella describes the pace at which the company is releasing AI:
The Day After film:
China’s view on chatbots:
Facebook’s LLM leaks online:
Intro music video: “Submarines” by Zia Cora
——
Subscribe to our podcast:
Take our free course on ethical technology: