The Future of Life Institute (FLI) works to maximize the benefits and minimize the risks of artificial intelligence (AI), biotechnology, and nuclear weapons. It focuses on long-term threats such as human extinction and on ways to harness scientific breakthroughs to benefit humanity. Its main activities are advocacy and sponsored research. 1 2
Advocacy
Future of Life Institute lobbies to increase federal spending on artificial intelligence safety research and to “strengthen” the AI Risk Management Framework of the National Institute of Standards and Technology (NIST). 3 It also works to strengthen the European Union (EU) AI Act and encourages EU member states to support a treaty on autonomous weapons. At the United Nations (UN), it promotes the creation of a legally binding instrument on autonomous weapons. 4
FLI has published seven open letters addressed to world leaders calling attention to threats to humanity’s future and recommending countermeasures. 5
Its most recent letter targets the combined risks of climate change, pandemics, nuclear weapons, and “ungoverned” AI. It argues that “governments can get to work now to agree how to finance the transition to a safe and healthy future powered by clean energy, relaunch arms control talks to reduce the risk of nuclear war, save millions of lives by concluding an equitable pandemic treaty, and start to build the global governance needed to make AI a force for good, not a runaway risk.” The letter, drafted jointly with The Elders, was signed by former heads of state, Nobel laureates, UN officials, celebrities, and executives of global nonprofits, including the president of Open Society Foundations (Open Society Institute). 6
Research and Contests
Future of Life Institute issues requests for proposals (RFPs) and runs contests on topics affecting humanity’s future. Recent RFPs call for “designs for global institutions governing AI” and “proposals evaluating the impact of AI on Poverty, Health, Energy and Climate SDGs.” 7 Previous grant awards ranged from $22,000 to $1,401,000. It also offers postdoctoral fellowships and funds film productions at partner organizations. 8
Partnerships
Future of Life Institute’s key collaborators are the Future Society, the Organization for Economic Co-operation and Development (OECD), the Center for Human-Compatible Artificial Intelligence, the Center for a New American Security, IEEE, and the Association for the Advancement of Artificial Intelligence. 4
It participates in the NIST AI Safety Institute Consortium, the Partnership on AI, and the Forum for Cooperation on Artificial Intelligence, a network hosted by the Brookings Institution and the Centre for European Policy Studies (CEPS). 2
Funding
Future of Life Institute has received funding from the Musk Foundation, Silicon Valley Community Foundation, and the effective altruism group Founders Pledge. 9
Vitalik Buterin, co-founder of the cryptocurrency Ethereum, donated $665.8 million in cryptocurrency to FLI in 2021. 10 11
FLI claims its 2023 income was $624,714, of which $600,000 came from a single donor; the resulting operating deficit was covered by Buterin’s donation. Most of that donation was converted into an endowment, and the remainder supports a multi-year grant program focused on minimizing “existential risks.” 2
Controversies
Émile P. Torres, a philosopher of science, 12 accuses Future of Life Institute of embracing TESCREALism, a movement that seeks to re-engineer the human species through artificial intelligence so that humanity can become immortal, colonize the universe, and build a post-human civilization among the stars. Torres alleges that many TESCREALists support pouring billions into AI with no regulation, while others, such as FLI, embrace the goal but are alarmed by what can go wrong along the way. 13
Other scientists accuse FLI of prioritizing distant future threats over immediate concerns such as job losses, and of serving technology conglomerates whose interests diverge from those of the public and whose power over governmental decisions is, in their view, the real threat. 14 15 They assert that FLI’s proposed regulatory limits on open-source AI models would lock in the advantages of tech giants by making it difficult for start-ups to compete. 10
Leadership
Max Tegmark is a professor of physics at MIT, scientific director of the Foundational Questions Institute (FQXi), and president of Future of Life Institute. He has authored 200 technical papers and two books on precision cosmology, quantum information, the physics of intelligence, and links between physics and machine learning, and has been featured in dozens of science documentaries. He received a B.Sc. in physics from the Royal Institute of Technology and an M.A. and Ph.D. in physics from the University of California, Berkeley. 16 17
References
1. Future of Life Institute. “Our Mission.” Accessed May 21, 2024. https://futureoflife.org/our-mission/
2. European Union Transparency Register. “Organisation Detail.” Accessed May 21, 2024. https://transparency-register.europa.eu/searchregister-or-update/organisation-detail_en?id=787064543128-10
3. NIST. “AI Risk Management Framework.” July 12, 2021. Accessed May 21, 2024. https://www.nist.gov/itl/ai-risk-management-framework
4. Future of Life Institute. “Policy Work.” Accessed May 21, 2024. https://futureoflife.org/our-work/policy-work/
5. Future of Life Institute. “FLI Open Letters.” Accessed May 22, 2024. https://futureoflife.org/fli-open-letters/
6. Future of Life Institute. “Open Letter Calling on World Leaders to Show Long-View Leadership on Existential Threats.” Accessed May 22, 2024. https://futureoflife.org/open-letter/long-view-leadership-on-existential-threats/
7. Jones, Will. “Realising Aspirational Futures – New FLI Grants Opportunities.” Future of Life Institute (blog), February 14, 2024. Accessed May 22, 2024. https://futureoflife.org/environment/realising-aspirational-futures-new-fli-grants-opportunities/
8. Future of Life Institute. “Grantmaking Work.” Accessed May 22, 2024. https://futureoflife.org/our-work/grantmaking-work/
9. New York Post. “Musk, Experts Urge Pause on AI Systems, Citing ‘Risks to Society.’” March 29, 2023. Accessed May 22, 2024. https://nypost.com/2023/03/29/musk-experts-urge-pause-on-ai-systems-citing-risks-to-society/
10. Bordelon, Brendan. “The Little-Known AI Group That Got $660 Million.” Politico, March 26, 2024. Accessed May 22, 2024. https://www.politico.com/news/2024/03/25/a-665m-crypto-war-chest-roils-ai-safety-fight-00148621
11. Candid. “Future of Life Institute Received $665 Million in Crypto.” Philanthropy News Digest (PND). Accessed May 22, 2024. https://philanthropynewsdigest.org/news/future-of-life-institute-received-665-million-in-crypto
12. Torres, Émile P. “Émile P. Torres.” Accessed May 23, 2024. https://www.xriskology.com
13. Torres, Émile P. “TESCREALism: The Acronym Behind Our Wildest AI Dreams and Nightmares.” Truthdig, June 15, 2023. Accessed May 23, 2024. https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/
14. “The Call for an AI Halt Disguises the Real Problems with Tech.” TechCentral.ie (blog), March 31, 2023. Accessed May 23, 2024. https://www.techcentral.ie/the-call-for-an-ai-halt-disguises-the-real-problems-with-tech/
15. Coulter, Martin. “AI Experts Disown Musk-Backed Campaign Citing Their Research.” Reuters, April 5, 2023, sec. Technology. Accessed May 23, 2024. https://www.reuters.com/technology/ai-experts-disown-musk-backed-campaign-citing-their-research-2023-03-31/
16. MIT Physics. “Max Tegmark.” Accessed May 23, 2024. https://physics.mit.edu/faculty/max-tegmark/
17. Edge.org. “Max Tegmark.” Accessed May 23, 2024. https://www.edge.org/memberbio/max_tegmark