The Center for AI Safety (CAIS) is a technology-regulation advocacy organization that conducts research and produces policy recommendations for state-level officials seeking to regulate artificial intelligence (AI) and mitigate its potential risks.
Activities
In February 2024, California Senator Scott Wiener (D-San Francisco) introduced the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047). The bill proposed creating a government agency called the Frontier Model Division and included mandates that covered AI models include “full shutdown” kill switches across all copies and models, undergo annual compliance check-ins, and report all “safety incidents” (a term that had yet to be defined as of June 2024). 1
In May 2024, the California Senate passed the bill. 2
It was also announced in May 2024 that the bill was “sponsored” by the Center for AI Safety Action Fund, Economic Security Action California (a subsidiary of Economic Security Project Action), and Encode Justice. 3
“California is showing the nation how to balance AI innovation with safety by establishing clear, predictable, common sense legal standards for AI companies. We thank the Senators who supported this pioneering bill,” wrote Nathan Calvin, senior policy counsel at the Center for AI Safety Action Fund, in a press release. “As the home to many of the largest and most innovative AI companies, California should be leading the way with industry-leading policies for safe, responsible AI while also making this incredible technology accessible to academic researchers and startups to encourage innovation and competition.” 4
Financials
The Center for AI Safety describes itself as a 501(c)(3) organization, but its registration status with the Internal Revenue Service (IRS) is unclear. Its sister 501(c)(4) organization, the Center for AI Safety Action Fund, is registered with the IRS but had yet to file a tax return as of June 2024. The Action Fund’s determination letter, issued in September 2023, lists George B. Bell as its contact person and gives an address on Montgomery Street in San Francisco. 5
Leadership
As of June 2024, the Center for AI Safety was led by executive and research director Dan Hendrycks. Oliver Zhang was listed as a co-founder of the group. 6
References
1. “CA SB1047 | 2023-2024 | Regular Session | Amended.” LegiScan.com. Accessed June 10, 2024. https://legiscan.com/CA/text/SB1047/2023.
2. Senator Scott Wiener (@Scott_Wiener). Post. X, May 21, 2024. Accessed June 10, 2024. https://x.com/Scott_Wiener/status/1793102136504615297.
3. “In A Bipartisan Vote, Senate Passes Senator Wiener’s Landmark AI Safety And Innovation Bill.” Scott Wiener – Senate CA, May 21, 2024. Accessed June 10, 2024. https://sd11.senate.ca.gov/news/bipartisan-vote-senate-passes-senator-wieners-landmark-ai-safety-and-innovation-bill.
4. “In A Bipartisan Vote, Senate Passes Senator Wiener’s Landmark AI Safety And Innovation Bill.” Scott Wiener – Senate CA, May 21, 2024. Accessed June 10, 2024. https://sd11.senate.ca.gov/news/bipartisan-vote-senate-passes-senator-wieners-landmark-ai-safety-and-innovation-bill.
5. “Final Determination Letter – Center for AI Safety Action Fund.” Internal Revenue Service, September 29, 2023. Accessed June 10, 2024. https://apps.irs.gov/pub/epostcard/dl/FinalLetter_93-2442608_CENTERFORAISAFETYACTIONFUNDINC_09062023_00.pdf.
6. “About Us.” Center for AI Safety. Accessed June 10, 2024. https://www.safe.ai/about.