
Shelly Palmer - California's "Prevent AI from Destroying Humanity Act of 2024"

Shelly Palmer has been named LinkedIn’s “Top Voice in Technology,” and writes a popular daily business blog.

Do we need AI regulation? Can we articulate a specific danger? If so, how far should the regulations go? Who will they protect? How will they be enforced? How can we accomplish the protections we need and still foster and encourage innovation? These are all rational questions that, one would hope, lawmakers are asking themselves as they craft our laws.

I'll let you be the judge, but as I read the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which may become law later this month, I think the legislators in California may have binge-watched one too many episodes of "Black Mirror." The legislation requires developers of large AI models to implement safety measures before training, including the capability for a full shutdown, safety and security protocols, and risk assessments. It prohibits using or releasing models that pose an "unreasonable risk" of causing critical harm, and it mandates annual third-party compliance audits starting in 2028. The bill also creates a new Frontier Model Division within the California government for oversight. This all sounds great, but the devil is in the details. (It's not a quick read, but it's worth your time.)
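To make those pre-training obligations easier to scan, here is a minimal sketch in Python that models them as a simple checklist. The class and field names are my own illustrative shorthand for the summary above, not language from the statute.

```python
from dataclasses import dataclass

@dataclass
class SB1047Checklist:
    """Illustrative shorthand for the bill's pre-training obligations as
    summarized above; field names are mine, not statutory language."""
    full_shutdown_capability: bool   # developer can fully shut the model down
    safety_security_protocols: bool  # written safety and security protocols
    risk_assessment_done: bool       # assessment of "critical harm" risk
    audited_since_2028: bool         # annual third-party audits, starting in 2028

    def may_begin_training(self) -> bool:
        # On the article's reading, the first three measures must be in
        # place before training a covered model begins; audits come later.
        return all([
            self.full_shutdown_capability,
            self.safety_security_protocols,
            self.risk_assessment_done,
        ])
```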

SB 1047 targets very large AI models, defined by the compute used in training (more than $100 million worth), and focuses on models that could potentially cause significant harm or threats to public safety. Enforcement measures include civil penalties of up to 30% of model training costs, with the Attorney General authorized to bring civil actions against violators.
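Because both the coverage threshold and the penalty are framed in dollars, a quick back-of-the-envelope calculation shows the stakes. The sketch below is purely illustrative arithmetic using the figures cited in this article, not an interpretation of the statute.

```python
# Illustrative arithmetic only, based on the figures cited in this article:
# models are covered above roughly $100 million in training compute, and
# civil penalties can reach 30% of model training costs.

COVERAGE_THRESHOLD_USD = 100_000_000  # training-compute cost that triggers coverage
MAX_PENALTY_RATE = 0.30               # top civil penalty as a share of training cost

def max_civil_penalty(training_cost_usd: float) -> float:
    """Maximum civil penalty for a covered model, per the article's summary."""
    if training_cost_usd < COVERAGE_THRESHOLD_USD:
        return 0.0  # below the threshold, the bill would not apply at all
    return training_cost_usd * MAX_PENALTY_RATE

# A model trained on $100 million of compute could face up to $30 million in penalties.
print(f"${max_civil_penalty(100_000_000):,.0f}")  # $30,000,000
```

In other words, a developer right at the $100 million threshold is already exposed to penalties on the order of $30 million, which helps explain why the cost-based definition is so contentious.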

As you can imagine, the bill has sparked controversy and debate. Critics argue that a group of science-fiction-loving, well-meaning politicians with a vague understanding of machine learning should not be guessing how the future of AI may unfold. Supporters contend SB 1047 is necessary to mitigate risks from powerful AI systems. I'd like to know what you think, but please read the bill first (link above), then reply to this email. -s

P.S. If you're interested in learning how AI models are currently being created and used, and if you're particularly interested in the way big tech is thinking about its path to AGI, you're welcome to take our free online course. We just added five new lessons that will help you jumpstart your AI journey.


ABOUT SHELLY PALMER

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business, is a regular commentator on CNN, and writes a popular daily business blog. He is also the creator of a popular, free online course.
