The death of the AI safety movement?


That is the topic of my latest Bloomberg column, and here is one part:

The safety movement probably peaked in March 2023 with a petition for a six-month pause in AI development, signed by many luminaries, including specialists in the AI field. As I argued at the time, it was a bad idea, and got nowhere.

Fast forward to the present. Senate Majority Leader Chuck Schumer and his working group on AI have issued a guidance document for federal policy. The plans involve a lot of federal support for the research and development of AI, and a consistent recognition of the national-security importance of the US maintaining its lead in AI. Lawmakers seem to understand that they would rather face the risks of US-based AI systems than have to contend with Chinese developments without a US counterweight. The early history of Covid, when the Chinese government behaved recklessly and nontransparently, has driven this realization home.

No less important is the behavior of the major tech companies themselves. OpenAI, Anthropic, Google and Meta all released major service upgrades this spring. Their new services are smarter, faster, more flexible and more capable. Competition has heated up, and that will spur further innovation.

Do note this:

The biggest current obstacles to AI development are the hundreds of pending AI regulatory bills in scores of US states. Many of those bills would, intentionally or not, significantly restrict AI development, such as one in California that would require pre-approval for advanced systems. In the past, this kind of state-level rush has typically led to federal consolidation, so that overall regulation is coordinated and not too onerous. Whether a good final bill results from that process is uncertain, but Schumer’s project suggests that the federal government is more interested in accelerating AI than hindering it.

Safety work of course will continue under many guises, both private and governmental, but “AI safetyism,” as an intellectual movement, has peaked.

This piece was drafted before some of the recent controversies at OpenAI, and the argument does not rely on any particular interpretation of those events, one way or the other.

Arnaud Schenk has some useful clarifications, especially for those who cannot read the full column.

Here is a version of the column, adapted for Don McLean’s song “American Pie.”

