Execs Again Warn Unregulated AI Could Lead To Human Extinction

Industry leaders are quite literally warning that the impending AI revolution should be taken as seriously as the threat of nuclear war.

Regulation seems unlikely because most Congress members don't understand technology and avoid talking about a subject that would reveal their own ignorance. That's not a dig at them -- it's hard to come by the kind of deep knowledge that's mostly confined to the industry, and Congress members don't have the staff budget to hire those experts at competitive salaries. Via CNN:

On Tuesday, hundreds of top AI scientists, researchers, and others — including OpenAI chief executive Sam Altman and Google DeepMind chief executive Demis Hassabis — again voiced deep concern for the future of humanity, signing a one-sentence open letter to the public that aimed to put the risks the rapidly advancing technology carries with it in unmistakable terms.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said the letter, signed by many of the industry’s most respected figures.

It doesn’t get more straightforward and urgent than that. These industry leaders are quite literally warning that the impending AI revolution should be taken as seriously as the threat of nuclear war. They are pleading for policymakers to erect some guardrails and establish baseline regulations to defang the primitive technology before it is too late.

You'll notice that tech organizations don't put a lot of thought into whether they should develop dangerous technology -- only whether they can. And then, once it's here and they see the implications, they want to lock the barn door after the horses are already out.
