Human history is full of doomsday scenarios! Why should it be any different this time?
There is already an enormous rush by legislators and regulators in many countries to restrict AI.
AI is not the problem; bad actors are (e.g., states such as Russia, Iran, North Korea, and China, as well as criminals, terrorists, and fanatics).
See also my blog post on the future of robotics.
"Angst at the prospect of intelligent machines boiled over in moves to block or limit the technology.
What happened: Fear of AI-related doomsday scenarios prompted proposals to delay research and soul-searching by prominent researchers. Amid the doomsaying, lawmakers took dramatic regulatory steps.
Driving the story: AI-driven doomsday scenarios have circulated at least since the 1950s, when computer scientist and mathematician Norbert Wiener claimed that “modern thinking machines may lead us to destruction.” Such worries, amplified by prominent members of the AI community, erupted in 2023. ...
Striking a balance: AI has innumerable beneficial applications that we are only just beginning to explore. Excessive worry over hypothetical catastrophic risks threatens to block AI applications that could bring great benefit to large numbers of people. Some moves to limit AI would impinge on open source development, a major engine of innovation, while having the anti-competitive effect of enabling established companies to continue to develop the technology in their own narrow interest. It’s critical to weigh the harm that regulators might do by limiting this technology in the short term against highly unlikely catastrophic scenarios.
Where things stand: AI development is moving too quickly for regulators to keep up. It will require great foresight — and a willingness to do the hard work of identifying real, application-level risks rather than imposing blanket regulations on basic technology — to limit AI’s potential harms without hampering the good that it can do."