Thursday, February 13, 2025

"AI safety" versus "responsible AI" by Andrew Ng. An exercise in futile semantics or virtue signalling!

Vice President J.D. Vance is right to dismiss safety in favor of opportunity at the Artificial Intelligence Action Summit in Paris! Keep Big Government out as much as possible!

An excessive recourse to the precautionary principle is the enemy of opportunity and progress! Andrew Ng even admits as much in his opinion piece. Further, he plainly contradicts himself when he lists unsafe medical diagnoses under harmful applications.

Andrew Ng gets even messier when, in the same issue of The Batch weekly newsletter, he defends the use of AI for warfare: "AI is transforming military strategy, and refusing to engage with it doesn’t make the risks go away."
Exactly: the enemies of open societies (rogue states like Russia, Iran, North Korea, and China) do not hesitate to exploit AI for war purposes and to gain an advantage.

For example, what counts as a harmful application, and who determines harmfulness? Most of us can probably reasonably agree that non-consensual deepfake porn is harmful or even criminal, but what about everything else?

To quote Benjamin Franklin: "They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety."

Remember, we might still be driving horse-drawn carriages and might never have taken to the skies had such overblown concerns troubled our ancestors!

"At the Artificial Intelligence Action Summit in Paris this week, U.S. Vice President J.D. Vance said, “I’m not here to talk about AI safety. ... I’m here to talk about AI opportunity.” I’m thrilled to see the U.S. government focus on opportunities in AI. Further, while it is important to use AI responsibly and try to stamp out harmful applications, I feel “AI safety” is not the right terminology for addressing this important problem. Language shapes thought, so using the right words is important. I’d rather talk about “responsible AI” than “AI safety.” Let me explain.

First, there are clearly harmful applications of AI, such as non-consensual deepfake porn ... the use of AI in misinformation, potentially unsafe medical diagnoses, addictive applications, and so on. We definitely want to stamp these out! There are many ways to apply AI in harmful or irresponsible ways, and we should discourage and prevent such uses. ...

I believe the 2023 Bletchley AI Safety Summit slowed down European AI development — without making anyone safer — by wasting time considering science-fiction AI fears rather than focusing on opportunities. Last month, at Davos, business and policy leaders also had strong concerns about whether Europe can dig itself out of the current regulatory morass and focus on building with AI. ..."

