The hypocritical call to pause giant AI


The recent open letter calling for a pause in giant AI experiments correctly identifies a number of risks associated with the development of AI, including job losses, misinformation, and loss of control. However, its call to pause some types of AI research for six months smacks of hypocrisy.

First, the call mainly targets OpenAI and the GPT models it develops, giving other companies time to catch up. Given that GPT-4 is currently thought to be the most powerful AI model, the call “on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4” freezes the state of the art at OpenAI’s offering, allowing other companies (many listed as affiliations of the letter’s co-signers) to improve their offerings until they match those of the front-runner, which has caught many large tech players napping. If those who drafted the letter truly wanted to minimize the risks of AI development, they could have set the bar on model power significantly lower. This would bring within the ban’s scope many more companies that perform advanced AI research, including those that need time to catch up and are now conveniently excluded. The letter should also have asked for a longer pause, because, let’s face it, democracies cannot enact the regulations it advocates within the prescribed six months.

Second, the letter’s prompt for us to question whether we should “automate away all the jobs, including the fulfilling ones” is too little too late. Over the past decades automation and information technology have eliminated the jobs of millions, ranging from factory workers displaced by robots to booksellers and travel agents made redundant by e-commerce sites. Insinuating that the knowledge-economy jobs threatened by ChatGPT and its potential cousins are somehow more meaningful than the jobs already lost is insulting to those who suffered from past technology disruptions. It is also a bit rich coming from highly privileged knowledge workers who would once have disparaged similar objections as Luddism.

It is true that the progress of AI in recent months has been breathtaking, with systems constantly exhibiting ever more sophisticated human-like skills. However, AI’s (mostly unproven) potential risks are no more serious than those of fossil fuel power sources, which have set us on a proven path to destroying our entire planet through global warming. From Prometheus to the internal combustion engine and atomic energy, humanity has always played with fire, often benefited immensely, and dealt with the consequences later. Let’s not pretend that this time should be different only because those threatened are no longer conveniently far from us.



