Senior executives at OpenAI, the creator of ChatGPT, last week published an open letter calling for sharper regulation of so-called superintelligent AI.
That the creators of the now-famous ChatGPT are voicing these concerns themselves marks a significant shift in the ongoing debate over the ethics of artificial intelligence.
Calls for an international regulatory body to oversee the industry draw parallels with the International Atomic Energy Agency, established in 1957 to promote the safe, secure and peaceful use of nuclear technology.
The need for a standardized approach to AI innovation is fast becoming urgent. AI is already causing havoc everywhere from plagiarism in academia to the Sony World Photography Awards in April, where an AI-generated image took a prize, with people unable to distinguish machine-made essays or images from those produced by a genuine human.
It is not out of the realm of possibility that within a decade AI will exceed expert skill levels in many areas. Such automation may adversely affect human experiences and diminish the value of hard-won skills.
Sam Altman, chief executive of OpenAI, has said that the sheer productive potential of superintelligence could bring a dramatically more prosperous future; to reach it, however, society will have to contend with serious risks and potentially huge shifts in how we value human versus machine performance.
The public statement from OpenAI, released last week, called for "some degree of coordination" among tech companies that are developing AI applications at rapid speed. How this should work in practice is not yet clear. Safety and privacy should always come first, but managing how these services enter society is also paramount if their arrival is not to be chaotically disruptive. Humans, unlike AI, need time to adjust to significant job market changes, and any AI tools that can significantly alter people's lives and economies must be introduced with forethought.
Anxieties over the consequences of unregulated AI innovation have spread across the industry, with the Future of Life Institute, an NGO, calling for a six-month pause in the development of the most powerful AI systems. The reasoning is that this would allow enough time to establish what is being developed, where and how, so that the industry can become proactive rather than reactive in the face of new developments.
The institute's open letter was countersigned by many anxious tech figures, not least Elon Musk, owner of SpaceX, Twitter and Tesla. The general mood in the tech sphere is an uneasy one, with Musk keen to distance himself from the current developmental wild west by stating that AI is not necessary for any SpaceX ventures.
The sector's shift from excitement to unease over unregulated AI is a significant one. There is broad agreement that stricter regulation is needed; how to achieve it remains contested. The proposed six-month moratorium on AI development has been criticized by those who argue it would play into the hands of rogue actors developing sinister applications away from the mainstream spotlight. The conversation about these implications is gaining momentum, and it must accelerate if we are to benefit from such a powerful innovation while keeping ourselves safe.
Barry He is a London-based columnist for China Daily.