

Elon Musk And Other Tech Leaders Call For Slowdown On AI Development

In an open letter signed by Elon Musk, Apple co-founder Steve Wozniak, Israeli futurist Yuval Noah Harari and others, a six-month pause is proposed for the rapidly expanding world of AI tools. Dominated by ChatGPT and other advancing platforms, that unprecedented growth is raising concerns over safety standards and societal implications. The letter argues that AI is advancing in ways that no one - not even the technology’s creators - can predict or control. “Powerful AI systems,” the letter says, “should be developed only once we are confident that their effects will be positive and their risks will be manageable.” Ominously, these leaders warn of potential changes to nothing less than the history of life on Earth. “AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the letter begins, “as shown by extensive research and acknowledged by top AI labs.”

Meanwhile, companies like Microsoft have embraced AI technology for the Bing search engine and other offerings. In a recent report, Microsoft says GPT-4, the latest available version of OpenAI’s tool, can solve “novel and difficult tasks” with “human level performance” in advanced fields such as coding, medicine, law and even psychology. Google released Bard, a rival AI-based chatbot. Tech giants Adobe, Salesforce and Zoom are at the forefront of incorporating advanced AI tools. Are these companies, and others, exposing themselves (and by extension, their customers) to risk? And if so, what are the dangers associated with ChatGPT and its cousins?

Max Tegmark, a physics professor at the Massachusetts Institute of Technology (MIT), was one of the organizers of the letter. He tells the Wall Street Journal that advances in AI have surpassed what many experts believed possible even a few years ago. “It’s unfortunate to frame this as an arms race,” Tegmark tells the Journal. “It is more of a suicide race.” That’s a bold claim, but Tegmark explains, “It just means that humanity as a whole could lose control of its own destiny.” Wasn’t that the log line for the last Terminator movie?

Indeed, that control is already beginning to slip, according to an extensive jobs study from the University of Pennsylvania examining the impact of large language models (LLMs) such as ChatGPT on various careers. According to this ongoing study, updated just yesterday, most jobs will be changed by GPTs in some way, with “80% of workers in occupations where at least one job task can be performed quickly by generative AI.” Most at risk, according to the Wall Street Journal, are accountants: the research shows that at least half of all accounting tasks could be completed much faster with the emerging technology.

Matt Beane, an assistant professor at the University of California, Santa Barbara, studies the impact of new technology. Beane says that human beings reject change that compromises their interests. Sounds logical, but notice that his argument implies an element of choice. As OpenAI and others race toward a brave new world, where machines create, control and modify the conversation, the real impacts are beyond our current imagination. Indeed, Forbes has covered concerns around cybersecurity, plagiarism and misinformation campaigns built on chatbots. These potentially sinister applications might be launched and leveraged in ways that could harm many, with implications that can’t yet be fully explained.

The Technology Is Becoming the User

Indeed, the challenge with AI and ChatGPT is not a new one: how can a new technology be made safe, and maintained in a way that minimizes sinister or harmful uses? The wrinkle with machine learning is that the machine is no longer just a tool. The technology is becoming the user. What happens when the technology advances beyond prompts? The chatbot already makes choices in order to respond to prompts. How long will it be before these new tools start making decisions as well?

The letter from Musk and others offers stern warnings regarding an unbounded and unpredictable technology. Created with the intention of expanding the human experience, the technology (according to the leaders behind the letter) is now growing beyond our control. AI tech like ChatGPT is delivering unexpected consequences, and potentially dangerous implications. As the conversation engine becomes more intelligent, and more expansive, uncertainty expands as well. The opportunity to create and control the conversation is shifting into the hands of something that doesn’t have hands. Governance and restraint are what these technology leaders are calling for in their letter: “AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

Makes sense, right? Accurate and safe, we all want that. But how do you code for loyalty, and trustworthiness? Maybe somebody should ask ChatGPT that question.
