Is OpenAI in Trouble?
An Anti-GPT-5 movement begins.
Hey Friends,
ChatGPT has changed our lives, but not everyone is convinced it will turn out well.
A.I. and the Future of Jobs
Goldman Sachs thinks A.I. could raise GDP by 7%. Meanwhile, OpenAI says 80% of workers could see their jobs impacted by A.I. What they don't tell you is exactly how many jobs will be lost and in which sectors, though they have a fairly good idea.
With GPT-5 coming, what will the risks to humanity be? Goldman Sachs released a sobering and alarming report this week on A.I.'s ascendance. The investment bank estimates 300 million jobs could be lost or diminished by this fast-growing technology.
Goldman contends automation creates innovation, which leads to new types of jobs. But what might we be missing here? OpenAI researchers estimate that ChatGPT and future software tools built with the program could impact at least 50% of the tasks necessary for around 19% of the jobs in the US. Does this account only for AGI and GPT-5-like systems made by OpenAI, or for all of A.I. as a whole?
This week there has been some movement against the speed of A.I.'s monopolized development, as well as against its potentially severe risks. Some argue that even pausing A.I. development after GPT-4 is not enough, since the risks are just too great.
I previously reported on the open letter asking for a six-month pause on the training of A.I. models. I wonder whether Sam Altman should be more scared of regulators than of what a supposed AGI in GPT-5 will do to his company.
A new complaint to the Federal Trade Commission urges the agency to investigate OpenAI and suspend its commercial deployment of large language models such as ChatGPT. However, after so much hype, some might argue ChatGPT is as beloved as TikTok, which is fighting the threat of a ban.
The open letter to pause A.I. training (namely of GPT-5) keeps gathering key signatures, and this week has been tumultuous regarding the risks these breakthrough large language models pose to society. Midjourney itself had to stop free trials after alleged abuse by bad actors.
It is a special moment in time for A.I., you might say.
The complaint against OpenAI, filed by the nonprofit research group Center for AI and Digital Policy, accuses the company of violating a part of the FTC Act that prohibits unfair and deceptive business practices, as well as the agency's guidance for AI products, CNBC reported.
According to some reports, GPT-5 may even finish training in 2023, though I could not independently verify those claims. The open letter tries to frame trust and safety in A.I. as more important than its immediate progress, but critics point out that some corporations are worried about how far behind OpenAI and Microsoft they are.
To make matters worse, it turns out Google's Bard was in part trained on ChatGPT. Top Google AI researcher Jacob Devlin resigned earlier this year after he warned Alphabet CEO Sundar Pichai and other top executives that the company's ChatGPT competitor, Bard, was being trained on data from OpenAI's chatbot, according to a recent report from The Information. Google, for its part, denies this.
It's difficult to separate the hype, the hypocrisy, and the real risk of these suddenly improving LLMs. For the most gullible A.I. optimists, OpenAI is reaching the holy grail of sentience, while for many of us it's just bulls*it propaganda. While entertaining, it's nearly as if misinformation and advertising are being conflated in perhaps the greatest bubble (that is, Generative A.I.) of our times.
The Obstacles to Trust, Safety and Regulation
America doesn't appear to have the resolve to ban TikTok, nor, I imagine, will it regulate A.I. that could pose future dangers with any real sense of due diligence. If OpenAI and Microsoft actually paused their work for six months, it would give many other competitors time to catch up, including Chinese companies and the CCP's own use of the technology.
The A.I. arms race thus pushes us ever closer to some hypothetical brink of losing control over Generative A.I., or to some speculative exponential technological singularity where AGI becomes possible, perhaps even in the next decade.
LessWrong would usually be a good source of information about this. But between personalities like Elon Musk and Eliezer Yudkowsky, the scripts of their arguments have already been set for years. For the record, Ray Kurzweil predicted 15 years ago that the singularity, the time when the abilities of a computer overtake the abilities of the human brain, will occur around 2045. We are a long way from 2045, so why are people saying GPT-5 is the model that will bring us AGI? 2023 is still 22 years from the supposed Singularity, when artificial superintelligence (ASI) would become possible.
Google's Brain AI group is working with employees at DeepMind on an effort, known internally as Gemini, to take on OpenAI, as they clearly suspect they are many months behind. Perhaps even more than the six months for which the open letter proposes we "pause training" of models more sophisticated than GPT-4.
Beyond Data has sent this Newsletter a lot of traffic, so please check them out as well. We are peers supporting each other here.
Watch the interview with Lex Fridman; it's kind of worth it.
The Problem of Potential Superintelligence (ASI)
Yudkowsky argues (in his op-ed of March 29th, 2023) that a six-month pause is not enough, and that training of more powerful A.I. systems should be shut down entirely.
Meanwhile, will OpenAI face lawsuits over its morbid centralization of power and its wild experimentation, seemingly endorsed by Microsoft? The complaint, made public by the Center for AI and Digital Policy on Thursday, March 30th, 2023, specifically cites Section 5 of the FTC Act.
The way OpenAI is operating signifies, to me, Microsoft's anti-competitive behavior of recent years and an incredible greed to centralize A.I. power among corporations, namely Microsoft, Google, Meta, Nvidia, Amazon, and a few others. The tech-heavy NASDAQ has gone up 15% so far in 2023, in a surprise bull market, probably mostly on the back of Generative A.I. hype.
There is so much misinformation about OpenAI, and claims that ChatGPT has passed the Turing Test are much exaggerated. At every turn one must now deal with AGI hype on Twitter and LinkedIn, conveniently owned by none other than Elon Musk and Microsoft, respectively.
Do you think GPT-5 by OpenAI could potentially have bad outcomes?
Social Media Information Continues to Degrade in the Wild
Unfortunately, sentiment amplification around A.I. on advertising-based social media is so widespread that it makes the information for the most part unreliable. Corporate greed in monopoly capitalism appears to mean that training GPT-5-like models is so expensive only a few companies such as Google, Microsoft, or Meta could afford it, all talent and R&D being equal. Having BigTech hold a hegemony on the technology for their own gain is also highly problematic.
For example, Bing AI recently decided it was a good idea to insert ads. The result is likely not to be great for the internet. How Microsoft won't face antitrust lawsuits I have no idea; regulators have been asleep at the wheel on everything from bank regulation to BigTech for quite a few years.
The bias in foundation models like the GPT (Generative Pre-trained Transformer) family OpenAI is building is a real concern, and the claim that it is somehow also a General Purpose Technology (GPT) that's going to alter our existence inside and out is somewhat baffling. Startups like OpenAI need to move quickly to gain a first-mover advantage, but with Transformer models this could be dangerous in any number of ways.
CAIDP calls GPT-4 "biased, deceptive, and a risk to privacy and public safety." The group says the large language model fails to meet the agency's standards for AI to be "transparent, explainable, fair, and empirically sound while fostering accountability."
Meanwhile in Italy, as reported by The Verge, regulators have ordered a ChatGPT ban over alleged violations of data privacy laws. As for Google, is Google or Jacob Devlin right about how Bard was trained? There are so many plot twists and conspiracies now, and above all a state of misinformation about what A.I. is, how it works, and what it will become in the near future.
The CAIDP complaint points out potential threats from OpenAI's GPT-4 generative text model, which was announced in mid-March. According to The Verge, these include ways that GPT-4 could produce malicious code and highly tailored propaganda, as well as ways that biased training data could result in baked-in stereotypes or unfair race and gender preferences in things like hiring. So if this is just GPT-4, what happens when GPT-5 comes out? Clearly something must be done, but by whom, and how?
Microsoft recently fired an important A.I. ethics team, part of its supposed A.I. for Good and Responsible A.I. framework and policies. We may be close to a point of no return regarding the existential threat of A.I., just as we have been with climate change, yet for some reason most people are cheering the A.I. on, unaware of its potential risks and dangers. That's just how the corporations would prefer it, to be honest.
Why A.I. is Likely to Go Rogue in the Decades Ahead
I'm not sure petitions, complaints to the FTC, or worries are going to stop A.I. from becoming whatever it can be, or spare civilization the consequences, at least not in this state of monopoly capitalism and an A.I. arms race between the U.S. and China.
In fact, given the state of the world, I'd have more faith in China regulating AGI and even ASI, considering its demonstrated capability to actually regulate its BigTech corporations and enforce real rules around the internet and algorithms.
There is a stark and growing lack of leadership and oversight regarding the development of large language models and Generative A.I. technology. We cannot expect OpenAI and Microsoft to do their due diligence given what Bing AI, one of the first products trained on GPT-4, has delivered and shown, even before the model was publicly available commercially to others.
Perhaps the petition and the complaint do, though, give us much-needed pause to wonder at the risks of LLMs. The group wants the FTC to require OpenAI to establish a way to independently assess GPT products before they're deployed in the future. It also wants the FTC to create a public incident reporting system for GPT-4 similar to its systems for reporting consumer fraud, and it wants the agency to take on a rulemaking initiative to create standards for generative AI products.
OpenAI has said all the right things, but is apparently hurrying ahead, training GPT-5 as fast as possible. It has even come to the point of Alphabet's DeepMind division (which was refused more independence) helping the Google Brain team try to beat OpenAI with a new initiative called Gemini. The A.I. arms race between Microsoft and Google is nefarious enough, and, like the Twins of Gemini, could lead us down a rabbit hole that could one day have severe consequences.
Image credit: Jakub Porzycki | Nurphoto | Getty Images
All is fair in the war for A.I. supremacy in 2023. The last week (of March 2023) has shown a narrative of hype gone a bit sideways, a spirit of A.I. optimism betrayed. Voice cloning with Generative A.I. tools is now one of the most common phishing fraud schemes. ChatGPT has likely ushered in a less safe world: one in which Microsoft wants to steal advertising revenue from Google, and one in which Satya Nadella probably has no clue what the future of A.I. might actually become, which could cloud his legacy (no pun intended).
If GPT-4 is this "good," how "bad" could GPT-5 be for the world? Google stooped so low as to train Bard using data from OpenAI's ChatGPT, scraped from a website called ShareGPT. But what is OpenAI doing in secret, and what might its dangers be? Sam Altman must know. But the entire chapter is so far from open that few people (if any) even know how some of this A.I. does what it does.
Want to Share Machine Learning Times? I'm going to be putting in more incentives to do just that.