What is the Price of "Sparks" of AGI?
A.I. Ethics teams are getting the boot.
Microsoft's Responsible A.I. Behavior
Best time to Start an A.I. Newsletter
If you want to write a Newsletter in data science, technology or A.I., I think you should just do it! In 2023, Substack is the best discovery, growth and retention platform to gain new audiences and build valuable niche communities.
I’ll be thrilled if you start yet another A.I. Newsletter; honestly, the more points of view the better. We’re all learning together.
A pricey bet in Generative A.I. with the potential for A.I. ethics disaster.
I've started a new Subreddit for A.I. links and tools, come and share articles with me.
I now have a dedicated A.I. collective for jobs with Pallet, come and say hi. If you are a job seeker go here, and if you are an A.I. company with jobs go here.
Do you believe due diligence is being taken in A.I. ethics by Microsoft and OpenAI?
So let's get into it....
I’ve got to admit, I’m starting to get pissed.
I'm noticing a pattern of BigTech firing A.I. ethics teams when things could get messy. And things really are messy in 2023. But let's not have our own A.I. researchers telling us that, okay?
Where will Surveillance Capitalism end up on the fence of regulating its own greed in arguably the biggest A.I. bubble in history? I'm not liking what I'm seeing from our supposed leaders. Now Microsoft Research (like Google previously) are claiming inklings ("sparks") of AGI. Will "AI for Good" with AGI please stand up!
In January 2023, Microsoft said it was letting go of 10,000 employees as the software maker braced for slower revenue growth. What we didn’t know before its huge gamble on Generative A.I., likely up to around $15 billion now, is that it was cutting an entire team in its Responsible A.I. department.
Microsoft laid off an entire team dedicated to guiding A.I. innovation toward ethical, responsible and sustainable outcomes. The cutting of the ethics and society team was reported by Platformer, a Substack-based publication, and I consider this an important story it broke for us.
Microsoft just laid off one of its responsible AI teams
Google has been far more cautious at the onset of the Generative A.I. moment, and I think for good reason. I believe Microsoft, with its investment in OpenAI, has cut some pretty sharp corners.
Microsoft claims to hold very high standards of Responsible A.I. Publicly, Microsoft says it “believes that when you create powerful technologies, you also must ensure that the technology is developed and used responsibly.”
TechCrunch sums it up in a way I also find concerning:
The move calls into question Microsoft’s commitment to ensuring its product design and AI principles are closely intertwined at a time when the company is making its controversial AI tools available to the mainstream.
Microsoft’s investment in Generative A.I. is almost too big to fail, so why would you cut part of your ethical A.I. team at the same time? The costs are higher than is generally made public.
Platformer reported that Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company’s AI initiatives. The company says its overall investment in responsibility work is increasing despite the recent layoffs.
When Microsoft Corp. invested $1 billion in OpenAI in 2019, it agreed to build a massive, cutting-edge supercomputer for the artificial intelligence research startup. Later it invested $10 billion more, while revealing that it had actually already invested up to $3 billion in OpenAI. So the costs of getting there first are adding up, but what about the Responsible A.I. consequences?
The Responsible A.I. (RAI) team had recently been working to identify risks posed by Microsoft’s integration of OpenAI’s technology across its suite of products. Now, with the ChatGPT API and GPT-4 coming, between OpenAI, which is just a startup, and Microsoft’s race with Google, it feels as if the hype is overtaking the ethical considerations. How can they possibly keep up if the team is even smaller than it was before? Something just doesn’t quite add up here.
Just as occurred at Google, when the RAI team says things the executives don’t like, its members usually get fired. Employees told Platformer that the ethics and society team at Microsoft played a critical role in ensuring that the company’s responsible A.I. principles are actually reflected in the design of the products that ship.
As Generative A.I. is taken more private with OpenAI, what happens to real transparency here? Silicon Valley has a long history of firing ethical A.I. researchers who get a bit too critical. It seems Microsoft is now dangerously close to the point where its brand reputation could be hurt if anything goes wrong. Microsoft’s Prometheus Model is supposed to keep things safe; however, Bing AI had a lot of issues at launch. I want a copilot for the web I can trust, where I know the company is investing in RAI responsibly, to a degree that reflects its investment in the hype.
Now, with this news, I’m not sure Microsoft falls into the good-actor category as the so-called category leader in introducing Generative A.I. into its software suite and products.
Platformer revealed that:
This is not how you do Responsible A.I.
So you’re telling me that the fate of A.I. ethics at Microsoft is the responsibility of just 7 people? This is how much the company actually prioritizes RAI, which is very different from the sort of PR it puts out. This is extremely underwhelming for those of us who care about the future of A.I. ethics. I don’t believe this reflects a company that’s putting its principles into practice. And what does this say about trust & safety in Generative A.I. in BigTech as a whole?
As of 2022, Microsoft employed approximately 221,000 people in full-time positions worldwide. Around 60 percent of Microsoft's employees are located in the company's home country, the United States. And just 7 now work in Responsible A.I.? That’s not exactly doing your due diligence on the potential harmful impact of your Generative A.I. splurge.
Usually what happens in situations like these is that executives veto what the A.I. ethics researchers actually recommend. Platformer continues:
Simply by investing in OpenAI to this degree, Microsoft made many trust & safety errors and sacrifices, creating an A.I. arms race to use these technologies in new ways. Just as ChatGPT was hyped, so too does this magnify Microsoft’s lapse of judgement over sound investment in Responsible A.I. I warn you all, this will have profound consequences for the entire ecosystem and maybe the future.
As generative A.I. tools such as ChatGPT gain interest from businesses and consumers, more pressure will be put on cloud services providers like Microsoft, Amazon.com Inc. and Alphabet Inc.’s Google to ensure their data centers can provide the enormous computing power needed. That in turn leads to more cutting of Responsible A.I. corners, which will eventually add up to significant problems.
Show me the Sparks of AGI bro!
Sam Altman had this to say with Kara Swisher.
A Steep Price for Productivity
Since Microsoft wants Bing to be more profitable with ads, it’s also going to steal more of your data, whether you understand what using ChatGPT means or not. With partial ownership of OpenAI, it can do some pretty radical things with the startup that benefit itself in advertising and the cloud. It reeks of corporate greed, without fair consideration of the consequences of a stark lack of Responsible A.I. and ethics considerations.
I don’t even think Casey of Platformer (also published on The Verge) goes far enough in his reporting. He doesn’t connect the dots on what this means for OpenAI’s own ethics and practices.
To be honest, the efforts of BigTech in Responsible A.I. have always looked like more of a gimmick and a public relations stunt. Without A.I. regulation that’s serious and unbiased by corporate agendas, we aren’t going to get anywhere in making sure A.I. brings good into the world. Microsoft has also conducted various “A.I. for Good” campaigns. Those 7 people at Microsoft are hopefully really good, though even if they are, I doubt they’d actually be listened to. I don’t believe Tech companies can regulate themselves. It’s all pretty absurd.
What happens to foundational technologies like LLMs and MLLMs when they aren’t regulated? GPT-4 is just about to come out. Without best practices around Responsible A.I. (RAI), we are heading for a Generative A.I. hype bubble with some unfortunate consequences. Microsoft’s reputation is, and will be, at stake.
Last year, a reorganization saw most of the ethics and society team transferred to other teams. On March 6, 2023, John Montgomery, corporate vice president of AI, told the remaining members that they’d be eliminated after all. So in truth, Microsoft is eliminating a voice around A.I. ethics inside its own organization. Is this how companies behave “under pressure”?
“The pressure from [CTO] Kevin [Scott] and [CEO] Satya [Nadella] is very very high to take these most recent openAI models and the ones that come after them and move them into customers hands at a very high speed,” he said, according to audio of the meeting obtained by Platformer.
It’s really embarrassing for Microsoft. I’m sure that, like Meta, they will be doing more layoffs. BigTech is apparently bloated with employees it doesn’t actually need. There’s even been talk of over-hiring to do fake work. Was RAI fake work? Is Responsible A.I. a joke to Microsoft?
You can afford to build OpenAI a massive supercomputer and give them a $10 billion extension of funding, but you cannot afford a Responsible A.I. team that you will actually listen to?
The high cost of machine learning is an uncomfortable reality in the industry as venture capitalists eye companies that could potentially be worth trillions. The “pressure”, it seems, is allowing even the best companies among us to cut corners in trust and safety. Does Satya feel pressure in starting an A.I. arms race with Google and Amazon in the cloud wars?
Microsoft even wanted to disrupt Google with Bing to serve us more ads; search ads are by far the most lucrative niche. We can’t have A.I. researchers complaining about safety now, can we? In this manner, Monopoly Capitalism will lead to some dangerous outcomes. When we look back, this will be one of those critical junctures at which we failed to regulate A.I., until one day it becomes impossible to do so. Some analysts and researchers believe we’ve already reached that point.
Eliminated by A.I. for Executives who feel the “Pressure”
Sorry Microsoft, I knew your A.I. for Good movement was a sham the entire time. But now you’ve really been caught, and I’m sure the employees are talking. I’ve been warning about winner-takes-all Capitalism and its intersection with A.I. for a while now. My work is not well read and never goes viral.
You have to feel for the people who actually believed their work mattered:
One employee says the move leaves a foundational gap on the user experience and holistic design of AI products. “The worst thing is we’ve exposed the business to risk and human beings to risk in doing this,” they explained.
What can you even say? What can you conclude? Analysts and technologists estimate that the critical process of training a large language model such as OpenAI’s GPT-3 could cost more than $4 million. But how much will lapses in RAI principles and accountability cost humanity in the future?
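That multi-million-dollar training figure can be sanity-checked with a back-of-envelope calculation. This is a minimal sketch, not a real accounting: it assumes the common 6 × parameters × tokens approximation of training FLOPs, illustrative A100 GPU throughput and utilization, and a rough $2 per GPU-hour cloud price; none of these numbers come from Microsoft or OpenAI.

```python
# Rough sanity check of the cost to train a GPT-3-scale model.
# All numbers are illustrative assumptions, not disclosed figures.

params = 175e9        # GPT-3 parameter count
tokens = 300e9        # approximate training tokens
total_flops = 6 * params * tokens        # common FLOPs rule of thumb

peak_flops_per_gpu = 312e12              # A100 peak (BF16), approx.
utilization = 0.30                       # assumed real-world efficiency
effective_flops = peak_flops_per_gpu * utilization

gpu_seconds = total_flops / effective_flops
gpu_hours = gpu_seconds / 3600
cost_usd = gpu_hours * 2.0               # assumed $/GPU-hour

print(f"~{gpu_hours:,.0f} GPU-hours, roughly ${cost_usd / 1e6:.1f}M")
```

Depending on the utilization and price you plug in, this lands in the low single-digit millions of dollars, which is consistent with the analyst estimates above; the point is that training cost is measurable, while the cost of lapsed RAI principles is not.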
There are no estimates of the cost that the centralization of A.I. decisions imposes on humanity’s freedom. The FTC in the U.S. has failed to protect us, and there is no legal body in the U.S. or globally that can put A.I. right again. If we manage to create AGI, we do so at great risk.
What are we even doing to ensure that A.I. ethics and A.I. Governance actually matters? It's a black box and OpenAI and Microsoft don't even know what they are triggering.
I hope BigTech knows what it’s doing prioritizing greed over safety. Those “Sparks” of AGI are getting smart!