Opinion: AI's Rocky Revolutionary Road
Published: 2023-02-10
AI, especially the likes of ChatGPT and Stable Diffusion, has dominated recent headlines. I wrote about how this rapidly-emerging tech will affect us, what secrets it holds and where it needs to improve. But there is more to unpack after recent events, and the landscape is changing almost daily. As with any technology with such big potential, there will be players wanting to control it and there will be side-effects that go unnoticed until it is too late. Will the AI revolution be any different?
Changing the Game
"Talking to a computer as naturally as a person will revolutionise the everyday experience of using technology," remarked Eric Boyd, Microsoft's head of AI Platforms. And indeed it will, as long as the revolution is free to unfold to the benefit of the public. Since ChatGPT burst onto the scene, attracting over 1 million new users in a week, other giants such as Google have rushed to release their own models, and more will come. When there is a gold rush such as this, what corners will be cut, what deals will be made and who will be the losers?
OpenAI and Stability AI are both examples of 'open' companies; one is a research and development company and the other is open-source. Or at least that was the case until Microsoft invested $10 billion into OpenAI. How much influence will this give Microsoft? Should we be concerned about its control over the path down which AI is likely to tread?
Rana Foroohar's book "Don't Be Evil" peeked under the hood at how much influence these tech giants have on governments, markets and societies as a whole. Sarah Frier's "No Filter" also took a look at what happens when a genuinely community-driven, quality-centric app like Instagram is taken over by a growth-at-all-costs company like Facebook. Both books are worth a read before making too many assumptions about the future of AI.
Credit where Credit is Due
There has been a recent burst of lawsuit activity around AI. A class-action suit against GitHub Copilot claims infringement of programmers' copyright, and Getty Images claims that Stability AI illegally scraped millions of licensed photos to train its Stable Diffusion model.
These suits seem fair at first glance, and I think we will see more of them. After all, the AI models are trained on terabytes of content originally produced by humans. Where would they be without this original library of creativity? Now, unknown to many, these same models are pushing content back into society for consumption. The Associated Press has been using AI to publish articles for years, and CNET recently did the same.
CNET faced backlash for this and shut the operation down, apparently due to its secretive nature. But there is another angle to large publishers using AI to generate content: what effect will this have on the source? Is this the start of the world's biggest echo chamber? A Wired article recently asked: how long until new models are trained on enough AI-generated content that it starts to influence their behaviour? To quote the article: "All of these models are about to shit all over their own training data".
Douglas Rushkoff wrote that we tend to "learn what our computers already do instead of what we can make them do", and I think these are important words. AI is going to be wrapped up into neat little products for us to consume, and those products will be shaped by the biggest players to suit their own interests. Without sounding alarmist, we need to be wary of these agendas and mindful of how the new technology can benefit us rather than its owners.
When a revolution comes along, many factors influence its success, and many go unnoticed until it is too late. It is gradually being revealed that AI is not quite at the stage the media makes it out to be, but one day it will be. The road to revolution contains many exits - what do we need to learn before we can choose which one to take?