Why AI is Never Purely Good, Bad, or Boring
Written by Ossi Syd.
I read an article on the Finnish Broadcasting Company Yle’s website (in Finnish) that draws a parallel between artificial intelligence and digital asbestos. For someone who’s founded two AI consulting firms, such a claim brings on an unpleasant, constricting feeling in the chest:
Am I, through my work, destroying people’s health in ways that will take decades to fix – maybe without even being aware of the risks, just as with asbestos?
Development is for the millions
Let’s start with a bit of background so you’ll understand how I’ve built my perspective.
Throughout human history, technological advancement has allowed us all to break free from the sole focus on food production. Technology has made work more efficient and created added value that people can use to fulfill their personal needs and dreams. Just 200 years ago, this added value was so microscopic for most people that it was barely noticeable amid the miserable conditions of the first factories. As a side effect, we’ve gotten sharp hierarchies, slavery, and climate change, but also culture, science, and human well-being. Millions have had the chance to lift their hands from the soil, look to the sky, and dream.
Steel that makes sturdy plows can also be forged into swords. The dual use of nuclear power for energy and weapons is a cliché even elementary schoolers can recite. There are dozens, if not hundreds, of similar analogies.
It’s difficult to name a single technology that is fundamentally evil (or good). Even asbestos wasn’t developed to intentionally harm people.
Statistically, it’s unlikely that we now live in the singular moment where technology’s inherent neutrality breaks – where everything suddenly turns ‘bad.’
Perhaps as a counterreaction to the technocratic worldview and the culture of ‘tech bros,’ some people now signal techno-skepticism as a badge of intellectualism, authenticity, and human-centric values. Yet, interestingly, most of these skeptics aren’t heading to Stone Age caves to become hunter-gatherers or even to smoky cabins to grow turnips. Typically, people want to roll the clock back just 40–60 years – to a technological level that feels just right.
I wonder if, in 1965, anyone thought that ‘now technology is perfectly suited for humans, and we should just stop here’?
How does AI fit into all this?
Let’s get back to AI. The OECD defines AI as follows:
“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
So, in practice, we’re usually talking about a computer program that interprets input more flexibly, requires less deterministic steering, and can operate more autonomously and adaptively than traditional software.
The impact of artificial intelligence should not be mystified. It’s not a little person in a box, but typically statistical math or other mathematical modeling (e.g., neural networks), computed very quickly and on a massive scale.
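To make the “statistical math, not a little person in a box” point concrete, here is a deliberately toy sketch (the data and labels are made up for illustration): a nearest-centroid classifier that “decides” which group a new data point belongs to purely by computing averages and distances. Real AI systems are vastly larger, but the principle is the same math, run fast and at scale.

```python
# Toy illustration: an AI "decision" as plain statistics.
# A nearest-centroid classifier assigns a point to whichever
# class average (centroid) it is closest to.

def centroid(points):
    """Average each coordinate across a list of points."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(point, centroids):
    """Return the label whose centroid is nearest to the point."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(point, centroids[label]))

# Hypothetical training data: two clusters of 2-D measurements.
spam = [[5.0, 1.0], [6.0, 0.5], [5.5, 1.5]]
ham = [[1.0, 4.0], [0.5, 5.0], [1.5, 4.5]]
centroids = {"spam": centroid(spam), "ham": centroid(ham)}

print(classify([5.2, 1.2], centroids))  # -> spam
print(classify([1.0, 4.8], centroids))  # -> ham
```

No intent, no understanding – just arithmetic over learned data, which is the essence of the point above.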
Large Language Models ≠ AI
The most visible variant of AI today is large language models. Many even think AI means solely these models. Even though media headlines often warn us about rogue AIs turned evil, reality looks different. An AI model behaves according to how it was trained and the data it learned from. If that data is the entire internet, it’s no wonder bad examples crop up. Hallucinations are a feature of large language models, not a bug; while they may seem ill-intentioned, they’re a far cry from intentional human action.
So we have rapid statistical computation with which we can automate, speed up, streamline, or otherwise improve a host of human decision-making tasks. Ultimately, these are decisions that many people could make manually, if only we had unlimited time and resources. Sometimes you get interaction that feels human (such as chat interfaces for language models).
This is a core technology whose applications call for societal debate – AI ethics, for example. And on top of this core, you can build endless applications. Some may fuel dystopian visions of surveillance societies or autonomous weapons. Others may lead to better, cheaper, and faster disease diagnostics, new medicines, or more climate-friendly farming. Clearly, not all of this is asbestos; as with so many technologies, nothing is just black or white – everything comes in shades of gray.
P.S. Here’s a quick and handy way to check claims about AI: swap out the word ‘artificial intelligence’ for ‘the internet.’ If the claim still sounds reasonable, you might be onto something.