Back in my second year of a Bachelor of Applied Science (Computing), some first-years at UCCQ (Australia) decided to start hacking through the old VAX systems and into UCLA (USA). It was 1990, and to some, it was just a bit of fun. Digital bravado in the early internet age. I’ll admit I thought it was exciting too. But I also knew it was bloody stupid. And sure enough, they were caught and excluded from their courses for misconduct.
I asked one of them, the cleverest of the lot, why he did it when he knew it was wrong.
His answer:
“I wanted to see if I could do it.”
Fair enough, I figured at the time. Still dumb, but at least honest.
But these days, I see it differently. That mindset of just wanting to see if you can hasn’t gone away. It’s now found in people with far more influence than a few cocky undergrads. And the problem is, they’re pushing boundaries without thinking through the consequences.
Pushing boundaries can be good. It expands our understanding of what’s possible and often drives innovation. But stepping forward without caution, preparation, or a plan for what comes after is reckless.
In my final year, 1991, I studied machine intelligence. It was fascinating. The theory behind how machines might learn and think was just beginning to take shape in meaningful ways. Even then, we couldn’t have foreseen how quickly the field would develop. And few people gave serious thought to the ethics.
Consider the recent effort to bring back the dire wolf. It's been portrayed as a scientific breakthrough, a feat of genetic resurrection. But what habitat are we actually bringing them back into? The world has changed drastically in the 10,000 to 12,000 years since they last roamed. Their prey species are mostly extinct. The ecosystems they once belonged to no longer exist. Reintroducing them could put existing species at risk and undermine conservation efforts already underway.
And if their environment no longer exists, are we not being cruel? Bringing them back only to place them in a world they were never built to survive? One already gasping under the weight of pollution, deforestation, and climate change?
The same questions apply to artificial intelligence. Back in the 1960s, programs like ELIZA were created to simulate human conversation using pattern-matching. It was primitive, but it captured the public imagination. For decades, AI was mostly theoretical or limited to narrow tasks. But now, with the rise of generative models, it’s become widespread and commercially powerful.
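For readers who never met ELIZA, the mechanism really was that simple. Here's a minimal sketch of the idea in Python, a hypothetical reconstruction rather than Weizenbaum's original script: a handful of pattern rules, each mapping a matched phrase to a canned reply.

```python
import re

# ELIZA-style pattern matching, in miniature. Each rule pairs a regex
# with a reply template; the first rule that matches wins. These
# particular rules are illustrative, not from the original program.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE),
     "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE),
     "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Echo the captured fragment back inside the template.
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am worried about AI"))
# -> How long have you been worried about AI?
```

No learning, no understanding, just reflection. And yet people confided in it. That gap between how a system works and how it feels to use is exactly where the ethical questions live.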
AI started off mimicking creativity: writing text, composing music, generating images. Clever stuff. But now it’s edging into areas like medical diagnostics, legal advice, and finance. In manufacturing, so-called “dark factories” (fully automated production lines that run without human staff or even lighting) have already been built in parts of China and elsewhere. This isn’t science fiction. It’s reality.
And with it comes a wave of disruption. As automation replaces more jobs, entire industries face collapse. Millions of people could be displaced. Yet our economic systems aren't equipped to support that kind of shift. Universal basic income remains mostly theoretical, while the cost of living rises faster than wages.
So I have to ask: did the people building this tech stop to think about the fallout? Or were they just seeing if they could?
That question sticks with me.
Just because we can, should we?
As someone who once studied machine intelligence and now writes speculative fiction, I believe these questions are more important than ever. It isn’t anti-progress to pause and think. It’s common sense. Because when we stop asking why, and only ask how, we risk making irreversible decisions that affect not just the present… but the future we leave behind.