This is hilarious.
In an open letter by the Future of Life Institute, a nonprofit group that hopes to reduce global catastrophic and existential risks to humanity, 1,100 tech luminaries, including Apple co-founder Steve Wozniak and Tesla CEO Elon Musk, proclaimed that AI poses "profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs." Therefore, "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
Like that's going to happen.
Look, generative AI systems such as ChatGPT, Google Bard, and Meta's LLaMA are here to stay. The AI genie is out of the bottle, and no one's putting it back in.
The public petition calls for "AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Why? Because — shades of The Terminator — advanced AI could lead to a "loss of control of our civilization."
We are, they wrote, "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control."
As a result, the Center for Artificial Intelligence and Digital Policy has asked the US Federal Trade Commission (FTC) to halt new GPT releases, and UNESCO has called on world governments to implement its global ethical AI framework.
That's not going to happen, either.
I mean, Microsoft, which bankrolls ChatGPT maker OpenAI, laid off its entire AI ethics and society team. Ten billion dollars for AI, not one red cent for ethics!
Am I concerned about the effects of AI on business? You bet. I'm not terribly worried about how it will affect me, even though writing's one of the areas where ChatGPT and the like are already having an effect.
The simple truth is that while AI is impressive, it's not all that it's cracked up to be.
The worrywarts say that governments should declare a moratorium if a pause can't be enacted quickly. We should only move forward when it's clear AI's "effects will be positive" and the risks "manageable." These decisions "must not be delegated to unelected tech leaders."
Clearly, this was written by people who didn't watch the clown show that was Congress's TikTok hearing. There, leaders such as Rep. Richard Hudson (R-NC) had trouble understanding how Wi-Fi works with apps, and Rep. Buddy Carter (R-GA) was sure TikTok measures users' pupil dilation. (It doesn't.)
By and large, our elected leaders don't have a clue about technology. So for better or worse, we, the big tech companies, and we, the business users of generative AI, will be the ones calling the shots.
Yes, it's going to change everything. But we don't yet know how.
I was an internet expert when I wrote the first popular story describing the web. I got some things right, but I completely missed how important search services would become.
Since no one knows how AI will evolve, it's important to be cautious about how you use it — not because you'll wind up as human batteries in the Matrix, but because you might invest in a technology that doesn't end up working out.
For example, right now, Microsoft and OpenAI's ChatGPT are clearly in the lead. So does that mean you should start paying ChatGPT consultants big bucks? Maybe.
But back in the early internet days of the 1990s, Netscape was the company. In 2023, most people don't have a clue what that company was.
Maybe Google Bard will catch up. Maybe some company we've never heard of will develop a large language model (LLM) that blows the doors off today's AI chatbots.
We'll find out soon enough.
No one is going to stop developing AI just because experts are worried. If they were to try, they know darn well their competitors would likely keep going.
In real life, when people play out the prisoner's dilemma, they almost always do what benefits them the most. And in this case, that means continuing to work on AI: whatever the other labs do, racing ahead pays better than pausing, as the sketch below shows.
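To make that concrete, here's a toy Python sketch of the dilemma. The payoff numbers are my own illustrative assumptions, not anything from the letter; the point is simply that, whichever move the rival makes, continuing beats pausing.

```python
# Toy prisoner's-dilemma payoffs for two AI labs deciding whether to
# honor a voluntary development pause. Numbers are illustrative only.
# Each entry: (payoff to Lab A, payoff to Lab B).
payoffs = {
    ("pause",    "pause"):    (3, 3),  # both pause: shared safety benefit
    ("pause",    "continue"): (0, 5),  # A pauses, B races ahead
    ("continue", "pause"):    (5, 0),  # A races ahead, B pauses
    ("continue", "continue"): (1, 1),  # both race: arms-race costs
}

def best_response(their_choice):
    """Return Lab A's payoff-maximizing move given Lab B's choice."""
    return max(("pause", "continue"),
               key=lambda mine: payoffs[(mine, their_choice)][0])

for their_choice in ("pause", "continue"):
    print(f"If the rival lab chooses {their_choice!r}, "
          f"your best move is {best_response(their_choice)!r}")
# Prints 'continue' both times: continuing is the dominant strategy.
```

Notice that both labs racing leaves everyone worse off than both pausing (1 apiece versus 3 apiece), but since neither can trust the other to stop, they keep going. That's the dilemma, and that's the AI industry right now.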
The same logic applies to users: even if all you ever do is come up with questions to ask an AI chatbot, you'll keep working with AI as well.
And let's say someone does unveil the next AI revolution in the next few months. It could happen.
Are you going to wait for the ethical issues to be resolved, or will you put it to work to your best advantage?
I know what I'd do. Oh, and I know what GPT-4 would do, because I asked it: "A six-month stoppage in one region or for specific researchers could give other regions or competitors the opportunity to gain a competitive edge in AI development."
Need I say more?