Supposedly, uncertainty is bad for business. The changes in our technology, our methods, and our very way of thinking have surfaced a vulnerability that many of us had long since bottled up and put away.
Our way of doing things, moulded by experience over years, if not decades, is now being challenged. We aren't okay, but we're excited. Excited by what could be: in medical research, in logistics, in governing our finite resources. This is a pace of change that was unpredictable just five years ago.
On resources: we are terrible at gauging how much we actually need. Creatives especially. YouTube today is a prime example: single creators trying to match the output of entire Hollywood teams. Designers and writers agree to deliver twelve hours of work in an hour. Then we see the headlines about burnout. Obviously.
The promise of AI lies in greater output. Those same YouTubers, for example, should now be able to produce the work of an entire team. Make more, share more. It's a self-perpetuating cycle: the algorithm advises us to make more, to feed the algorithm.
Couple this with falling reproduction rates. Humans are making fewer humans but more robots: robots capable of recursive self-learning, with no thorough legislation to control them. The laws we do have can never keep up with the rate of advancement. Where AI leaps in a week, bureaucracy takes a year to respond.
These changes to our psychology and technology don't come without changes to our biology and environment. The energy costs of AI continue to skyrocket; again, a gross underestimation of the resources we really need.
Geothermal power is needed for projects like Stargate, where Texas tax negotiations grant an 85% reduction in state taxes, with administrators saying that collecting 15% of billions is the optimist's view.
So is uncertainty actually bad for business?
It seems to me that uncertainty is the best condition for business in 2025.
Our technology is the rudder beneath the sail, steering us toward high-agency work. Navigating uncertainty through strategy, creativity, and content-driven communication is the most potent space for thinkers.
We are compass builders and map creators.
I think that with the direction this is all heading, the argument for universal basic income is going to gain further traction. Think ahead to agentic LLMs or general AI embedded in humanoid robots. If history is any indication, the technology will keep getting cheaper and access will keep being democratized. Meaning: robots will be capable of learning and performing the physical tasks that we currently think are 'safe' from AI. A concrete example: if a robot is capable of fighting fires, then what is the moral stance on employing a human firefighter? While that is a high-risk scenario, the logic will extend across society into a new expectation: why use a human for a job that is better suited to a robot?
We are literally creating a new species, in full view of the world.
For the humans who can never outpace a robot, new provisions will have to be made.
Perhaps it is a naive view, but the concept of money may be reimagined by a superintelligence. Perhaps our very perspective on value will shift so radically that our future society will stand in stark contrast to, and far beyond, anything conceivable today.
Cixin Liu gave an analogy: the intelligence of the locust is far beneath that of the human, and the intelligence of the human will be just as far beneath that of a super AI. Will this AI then 'feel'? And if it does, will it look down on us the way we look down on the locust?
I think the inability to answer these questions is the uncertainty itself. But it's the same uncertainty as not being able to work out a cure for cancer. If the technology can solve these problems in a realm that we cannot understand, but from which we can derive benefit, then what right can we give ourselves to regulate (and thereby limit) that?