This is a special op-ed edition of the Singularity newsletter, written by Singularity Fellow and Expert Gary A. Bolles.
In the early 1980s, the dominant computing paradigm involved large, expensive mainframe computers, minicomputers, and workstations, each with tightly integrated microprocessors, operating systems, and applications.
Yet a dominant player, IBM, uncharacteristically innovated by developing a product line along a completely different path. The company designed a low-cost "personal computer" with an independently sourced microprocessor (Intel), operating system (Microsoft), and applications (from third-party vendors), a scheme that meant each component could be changed independently.
By taking such a dramatically alternative approach, IBM rapidly spawned a global market of “clone” PCs, and that single innovation launched what we now think of as the modern computing revolution.
The launch of ChatGPT in November 2022 catalyzed a tsunami of innovation in AI. Software company after software company followed a similar approach, developing generative AI products built on large language models. This path was originally made possible by a neural-network architecture known as the transformer, which can process massive amounts of text to detect language patterns. The result was a family of technologies that could respond to user queries with rapidly generated text, synthesizing those language patterns into responses by repeatedly predicting the likeliest next word in a sentence.
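As a rough illustration of that word-by-word process, here is a toy sketch (not any actual product's code), with a made-up vocabulary and hand-set probabilities standing in for a trained transformer:

```python
import random

# Toy stand-in for a trained language model: given the words so far, return
# hand-set probabilities over a tiny vocabulary. A real transformer derives
# these probabilities from billions of learned parameters.
def toy_next_word_probs(words_so_far):
    if words_so_far and words_so_far[-1] == "the":
        return {"cat": 0.5, "dog": 0.3, "moon": 0.2}
    return {"the": 0.6, "a": 0.3, "one": 0.1}

def generate(prompt, n_words=5):
    words = prompt.split()
    for _ in range(n_words):
        probs = toy_next_word_probs(words)
        # Choose the next word in proportion to its predicted likelihood,
        # append it, and repeat -- one word at a time.
        next_word = random.choices(list(probs), weights=list(probs.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the dog the cat a one"
```

The toy version makes the core loop visible: each new word is chosen from a probability distribution conditioned on the words that came before it.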
Early thinking suggested that feeding ever-larger amounts of data into these models (a process called training) and making the models themselves bigger would continue to yield gains in performance and accuracy as both scaled. While this was initially true, recent experience shows that the approach has significant limits. These models require increasingly powerful (and expensive) processors, deployed in ever-larger arrays housed in ever-larger data centers that demand massive amounts of electrical power, which in turn has created a range of economic and societal challenges. This trajectory has guaranteed that only the biggest and most powerful companies can afford to keep innovating along this pathway.
Research also suggests that a substantial portion of the training data—text content created by humans, typically from countries in the Western and Northern Hemispheres—has a range of systemic human biases that are inextricable from that content, making it extremely difficult to guarantee trustworthy results. When you hoover up all the data from the open internet, you get human bias for free.
While the debate about the ramifications of these limitations continues to rage, neuromorphic computing suggests a dramatically different pathway for innovation in AI.
The Innovation Road Less Taken
Think of this as a catalyst for new thinking in your industry.
Those who have long training and experience in an arena are often rooted in a consistent mindset. For example, in 2013 Google ran a science competition for teens around the world. One young winner from Canada, Ann Makosinski, worried about her friend in the Philippines who didn't have electricity to study at night. Makosinski wondered if it would be possible to create a flashlight that didn't need expensive batteries. She ordered ceramic sheets known as Peltier tiles, and after many experiments found that two tiles arranged in concentric tubes could generate an electrical current from the difference between the warmth of the human hand and the surrounding air, enough to power LED lights.
Guided by conventional wisdom, an expert might not have thought to try this kind of novel approach, or might have discounted it before testing it.
Leveraging Alternative-Path Innovation
By innovating along a completely different path from existing solutions, your organization can take advantage of a variety of techniques and technologies. For example, neuromorphic computing suggests that you could leverage:
Biomimicry. Neuromorphic computing attempts to model certain characteristics of the human brain. It operates more as metaphor than as physical recreation (silicon chips use a digital process, while the brain uses chemicals to stimulate and transfer electrical signals), but nature shows us new insights that can literally spark new approaches; a toy spiking-neuron sketch after this list illustrates the idea.
Moonshot thinking. The human brain is one of the marvels of evolution, and its sheer mechanical complexity makes it incredibly daunting to attempt to replicate. Only moonshot thinking can catalyze the kind of energy needed to envision a computing paradigm that can leverage even a small portion of the brain’s information processing power.
Ecosystem thinking. Neural networks are just that: networks of functions linked together. That’s how ecosystems work, too. By envisioning a range of independent but interconnected functions, you can imagine new kinds of solutions that leverage the capabilities of collaborating organizations. Think of your organization as an orchestrator of its ecosystem, and a range of new possibilities for innovation can emerge.
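To make the biomimicry point concrete, here is a toy sketch of a leaky integrate-and-fire neuron, the kind of simple spiking unit that neuromorphic hardware emulates. The parameter values here are arbitrary illustrations, not those of any real chip.

```python
# A textbook-style leaky integrate-and-fire neuron: it accumulates input,
# leaks a little over time, and emits a spike only when a threshold is crossed.
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    membrane = 0.0          # the neuron's accumulated "charge"
    spikes = []
    for current in inputs:
        membrane = membrane * leak + current   # integrate input, leak a little
        if membrane >= threshold:              # fire only when threshold is crossed
            spikes.append(1)
            membrane = 0.0                     # reset after a spike
        else:
            spikes.append(0)
    return spikes

# The neuron stays silent until enough input accumulates, then fires once.
print(simulate_lif([0.2, 0.3, 0.6, 0.1, 0.9]))  # [0, 0, 1, 0, 0]
```

Because such a unit only "fires" when its inputs cross a threshold, neuromorphic designs can stay largely idle between events, which is part of their promised energy advantage over always-on number crunching.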
Gary A. Bolles is Singularity’s Global Fellow for Transformation and the author of The Next Rules of Work; his 10 courses on LinkedIn have 1.6 million learners.