How mind-mimicking chatbots matter to the metaverse's next metamorphosis
Will the metaverse vision benefit from generative AI technology? Let's find out
By Hassan Baig.
The metaverse is a virtual or augmented reality experience that will introduce new layers of reality on top of our existing one. It's a vision Meta (formerly Facebook) has helped propel into the limelight — although our fascination with virtual reality and the metaverse dates back to the 20th century.
However, we have now entered the era of ChatGPT, a chatbot that communicates in a remarkably human-like way and has given us a glimpse into the power of 'generative AI' applications. Generative AI can produce original content, from text and images to audio and code, which means it has implications for virtually every human endeavour, including the metaverse.
So will the metaverse vision benefit from such generative AI technology? The short answer is yes: generative AI will hasten the emergence of the metaverse and enrich the experience once it goes mainstream. Let's see how.
Venture capitalist Matthew Ball, one of the foremost global experts on the metaverse, recently shed light on the friction holding back the metaverse's growth. Essentially, slow progress in battery technology, wireless power, optics, and computer processing has delayed the metaverse dream again and again.
For instance, computer processing is a bottleneck because the metaverse requires high-performance computing to generate the interactive and highly dynamic environments it needs. And that computing power will need to be miniaturised enough to sit on the bridge of your nose, ideally weighing no more than 40 grams — the weight of typical spectacles. That's a tall order. Probably the stuff of science fiction.
However, by leveraging generative AI, the metaverse can reduce its reliance on traditional, computation-intensive methods of rendering 3D content in real time. Generative AI models can, for example, create virtual environments, objects, and characters on the fly at a fraction of the computational cost.
This will reduce the amount of data that needs to be stored and processed, freeing up computational resources while simultaneously making it possible to build more complex and immersive virtual environments.
Additionally, many of these AI models can run entirely on your device. This means a small file on your device will eventually be able to 'draw' high-resolution virtual environments in real time without needing an internet connection at all.
Note that this isn't a future prediction. That small, offline-capable model is already here. It's called Stable Diffusion, and developers have been using it to render 3D virtual environments in real time since at least October last year. So powerful is this technology that several countries, including neighbouring India, Bangladesh and Nepal, have signed MOUs to adopt it nationwide.
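To give a flavour of how lightweight this workflow can be, here is a minimal sketch of generating a virtual-environment backdrop locally with Stable Diffusion via the open-source Hugging Face diffusers library; the model name, prompt and settings are illustrative assumptions, not a description of any particular developer's pipeline.

import torch
from diffusers import StableDiffusionPipeline

# Illustrative sketch: once the model weights are downloaded, generation
# runs entirely on the local GPU, with no internet connection required.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # a commonly used public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Describe the scene instead of hand-modelling it.
prompt = "a vast alien desert at dusk, panoramic skybox, highly detailed"

# A single pass produces the backdrop image; no meshes, textures or
# lighting data need to be stored or streamed from a server.
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("skybox.png")

The point of the sketch is the design trade-off: instead of shipping and processing gigabytes of hand-built 3D assets, a few gigabytes of model weights can synthesise endless variations of scenery on demand.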
All of this has happened within the last six months, with Stable Diffusion becoming the fastest-growing open-source project in the tech community (growing faster than Bitcoin and Ethereum did). What the next six months will bring is anybody's guess.
While generative AI may solve the computation bottleneck, we will still need advances in battery technology, wireless power and optics to make the metaverse dream come true. Experts are divided on how quickly those advances will arrive, and pessimism among them is on the rise.
However, this calculus changes fundamentally if AI becomes smarter than humans and helps us design solutions we cannot imagine on our own. Such posthumanism is outside the scope of this article, though, since it's impossible to predict what happens in a world that nears a technological singularity.
However, one thing is clear: once the metaverse does emerge, AI will play a seminal role in populating it. Virtual beings with human-like cognition are already possible; startups in this space have been sprouting up since at least 2016. So it's easy to extrapolate that the metaverse will be a hotbed for millions (if not more) of such beings, perhaps even outstripping the human population.
Moreover, most humans will also have AI representations of themselves (i.e. simulacra), allowing each individual to be omnipresent in different shapes and forms, across modalities like audio, video, text and holograms. These AIs will also persist beyond the underlying human's death, representing them forevermore. It may sound like science fiction, but it isn't.
In 2016, Eugenia Kuyda, co-founder of the startup Luka, famously built an AI to simulate her deceased best friend, Roman Mazurenko; you can find him by this name on the Apple App Store to this day. Needless to say, occurrences like these will be the rule, not the exception, once AI becomes a commodity available to all.
As with all technological advancements, there's a rising fear that AI will replace humans and joblessness will increase. However, let's not forget that the world has seen three industrial revolutions before the current one.
At each juncture, automation increased exponentially, yet no sizeable joblessness resulted (in the aggregate). In this day and age, employment rates are higher than (or at least on par with) those in centuries past.
So if history teaches us any lesson, it's that people will not lose jobs so much as the nature of jobs will change. Will there be cab drivers once self-driving cars come to the fore? Will there be logo designers once AI starts producing polished logos at scale? Will content writing still exist as a standalone job? Some of these roles are already fading, but new ones, like 'AI prompt engineer', are arising as well. Overall, we're in for an exciting, life-changing ride.
Hassan Baig is the CEO of an AI and Metaverse technology company. He is a serial entrepreneur who previously founded gaming, digital literacy, and social networking startups. He tweets @baigi