What is a metaverse? The term was coined by science fiction writer Neal Stephenson in his 1992 novel, Snow Crash, in which he envisioned lifelike avatars meeting in realistic 3D buildings and other virtual reality environments.
Fast forward to 2022, and the metaverse has jumped from sci-fi to reality, partly due to Mark Zuckerberg investing US$10 billion in Facebook Reality Labs to create hardware, software and content for the metaverse.
Zuckerberg’s version of the metaverse is a virtual world that one can visit via smart glasses or virtual reality headsets. These computer-generated worlds, whether a beach, a forest or a video game, will contain digital avatars and content that look cartoonish or “gamey”.
Or perhaps they should look like super-realistic 3D models and clones of cities, objects and people, says Dr Jon Lee, founder and chief executive officer of Vizzio Technologies.
He believes that the metaverse will be a realistic representation of a parallel virtual universe, containing lifelike 3D digital objects and people.
Armed with a doctorate in computer graphics from Cambridge University, the Singaporean has developed digital twin technologies that combine computer graphics with artificial intelligence to produce realistic 3D models of cities, images and people.
With 20 years of entrepreneurial experience spanning Glasgow, Scotland, to Guangzhou, China, his software has found applications in the public sector, real estate and facilities management, and healthcare.
Vizzio was started two years ago after he returned from China. Since then, he has worked with a variety of organisations including Surbana Jurong, SCDF, Capitaland and Schneider Electric.
In this month’s Q&A, he tells Techgoondu about the challenges of building this new interactive 3D world and how pioneers like his company are rapidly creating this virtual reality.
The following exchange has been edited for length and clarity.
Q: From your perspective, what is the metaverse?
A: The metaverse is an online virtual world. I believe that the metaverse is the next evolution of the Web, where content is moving from flat 2D images and text to immersive, interactive 3D models.
It will introduce new platforms and marketplaces to enable industries such as real estate, retail and healthcare to create new content and undertake transactions as well as allow people to interact with each other.
Currently, developers use 3D software to create digital worlds where people have to use virtual reality or augmented reality glasses to access them. I don’t think people will want to do this – it’s too inconvenient. I believe that a URL shared on the browser is the best place to distribute 3D content.
Q: But the metaverse is not new. There was Second Life in 2003.
A: Yes, that’s right. Second Life is an online multimedia platform, one of the earliest iterations of the metaverse. It allowed people to create avatars of themselves in its virtual worlds, and thus have a second life online.
But the output is usually polygonal 3D, sometimes cartoonish-looking, which is not a realistic representation of any object. People think it is too cartoony and gamey to be taken seriously.
I believe there is an alternative approach to create a 3D world of people, objects and cities that is dimensionally accurate and hyper photorealistic.
Q: When will the metaverse be realised?
A: We are in the early stages of the transition to a fully immersive hyper photorealistic 3D Web.
3D content is difficult to create for the Web. What is slowing down this process is the traditional way of 3D modelling, which is laborious. Take a 3D model of a city, for example.
You need drones to take thousands of images of the city. The images are then loaded into computers before creative artists and graphics experts use specialised software to stitch them together to produce the 3D model of the city. It is an immensely slow process that can take years for a single city.
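The capture-then-stitch workflow he describes can be sketched in outline. The function names below are illustrative stand-ins, not any real reconstruction engine; in practice each step (feature matching, bundle adjustment, dense meshing) is a heavyweight computation of its own.

```python
# A minimal sketch of the traditional photogrammetry workflow described above.
# All function bodies are illustrative stand-ins, not a real 3D pipeline.

from dataclasses import dataclass, field


@dataclass
class Mesh:
    vertices: list = field(default_factory=list)
    faces: list = field(default_factory=list)


def capture_drone_images(city: str, n_images: int) -> list:
    # In practice: thousands of overlapping aerial photos of the city.
    return [f"{city}_img_{i:05d}.jpg" for i in range(n_images)]


def match_features(images: list) -> dict:
    # Stand-in for keypoint matching across overlapping image pairs.
    return {"pairs": len(images) - 1}


def stitch_to_mesh(matches: dict) -> Mesh:
    # Stand-in for the stitching step that artists and specialised
    # software spend months on (alignment, dense reconstruction, meshing).
    return Mesh(vertices=[(0.0, 0.0, 0.0)], faces=[])


images = capture_drone_images("singapore", 5000)
model = stitch_to_mesh(match_features(images))
print(len(images), "images ->", len(model.vertices), "vertex placeholder")
```

The point of the sketch is the shape of the pipeline, not its internals: every stage depends on the output of the previous one, which is why the end-to-end process is so slow.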
We need a faster, better and cheaper way of doing this.
Q: How do you do this?
A: Our approach starts from computer graphics and combines it with artificial intelligence, technologies that we developed ourselves, to create dimensionally accurate, hyper-photorealistic digital twins of cities, objects and people. We have already filed over 20 patents in Singapore and with the USPTO (US Patent and Trademark Office), and we will file more.
Our technology starts with raw sensor data such as photos, laser scans, satellite imagery and other types of media. The machine learning pipeline identifies every object in a city and quickly reconstructs digital twins at multiple levels of detail, including the textures of buildings, terrain and foliage.
For example, to create a 3D model of a city, we take satellite images, which have a resolution of up to 30cm. Then our AI technology stitches them together to create a digital twin of the city. The technology removes atmospheric clouds and shadows, and optimises the textures of buildings, which are the most time-consuming part for human modellers to re-create.
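The two operations he mentions, masking out clouds and shadows and then stitching tiles into a mosaic, can be illustrated with a toy example. The thresholds and function names here are assumptions for illustration only; Vizzio's actual algorithms are proprietary.

```python
# Illustrative sketch of the satellite-to-mosaic step described above.
# Pixel tiles are lists of brightness rows; thresholds are made up.

def remove_clouds_and_shadows(tile, cloud_threshold=240, shadow_threshold=20):
    """Mask very bright (cloud) and very dark (shadow) pixels with -1."""
    return [
        [-1 if (p > cloud_threshold or p < shadow_threshold) else p for p in row]
        for row in tile
    ]


def stitch_tiles(tiles):
    """Naively place tiles side by side (real systems georeference them)."""
    return [sum(rows, []) for rows in zip(*tiles)]


tile_a = [[250, 100], [10, 120]]   # has a cloud pixel (250) and a shadow pixel (10)
tile_b = [[130, 140], [150, 160]]
mosaic = stitch_tiles([remove_clouds_and_shadows(t) for t in (tile_a, tile_b)])
print(mosaic)  # [[-1, 100, 130, 140], [-1, 120, 150, 160]]
```

In a production system, the masked pixels would then be in-painted from other passes of the satellite, which is one reason multiple images of the same area are needed.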
Using our technology, we are able to create a 3D model of Singapore in two weeks.
Q: What are the applications of this technology?
A: Architects can use a 3D model of a new building in their plans. For example, they can place the building in an area to find out how it fits in, or how to position it so that it gets the best sunlight. Previously they would have had to write special software and use supercomputing power to do this.
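A sunlight study like the one he describes starts from the sun's position over the year. As a simple, hedged illustration, the standard declination approximation gives the sun's elevation at solar noon for a given latitude and day; a real sun-path analysis inside a digital twin would model the full day, the year, and shadows cast by neighbouring buildings.

```python
import math

# Solar-noon elevation estimate using Cooper's declination approximation.
# A simplified first step of a sunlight study, not a full sun-path analysis.


def solar_declination(day_of_year):
    """Approximate solar declination in degrees for a given day of the year."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))


def noon_elevation(latitude_deg, day_of_year):
    """Sun elevation above the horizon at solar noon, in degrees."""
    return 90.0 - abs(latitude_deg - solar_declination(day_of_year))


# Singapore sits near the equator (about 1.35 degrees N), so the noon sun
# stays high all year round.
for day in (80, 172, 355):  # near the March equinox and both solstices
    print(day, round(noon_elevation(1.35, day), 1))
```

Because Singapore's noon sun stays above roughly 65 degrees year-round, facade orientation and overshadowing by neighbours matter more there than seasonal sun angle, which is exactly the kind of question a city-scale 3D model can answer.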
Property companies can use the technology for estate management and maintenance. Governments can use 3D modelling for urban planning. Retail companies can use 3D modelling to showcase new products and promotions to consumers.
Q: How did you develop this technology?
A: In 1999, I joined a tech consulting company in Glasgow called Picsel Technologies. At that time, handsets had tiny screens and were powered by simple microprocessors with little computing power. We saw an opportunity to develop 3D graphics for mobile phones.
We could develop the software and sell it to handset makers like Samsung, Motorola and Nokia. However, the microprocessors could only handle simple graphics, so we had to come up with a creative way to develop 3D graphics.
We developed a software stack in assembly language, a low-level programming language, to write graphics routines that could be compressed and embedded in the phones. We licensed it to Motorola in China, which put it on 140 million handsets.
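To illustrate why compressing graphics data mattered on those memory-constrained handsets, here is a toy run-length encoder. It is purely illustrative; Picsel's actual assembly routines and compression scheme are not public.

```python
# Toy run-length encoding (RLE), a classic trick for shrinking graphics
# data with large flat regions, as early phone UI graphics typically had.
# Purely illustrative; not Picsel's actual scheme.


def rle_encode(pixels):
    """Collapse runs of identical pixel values into (value, count) pairs."""
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][0] == p:
            encoded[-1] = (p, encoded[-1][1] + 1)
        else:
            encoded.append((p, 1))
    return encoded


def rle_decode(encoded):
    """Expand (value, count) pairs back into the original pixel list."""
    return [value for value, count in encoded for _ in range(count)]


# A scanline with large flat regions: 48 pixels compress to 3 runs.
scanline = [0] * 20 + [255] * 8 + [0] * 20
packed = rle_encode(scanline)
assert rle_decode(packed) == scanline
print(len(scanline), "pixels ->", len(packed), "runs")
```

The same principle, spending a little CPU to save a lot of memory, is why hand-tuned assembly routines plus compressed assets could fit 3D graphics onto phones with very limited hardware.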
This became the foundation of our technology today. Later, I wrote the AI algorithms to enhance the software.