
The metaverse conjures plenty of imagery from virtual worlds like those envisioned in novels such as Snow Crash and Ready Player One. But the industrial metaverse may very well become a reality first.

That conclusion comes from a strong showing by the 150 enterprises that have partnered with Nvidia on Omniverse, the simulation environment being used to build metaverses big and small. The industrial metaverse will consist of digital twins of things such as car factories.

BMW has built a digital twin of a factory in the virtual world so that it can perfect the design before building the real factory. It is one of many digital twins that big companies are building. Another comes from Deutsche Bahn, the big German railway company, which is modeling more than 5,000 train stations so that it can virtually monitor its real-world rail system. What is fascinating about this discussion — about whether the gaming or the enterprise metaverse will happen first — is that each is galvanizing the other.

We talked about such things on our Omniverse panel at the GTC fall event. The panel included Matthew Ball, CEO of Epyllion and author of the book The Metaverse; Rev Lebaredian, vice president of the Omniverse and simulation team at Nvidia; Peggy Johnson, CEO of Magic Leap; Tony Hemmelgarn, president and CEO of Siemens Digital Industries Software; and Inga V. Bibra, head of IT for Mercedes-Benz research and development.

Here’s an edited transcript of our panel.

VentureBeat: Hello, GTC. My name is Dean Takahashi. I’m the lead writer for GamesBeat at VentureBeat. I run our GamesBeat events. We have one coming up called MetaBeat on October 4, and another one called GamesBeat Summit Next (Use Dean50 code for 50% off) on October 25 and 26. At every one of these events, we’ve been talking about the metaverse. This conversation has been going on for a few years now. Our panel here is a very interesting one. We have some folks who’ve also been talking about this metaverse for quite some time. I’d like to have them introduce themselves and talk a bit about where they come from when it comes to talking about the metaverse. We’ll start with Matthew Ball, CEO of Epyllion, who has an outstanding book out called The Metaverse.

Matthew Ball: I’m an investor, author, and producer, primarily focused on the metaverse.

Rev Lebaredian: I lead the Omniverse and simulation team here at Nvidia, focused on all this metaverse stuff, and more specifically the industrial metaverse.

Peggy Johnson: I’m the CEO of Magic Leap. We make a head-worn augmented reality device that allows you to smartly integrate digital content into the physical world. We’re not building the metaverse, but we work to provide a window into the metaverse.

Tony Hemmelgarn: I’m president and CEO of Siemens Digital Industries Software. We make software for designing products, manufacturing, all these types of things. Much of the work we’ve done over the years is in what we call the digital twin, where the real world represents the virtual, or vice versa. The metaverse is a big part of what we see as the evolution of that as we go forward.

Inga V. Bibra: I head IT for Mercedes-Benz research and development. I’m particularly interested in how we can apply the metaverse in the industrial context, in particular on the engineering side: engineering, product design, planning, production, all the way through the life cycle of the digital twins that result.

VentureBeat: Since we’ve talked about the metaverse and Omniverse at multiple GTCs now, I’d like to start with a progress report on where things are. How are metaverse projects and applications and developments progressing? Let’s start with Tony on that.

Hemmelgarn: We do a lot of work with the digital twin. The digital twin is not a new concept. It’s been around for a long time. But the value of the digital twin lies in how closely the virtual world can represent the physical world. If you can make decisions with confidence, knowing that your digital twin representation is comprehensive, then the customers who use our software can move a lot faster as they design the products they’re working on.

What the metaverse brings us is the ability to make that more realistic. More photorealism. The work we do with Nvidia, for example, makes it instantaneously photorealistic, so you can see exactly what’s going on. We’ve been working on a number of use cases that drive toward solving the problems customers are faced with. One example is continuously monitoring production or factory operations: you get notified of a production throughput or quality issue, and multidisciplinary teams come together in a photorealistic view to analyze the issue, identify the root cause, and simulate and optimize solutions.

“Simulate” is the key word here. This isn’t an animation. An animation is okay, but it’s not enough. We need to be able to simulate the physics behind it to say, “If I make a change, what happens?” We’re working on those kinds of use cases with our customers today. We’re very far along in showing the value of what those can be. Like anything in emerging technologies like this, we’re at the forefront, and we’ll go a long way going forward.
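To make that distinction concrete, here is a minimal sketch of the difference: an animation replays fixed keyframes, while even a crude simulation integrates physics forward in time, so changing a parameter changes the outcome. The conveyor model and all of its numbers are invented for illustration.

```python
# Minimal sketch: simulation vs. animation. An animation would replay stored
# positions; this simulation integrates Newton's second law step by step, so
# asking "if I make a change, what happens?" just means rerunning it.

def simulate_conveyor(drive_force, mass=5.0, friction=2.0, dt=0.01, steps=500):
    """Forward-Euler integration of a crate driven along a conveyor."""
    velocity, position = 0.0, 0.0
    for _ in range(steps):
        accel = (drive_force - friction * velocity) / mass
        velocity += accel * dt
        position += velocity * dt
    return position

print(simulate_conveyor(drive_force=10.0))  # baseline design
print(simulate_conveyor(drive_force=12.0))  # proposed change, new outcome
```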

But there are also examples in product design. We talk about the example of designing a yacht for a customer. They want to decide on options. They want to see it in a photorealistic mode. We can do that at design time to show them exactly what it will look like, so they can make decisions in real time. We have a number of use cases we’re working through to drive that with our customers.

Bibra: That’s a good point to start from for me as well. The metaverse is changing a lot about the way we live our personal lives, but it’s also a real paradigm shift in the way we will collaborate in the future. If you look at automotive and automotive engineering, we deal with many very complex components and systems that continuously need to be integrated and validated. The opportunity I see is that we will have this immersive, real-time environment where we can collaborate.

Imagine you’re changing a component in the car as an engineer. Your production planning colleague immediately sees that change, can adjust the parameters of the production equipment, and feeds that back to the engineers. You get these closed loops with the physical simulation capabilities. Our vision is to remain in the virtual space as long as possible. That’s a huge opportunity to save hardware and costs, but also to shorten our development cycles.

We’re currently still in the early stages. We’re looking at a few particular use cases, like a virtual drive, driving the car virtually and having a real experience of it, or a true walk-around of our factories. We’re learning that way what is possible, and where we might have stumbling blocks, in order to push the virtual phase of engineering.

Johnson: In some ways, Magic Leap is the original in this space. We’ve been around 12 years. Dean had a quote about how Magic Leap had been poked at for spending so much money, but maybe we didn’t spend enough money. Now, finally, there are others innovating in the field. We welcome that. It’s an exciting time to be in the metaverse. COVID-19 helped accelerate some of the focus here. But the optics themselves are pretty challenging, particularly in the field we’re in, augmented reality. It’s hard to get that right, tricking your eye into thinking digital content is in front of you. Finally, all of these things are coming together.

Matthew said in his book that many technologies have to come together to progress to a new era. That’s where we’re at with the metaverse. These things are starting to jell. We’re seeing useful use cases for the technology. Largely, we’re focused on the enterprise metaverse first, because the devices are still a little bit big for consumers. But eventually, with further silicon integration, we’ll get down to that glasses format.

Lebaredian: In some ways, we’ve been building this metaverse thing continuously for a long time. At least that’s the way we look at it at Nvidia. All of the technologies and all the things we need to build are just starting to come together now, or that’s what it feels like. But we’ve been working on this thing for decades. Tony alluded to that. We’ve been talking about digital twins for a long time. Siemens has certainly been working on that, and so have many others. You need to reach a certain critical mass of things happening for it to really pop. I feel we’re at that moment right now.

One thing that surprised us: we started working on Omniverse four or five years ago. Originally, we imagined that the early adopters would come from smaller niche industries, from media and entertainment or visual effects. Eventually we would expand into architecture, engineering, and construction (AEC), and after that get into manufacturing and more industrial use cases. We assumed that the manufacturing and industrial sector would be slower to adopt because it tends to be more conservative.

What we’ve found is that the opposite has happened. From the customers that are coming to us and the demand that we’re seeing, it’s primarily from the industrial sector. Something has changed here where companies that build things are realizing that the complexity of the things they build is so great, the only way they’ll be able to do this efficiently moving forward is by first simulating the things they build. They need digital twins. We need a way to iterate and design without having to do it in real life, in the physical world first.

This is a great moment. We were pleasantly surprised that companies such as Mercedes have realized this already. We’re at the very beginnings of it, but already there’s been a lot of progress made.

VentureBeat: I’d interject a couple of things here, too. It’s interesting that Omniverse started where it did, in robotics, but has now evolved into the highest levels of supercomputing as well. You’ve started talking about this Earth-2 project that’s going to involve Omniverse, and will probably also lead to the creation of some kind of metaverse. It’s also fascinating to see companies like, say, McKinsey come out and say they expect the metaverse to be a $5 trillion phenomenon across many industries by 2030. This kind of progress and excitement continues to build around the idea. People haven’t discarded their science fiction dreams yet. In fact, those dreams are just becoming more real.

I’d like to dwell a bit more on the progress for digital twin projects and maybe add some more detail here. How are companies like Siemens, Magic Leap, Nvidia, and Mercedes building these digital twins? What are the technologies you’re using?

Hemmelgarn: I hit on this a moment ago. The value we provide at Siemens with our software is our 3D design, along with all the capabilities we have in manufacturing simulation and computational analysis, whether it’s computational fluid dynamics showing fluid flow or something else. All these things are part of what we do within our software. Again, the digital twin is not new, but as Inga said, I want to stay virtual as long as I can before I go to the physical. You can only do that with a comprehensive digital twin.

For example, consider the complexity of a product. This is why our software has been growing so rapidly in the development of digital twins with our customers. Products are extremely complex. An automobile has hundreds of thousands of requirements that go into it, as does an airplane or whatever else. How do you change one requirement and know how it affects everything else, in a virtual way? If you can’t represent the software, the electronics, the mechanical design, or better yet the manufacturing and automation and all the things that go into building that product, you really can’t simulate it. You can only simulate it partially.
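A toy illustration of what changing one requirement and tracing its effects can mean in practice: walk a dependency graph and flag everything downstream for re-validation. The requirement names and graph below are hypothetical, not Siemens’ data model.

```python
# Toy requirement-change impact analysis (all names hypothetical). Each
# requirement lists the requirements that depend on it; changing one flags
# everything downstream that must be re-validated against the digital twin.
from collections import deque

DEPENDENTS = {
    "battery_capacity": ["vehicle_mass", "range_target"],
    "vehicle_mass": ["brake_sizing", "crash_simulation"],
    "range_target": ["drag_coefficient"],
    "brake_sizing": [], "crash_simulation": [], "drag_coefficient": [],
}

def impacted(changed):
    """Breadth-first walk over the dependency graph from the changed node."""
    seen, queue = set(), deque([changed])
    while queue:
        for downstream in DEPENDENTS[queue.popleft()]:
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return seen

print(sorted(impacted("battery_capacity")))
# ['brake_sizing', 'crash_simulation', 'drag_coefficient', 'range_target', 'vehicle_mass']
```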

What we’ve been doing for many years is building digital twins with our customers to help them go a lot faster. As Peggy mentioned about COVID-19, what we saw is that the customers who moved through COVID-19 much more rapidly than others were the ones with a more comprehensive digital twin. They couldn’t get into the factory. They couldn’t all get together to look at the product design. They had to be able to simulate it. That’s what our software has done for a long time, and our customers are very successful with it.

But you’re never finished with software like that. You always want to keep stretching toward making it more real. That’s the work we do with Nvidia, getting photorealism instantaneously. In automotive, for years it was always, “Go to the cave. Go to the cave where you can see the photorealism of the car before it’s ever built.” Well, why do I have to go to a room? Why can’t everybody see it, all the time, in real time? That’s what I think about when I think about Omniverse: making it more extensive across the entire organization, more real-time, more pervasive throughout the entire design process. That’s what we see when we think about a digital twin. That’s what we’ve been doing and where we expect to take it going forward.

Bibra: That’s absolutely right. We’ll never be finished with building the industrial metaverse. But you need to start somewhere. We start by creating an ecosystem, because it’s not just the metaverse or the digital twin. You need to bring data sources together. You need to bring simulation capabilities together. You need to bring in a lot of context that you might not currently have, to make it photorealistic. You also need to build up the technologies that are necessary for that. Of course, we’re working with cloud technologies and with knowledge graphs to enable the AI part of it. We need to deal with huge amounts of data, with APIs and other technologies.

The challenge we all face is bringing these various data sources together, and at the same time focusing on particular parts where we can experience what it’s like working in the metaverse. The potential I see in the internal product conversation is truly incredible. You won’t need to move. You can attend to a problem on the shop floor, and with the click of a button you might go talk to a supplier about the next component they’re developing. This will bring a lot of efficiency, but also speed. Our vision incorporates everything along the whole value chain, both internally and externally.

How we deal with the data is also very important. There will be customer data that we need to be very careful with, and we’re very conscious of data privacy. At the same time, we need to make sure that we build the mindset to work in the virtual world. It’s a real paradigm shift that we have in front of us. We’re just starting to realize the potential. Building that digital trust is also one of the core aspects of our vision.

Johnson: As a manufacturer of a device that looks into the metaverse, our focus has been on the ecosystem of solutions that we can bring on top of the device. We’ve worked with all sorts of companies and platforms like Omniverse, which has been wonderful, working with the team there. But largely it’s about bringing these solutions to the device, so our end customers can have their pick. We have companies like NavVis, which does reality visualization in factory-based solutions, and another company called Taqtile, which does a lot of workflow processes inside factories. And then there are end customers who’ve been able to leverage what those solutions do. There’s a small Midwestern manufacturing company called PBC Linear. They’ve used the Taqtile solution to reduce their training time by something like 80 percent. There are real bottom-line dollars in using these technologies and solutions to bring people up to speed.

There’s just something about seeing a machine as a digital twin. Cognitively, you’re better able to capture what’s going on in it. We’re just learning about the psychology of the help these digital twins can offer employees, particularly new employees, when they first come up to speed on all the many parts in something like an automobile. There’s a lot of excitement ahead in that ecosystem.

VentureBeat: This is GTC, so I suppose we have to talk about computer architecture in some way. Matthew, what sort of computer architecture or computing power do we need for the metaverse?

Ball: This is a fun place to recap a few points. You spoke earlier about how McKinsey estimates $5 trillion in metaverse value by the end of the decade. That’s actually modest. Goldman Sachs, Morgan Stanley, and KPMG estimate $13 trillion to $16 trillion by the end of the decade. Of course, Jensen has said as much as half of world GDP; based on this year alone, that would be $50 trillion, and since that’s dated in the future, it could be $70 trillion or $80 trillion or more.

Ultimately, this is a question of allocation. What we’re really observing is that the entire world, or much of it, will run through real-time simulation. That helps us understand that this has been a process underway for decades. What has happened is not a new interest but a new capability. We’ve been doing real-time simulation in 2D and 3D for decades, but the computing systems that supported it were limited in the complexity of what they could simulate, or the complexity required such expensive computing devices and runtimes that almost no one could access them. I like to joke that an explosion was useful for Space Invaders, but that degree of visual or physics simulation had limited utility beyond that.

It’s clear that over the last five or 10 years we’ve hit a crossing point where the maturity of the systems, the deterministic physics of the simulation, and the availability of compute have allowed that technology to expand everywhere else. Tony talks about the complexity of car requirements. Rev is talking about Earth-2. What we’re really talking about is making the entire world legible to software, simulating it in that software, and doing so in real time. And doing that at the scale we imagine: supporting trillions of dollars of real estate and architectural projects, billions of dollars of in-flight infrastructure owned by individuals and enterprises, such as a vehicular fleet, and having them work in coordination rather than in isolation.

We certainly don’t have the computing power for that today. We’ve seen Intel estimate that a thousandfold increase in computing efficiency is required. Meta has said more than 500 times. But what’s important is to recognize that it’s a continuum, and we’re slowly closing the gap. While there are certain use cases where we say, “If we achieve level X or deployment Y, we can do thing A or Z,” the set of viable applications is growing daily. That’s why we now see simulation applied to infrastructure, whether in national security or individual property, in healthcare or automotive.

Lebaredian: Inga and Tony touched on the value of staying virtual as long as possible. Going for as long as possible completely in the digital world. Where we’re going is essentially staying virtual forever, even after you build the thing in the real world. You’ve built your factory to create your Mercedes cars. There’s still a ton of value, if not the most value, in having that digital twin still exist alongside the real thing.

If you have that digital twin, and you have a way to link the two so that you can reflect the current state of your factory inside the digital version, you gain a lot of superpowers that computation and software can give you. This is also where Siemens comes in very handy. They have this link between the virtual world and the real world through all the operational technology: a massive amount of data being produced in real time by all the little computers and embedded systems running inside such a factory.

Once you can reflect that in the virtual world, we have the ability to teleport. That’s what Inga touched on. Anyone, anywhere, can go inspect an issue inside the factory without physically having to go there. If you have a simulator that can predict the near future, the things that are about to happen, then you can essentially time travel. You can go into the past by looking at all the data recorded earlier, or go into the future. And if you can compute that simulation really fast, then you can potentially explore multiple possible futures, change things around, and test them.

Doing all of this is obviously super powerful. If you can do it, it will unlock all these abilities. But the computational need is immense, and there are some restrictions from the laws of physics here. We’re limited by the speed of light. Often you need to communicate information from the factory to the simulator to make a decision and bring it back, and that’s limited by the speed of light, depending on where the computers doing the computation are. What we need to build for this dream is a distributed, heterogeneous supercomputer that can handle immense amounts of data and do the computation where it’s needed, when it’s needed. A lot of the computation can be latency-insensitive; you can do it at a large distance, in a data center far away. But a lot of it has to happen right there on the robot, or at least in the same facility.
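The speed-of-light constraint is easy to put rough numbers on. A signal in optical fiber travels at roughly two-thirds the speed of light, on the order of 200 kilometers per millisecond one way, so a back-of-envelope check shows which loops can leave the building. A minimal sketch, with that 200 km/ms figure as the only physical assumption; real networks add routing and queuing delays on top.

```python
# Back-of-envelope propagation delay: light in fiber covers ~200 km per
# millisecond one way. A tight robot control loop cannot consult a distant
# data center no matter how fast that data center is.

KM_PER_MS_IN_FIBER = 200  # ~0.67c; ignores routing, queuing, processing

def round_trip_ms(distance_km):
    return 2 * distance_km / KM_PER_MS_IN_FIBER

for label, km in [("same facility", 1), ("metro edge", 100), ("remote region", 2000)]:
    print(f"{label:>14}: {round_trip_ms(km):6.2f} ms round trip (propagation only)")

# A 1 ms control deadline forces the computation on-site; a 100 ms
# visualization budget can tolerate the edge or even a distant region.
```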

We’re busy building a lot of these technologies. It’s not just about the chips and the computers. A lot of it has to do with new kinds of networking that can move immense amounts of 3D data, spatial data, and all the metadata that overlays on top of it to the right place at the right time, securely and safely.

VentureBeat: Peggy, what sort of computing power is required for the metaverse? But also, what computing architecture or platforms still need to be invented?

Johnson: I look at it in a couple of ways. First, I couldn’t agree more with Rev. What we learned with Magic Leap One was that the needs were high for real-time rendering, turning data around quickly, making instant decisions. When we built Magic Leap One, much of the architecture wasn’t there. The operating systems didn’t have plugins for AR at the time, so the previous team built an OS from the ground up. They built a factory to make the devices and the optics that were so challenging. But it did give us an advantage to have all of that in one facility. We then went even further and built out the full stack, with our OS, a platform on top of that, and even solutions.

If we hadn’t done that, we might not have been able to realize an AR device. This was an area no one had built in before. We had to come up with a whole device without much of a road map. Having done that, we’ve now switched over to Android. That opens up the dev community to us, which was a good move. We’re also an open platform. We want to integrate with as many platforms out there as possible. People use all sorts of different solutions, and we have to be capable of supporting them.

There are two areas of focus for us on compute. You need a lot of compute on the device, because you need to do some things with very low latency: more like a PC-level chip on the device than a mobile phone chip. But then, off the device, we’re going to have a lot of needs, like instantly recognizing what a physical object in front of us is. That’s going to take the power of something like Nvidia’s CloudXR. We can’t do all of that on the device. Being able to tap into that kind of compute at our fingertips is going to open up the opportunities of what the metaverse can be.

VentureBeat: Where is this compute going to happen for the various experiences, like XR, digital twin, and persistent simulations?

Ball: It’s a fun question. To extend what Peggy’s saying, it’s important to recognize that these questions are often asked in isolation, as in “What’s the computing power requirement?”, without recognizing the way all of these things pull on one another. One of the primary problems with these devices is the heat they generate; we’re talking about wearables on the face. We’re talking about the device weight and what that means for wear on your neck. We’re talking about battery life. These are all in tension with the computational needs of the device. You can improve the efficiencies. You can potentially shrink the form factor, as well as the load on the device, which leaves more room for the battery, or otherwise means the battery can last longer or generate less heat while it’s being used.

That’s why the right question is not necessarily compute power. It’s not even necessarily this question of where you place which activities, as Rev mentioned. It’s looking at all of these individual points while also tailoring them to the application. It’s right to start talking about these as industrial versus consumer use cases, or prosumer use cases. The question and the answer are always going to differ in this case.

When we talk about what’s happening in a factory, for example, a few things are different. First, these factories have access to more powerful local computing devices. They can be doing rendering work or performing calculations on a supercomputer that might be 100 or 1,000 feet away. In addition, in an industrial application, a worker might be using a device with a local relay station, or perhaps wearing additional horsepower on their back, so to speak.

This is why we think of this as such a complex, hard-to-answer problem. We know that most of the individual contributors aren’t quite there yet, but it’s more about figuring out the puzzle pieces and the right optimization for the problem and the user, rather than a single answer. But certainly we can tell that some form of those three different locations is going to be necessary. The preponderance of data is going to be processed, managed, and rendered locally. Then we’ll use the edge, and then much farther away data centers, to support that work. In some regards we’re going back to the old Sun Microsystems adage: the network is the computer.
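One way to picture those three locations is as a placement policy: every workload has a latency deadline, and you route it to the farthest tier that still meets the deadline, since distant compute is typically cheaper and more plentiful. A toy sketch, with all tier budgets and task deadlines invented for illustration:

```python
# Toy workload placement across device, edge, and cloud (numbers invented).
# Route each task to the farthest tier whose round-trip latency still fits
# the task's deadline; farther tiers offer cheaper, more plentiful compute.

TIER_RTT_MS = {"device": 0.0, "edge": 10.0, "cloud": 80.0}

WORKLOADS_MS = {
    "head_pose_tracking": 2.0,      # must stay on-device or the user gets sick
    "object_recognition": 50.0,     # can tolerate an edge round trip
    "overnight_factory_sim": 1e6,   # batch job; run it wherever it's cheapest
}

def place(deadline_ms):
    """Pick the farthest tier that still meets the deadline."""
    viable = [tier for tier, rtt in TIER_RTT_MS.items() if rtt <= deadline_ms]
    return max(viable, key=TIER_RTT_MS.get)

for task, deadline in WORKLOADS_MS.items():
    print(f"{task}: {place(deadline)}")
# head_pose_tracking: device / object_recognition: edge / overnight_factory_sim: cloud
```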

VentureBeat: Rev, what foundations have already been built, in your mind, for the metaverse?

Lebaredian: It first starts with building computers that are powerful enough to do these real-time simulations. That’s at the core of what Nvidia has always done. The GPUs we build were at first highly specialized just for rendering, which is itself a physics simulation: the physics of how light interacts with matter. Over time, we’ve made that more and more accurate. Years later, we introduced programmability, so that you could harness the massive data-parallel capabilities of our GPUs for other types of simulation. That’s when we entered high-performance computing and supercomputing to do physics simulation. In the last 10 years, the emergence of AI has exploded on our computing platforms. That introduced the automation of intelligence, of skills. We can start dreaming about creating algorithms that do things we previously had no way of programming.

The combination of all of this, along with the new interconnects and networking capabilities we’re building and designing, is offering the opportunity to solve problems that we once thought were completely impossible. Even what we have today, we’re not fully making use of. There’s a lot of foundational technology available where we haven’t quite figured out how to integrate it all together and write the right software on top of it. We’re exploring all of that. Omniverse, for us, is our way of doing that. We have all of these foundational technologies Nvidia has built, but they have largely been living on islands of their own. We’re bringing them together to show what’s possible. But we’re still at the beginning of this journey.

We’re at a special point in computing history where Moore’s Law is completely dead. The types of computers we have to build to keep increasing computational capability look very different from the ones we’ve built in previous decades. We can’t just rely on the chips we produce a few years from now being many times faster than the chips we produce today. We have to redesign our software to run on heterogeneous computers, and these computers have to be distributed. The applications we run don’t just run on one computer in one place anymore. They run on many different computers: many in data centers, and many right on your body, like what Peggy was talking about.

We’ve made some great strides already, but we’re really at the beginning of this. The metaverse and the technologies behind it, this is the most complex, most difficult computer science challenge of all time. Simulating everything in the world and integrating the virtual versions with the real world. I can’t imagine anything more difficult. It will have an insatiable appetite for computing power, for data. We’re just starting. But we’re finally at a point where this stuff has come together enough so that we can see a path to where we want to go.

VentureBeat: How sad that Moore’s Law is dying just in time for the metaverse. Let’s chat a bit about 3D standards. We have the Khronos Group, which recently came together with others to form the Metaverse Standards Forum. We have glTF and USD. What are some perspectives you have on standards?

Johnson: I’m glad that even though this is still somewhat of an emerging technology, those standards are starting to jell. I always have to use analogies from the mobile phone industry because that’s where I grew up, but remember when you could only send a text message to someone in your network? If you were on Verizon, you could only send a text message to someone who had a Verizon phone. We have to get beyond that from the get-go. We’re working toward embracing the standards that are developing. USD [Universal Scene Description], for one. There’s just so much value in having a cohesive industry out there. We can bring more value to the end users earlier.

One of our customers makes 3D images of hearts for doctors to use during cardiac ablation and heart catheterization. The image used to be displayed on a 2D screen, and your mind had to do the work of spinning it and trying to understand what that heart really looks like. But if we can get all these systems working together on one set of protocols, we can do better: that company, SentiAR, is now putting up a live volumetric heart in front of your eyes. The physician can weave the catheters through the heart with much more accuracy than when seeing it on a 2D screen. Right away, there’s a use case that can improve outcomes in the surgical theater.

But it’s going to take everyone working together, jumping on board with these early standards. We don’t want to have any walled gardens. We have to get beyond those days and have open platforms where we can all move together. The solutions are there, but we’re not going to realize them if we can’t work together on them.

Hemmelgarn: I’ve been doing this a long time, particularly when it comes to 3D geometry and these types of things, and there have been many standards. I always joke that the great thing about standards in these industries is that there are so many of them, right? But in my view, standards really depend on use cases: where you go and what you do. For example, there’s a big difference between animation and simulation. With animation, I can use visual standards and see a lot of things. But when it comes to simulation, what happens when I’m doing a design review of a camera being designed with my software, and someone says, “Yeah, but can you cut a cross-section through that and show me what it looks like? Can you give me a precise measurement?” Suddenly, visual standards aren’t good enough. I have to have some kind of geometric standard behind them.

For example, for years we have had a format called JT, which is almost the de facto standard in industries like automotive and others. It’s an abstraction of 3D models that allows you to visualize and do all these things. But people wanted a bit more than that. They wanted to be able to cut sections and take measurements, but still keep it high-level, not as detailed as a CAD model might be. We’ve used it for supply chain collaboration and those types of things.

From a software developer’s point of view, we will support the standards out there that are driven by our customers. Then it goes back to use cases. I fully support the idea of USD and all these standards. We have to be there. But more will come as standards evolve to serve these types of use cases. That’s where you’re going to end up. When you talk about the metaverse and all the things we’ve discussed today, it goes well beyond animation. It’s much more about simulation and the physics behind it, so you can make decisions and not just visually see what’s going on.

VentureBeat: Inga, how are industrial customers approaching the adoption of new standards, especially given the very stringent environments enterprises operate in?

Bibra: At Mercedes-Benz, we’re known for pushing standards in the industry, particularly in the German automotive association. We have large working groups on that, because it’s important to make sure standards are widely spread. That’s essential for manufacturing; it always has been, to be able to exchange data across many parties in a standardized format.

It’s also important to keep pushing that in a way that is use case-based and open. Interoperability with software and with data in that ecosystem is facilitated by knowing the same languages, just as when we talk to each other: we need a common language when we exchange data. It’s up to us as industrial manufacturing companies, but also to companies like Siemens, as you rightly said, to push that with us. We need software providers, data providers, and computing providers to push exactly that, whether it’s the JT standard or USD formats, while ensuring that it remains open. For me, that’s essential in order to be successful.

Ball: I would add that as long as we believe there are fundamental, indeed extraordinary, constraints on computation, among other resources like network infrastructure, that’s part and parcel of why those use case-specific standards are going to be required. An activity that is already pushing the limits of what a device, network, user, or company is capable of doesn’t have the latitude for the wrong standards, or excessive standards, for the use case being deployed.

That sometimes leads us to believe that, on top of those technical challenges, the fundamental rivalry in the industry is going to preclude standardization. And yet there’s a clear answer, especially in the global economy, and especially in trade and industry: an inevitable process of standardization, which need not be total. We’re talking in English, as Inga mentioned, and we’re exchanging data, but of course there are many different global languages of business; arguably that diversifies rather than concentrates as China becomes stronger. We have metric as well as Imperial, and many markets mix both. We have USD and the euro, and increasingly the pound. And then we have arguably the most compatible infrastructure today, the intermodal shipping container, which comes in three or four different sizes, again not universal across use cases, but pretty widely deployed.

That’s where we ultimately end up: a mixture of myriad standards, partial compatibility and incompatibility, conversion from time to time, but something that nevertheless scales globally and ends up being the most effective way for everyone to work together.

VentureBeat: Rev, how do you see the speed of the development of the metaverse, whether the industrial metaverse or the metaverse in general? How fast do you see all the different parties moving here?

Lebaredian: I was surprised to see such great demand from the industrial world for these types of technologies. I assumed that it would take a lot longer for companies like Mercedes to realize how valuable real-time simulation, metaverse technologies, and digital twins could be for them. It’s happening a lot faster than I thought. Over the next five years, we’re going to see a very rapid transformation among the most advanced companies, or really all companies. The ones that don’t do this are going to be at a huge disadvantage. The companies that figure out how to use simulation, interconnected 3D worlds, and the spatial overlay of the internet to their advantage are going to have superpowers. They’re going to blow past the companies that don’t. Those that have figured this out are pushing forward very quickly.

VentureBeat: Matthew, how do you see the speed of development and progress among the industrial applications versus the gaming and consumer applications of the metaverse that we hear a lot more about?

Ball: I’d reiterate that same point. This morning I was looking at the news, and we saw four of the largest companies on earth rushing into this space in all the categories we’ve mentioned. Many of them are nevertheless moving toward digital twins for design rather than operation, and without a clear understanding of how they’ll use them in 2025, 2026, and so forth. But there’s a renewed focus on making sure their pipelines and tech investments now are ready for what Rev mentioned, which is interoperation on a spatial or 3D internet. That’s most important.

If the answer was that the metaverse has arrived, to whatever extent we want to say that, and everyone then needed to figure out which tools they need, train on them, deploy them, and get good at them, we’d be talking about a much longer timeline. But much like Hollywood with virtual production, sectors such as retail, automotive, engineering, even the financial system have understood that the future is not just simulation. It’s real-time simulation in 3D with interconnection. They’re deploying these solutions now so that standards get established, and as the exchange of information goes mainstream, they’ll be ready for it.

That is probably one of the most important elements of this transition, and one the average observer might underestimate. The plumbing and the electrical grid are being laid right now, even if we’re still a bit away from actually connecting all of those individual facilities and disjointed simulations.

VentureBeat: I think a definite sign of progress would be to start seeing a lot of reuse, things that have been created for one application being used for another. I wonder if we’re starting to see any of that. Are there any tools, platforms, or standards in common between these different categories of industrial, gaming, and consumer?

Lebaredian: In the gaming and entertainment world, that’s rapidly starting to happen. There are many large libraries and marketplaces you can now go to for 3D assets that are more than just geometry, more than just triangles. You can go to Sketchfab, Shutterstock, TurboSquid, CGTrader, even the game engine marketplaces like the Unreal Engine Marketplace or the Unity Asset Store. All of them have either moved or are moving toward supporting USD as the primary means of interchange. With that, we’re starting to see these virtual worlds constructed from existing assets, primarily in the entertainment space. You don’t have to create everything from scratch every time you make something.

On the industrial side that hasn’t quite happened yet, because the representation of the types of data and assets you need there is far more complex. There’s little to no standardization yet for a lot of that complexity. But our expectation is that it will grow over time.

Ball: In the gaming space in particular, we’ll see this proliferate pretty quickly. In some regard we’re already seeing it. Years into this transition, we have a large consumer base that has spent, at this point, tens of billions of dollars on virtual goods. They don’t have an infinite appetite to keep repurchasing those goods. That’s going to require the rest of the industry to start building integrations with the providers who already have the assets and the file formats, and who manage the entitlements.

But if that’s true for gaming companies, for a Spider-Man outfit, you can imagine what’s going to happen for those who’ve invested truly billions into 3D scans of their environments, their objects, their infrastructure. As they look toward innovations, integrations, and productizing some of these for 3D space, that same business case is certainly going to exist.

Lebaredian: One interesting thing happened recently. Lowe’s, the home improvement retail giant here in the United States, put out to the public a large set of assets for the products they sell inside Lowe’s stores, published as USD. They have a lot more internally, but they put this out there for the research community to start playing with. We’re going to see a lot more of that, where all of the goods we buy come with a digital twin of the item. You’ll be able to go onto the internet and grab the virtual version to put in your virtual world, the same way you can place the real thing in your home or elsewhere.
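To make “grab the virtual version” concrete: in USD, a published asset is referenced into a new scene rather than copied or remodeled. Here is a minimal sketch using Pixar’s open-source USD Python bindings (the pxr module, installable as usd-core); the file paths and prim names are hypothetical.

```python
# Minimal USD composition sketch: assemble a scene by referencing an existing
# published asset instead of rebuilding it. Paths and names are hypothetical.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("my_virtual_world.usda")  # the composed scene
UsdGeom.Xform.Define(stage, "/World")

# Pull a published asset (e.g. from a retailer or marketplace) into the scene.
shelf = UsdGeom.Xform.Define(stage, "/World/Shelf")
shelf.GetPrim().GetReferences().AddReference("./assets/shelf_unit.usd")

# Position this instance locally; the shared source asset is never modified.
UsdGeom.XformCommonAPI(shelf.GetPrim()).SetTranslate(Gf.Vec3d(2.0, 0.0, 5.0))

stage.GetRootLayer().Save()
```

Because the asset is referenced rather than duplicated, an update to the published shelf_unit.usd flows into every scene that uses it, which is exactly the kind of reuse described above.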
