Industry Veteran Tim Bates Provides Pragmatic Tech Perspectives to Cap off 2023
Tim Bates reflects on 2023 - the hype and reality of AI, lessons from his time at GM and Lenovo, the future of cars with software, standards needed for the open metaverse, and why computing accessibility matters for creating opportunity.
Today on building the Open Metaverse.
AI didn't start at OpenAI. It didn't start in a corporation. It started in the open source world, and that means you, me, anybody can pick up open-source AI source code, compile it, and create an AI.
Welcome to Building the Open Metaverse, where technology experts discuss how the community is building the open metaverse together. Hosted by Patrick Cozzi and Marc Petit.
Hello and welcome, metaverse builders, dreamers, and pioneers. You are listening to Building the Open Metaverse season five, the podcast that is your portal into virtual worlds and spatial computing. My name is Marc Petit, and my co-host is Patrick Cozzi. Patrick, how are you?
Hey Marc. I'm recovering from the Cesium holiday party, so I'm doing great.
We'll be mindful of that, Patrick.
As you know, this podcast brings you the people and the projects that are at the leading edge of building the immersive internet of the future, the open and fair metaverse for all.
Today we are wrapping up season five, and we thought we'd look back at the year.
We have a special guest joining us on that mission. Tim Bates was a technical fellow at General Motors and CTO at Lenovo. He also writes a lot about AI and the metaverse on social media, and he's the perfect guest to look back on 2023.
Tim, welcome to the show.
Thank you. Thank you, Pat. Thank you, Marc, for having me today. It has been a crazy year, but let's go.
Yeah, let's talk about the crazy year that has been. But as you know, the tradition on this podcast is we ask you to start by describing your journey to the metaverse.
My journey actually started in the metaverse with me almost getting fired, believe it or not. It was at General Motors back in 2009, when I decided to switch over from security to the metaverse, or what we call digital twin. It was one of those things I had an idea about. We were spending lots of money on asset creation, moving things from CAD files to asset files for different media purposes within the company as well as externally for marketing.
That's when I noticed, working with a couple of people from Unreal, that the game industry was using the same exact type of tools, the same capabilities, or converting the same type of files. That's where me and Unreal started talking about how can we use the tools within General Motors, and that was the beginning of the metaverse for myself, looking at how we could tessellate graphics a lot faster, and that turned into a lot of the products that we see today.
Tim, we have to give credit where credit is due, because when I started at Epic in '15 and '16, you were already a champion and a believer that a game engine could be used to support all those workflows in the manufacturing space.
It was crazy to have that championing and that kind of support. And again, you believed it before me, so that's pretty incredible.
Oh, thank you. It's one of those things: a lot of people in my industry think of technology as always vertical, and I look at everything as horizontal. That makes it a lot easier for me to look at things from an innovation standpoint.
In some cases, there's a lot of innovation in other industries that can carry over. Most people don't realize that back when I started in technology, technology was probably 70% specific to each industry. As we move into 2023 and 2024 and further into the future, technology is really horizontal. There's not more than 10 to 20% uniqueness in each industry where the technology is truly distinct. Technology is agnostic across the board, and if people started looking at it that way, we'd probably see a heck of a lot more innovation.
After GM, you went to Lenovo as a CTO. When you look back at those 18 years at GM and the work you did at Lenovo, is there an accomplishment that you look back and you're the most proud of?
I think at GM, the biggest accomplishment I think I had there is going to be weird to most people. You look at a 100-year company like GM, and you look at all the companies around the world, they're very stodgy in the way that they run. They're usually again, back to that vertical thinking. We use vertical thinking in order to accelerate, and that's something that we've done in the past. One of the things I was able to do at GM was to create a team of immersive specialists, not only within the IT department but across the business environment itself.
From design, engineering, manufacturing, sales, and marketing, even down to the vehicle ecosystem of the next-gen autonomous vehicles, we were able to create a team that basically worked together as one and break down those basically proverbial silos that we've always had within GM and we were able to accelerate.
We noticed that when we hit COVID, we were able to use the same technology to speed up the production, development, and design of the Hummer as well as the LYRIQ. So I'd say that was probably my biggest accomplishment.
I thought at first when I started it was like, "Hey, it's going to be Unreal and enterprise," and that was a big accomplishment, I think. But a bigger accomplishment is when you can actually change the narrative on how companies or how people think.
Kudos to GM. We don't look at GM as an innovation company, but it looks like there was a lot going on there.
One of the things I love about GM is that there's a lot going on internally; they don't talk about a lot of stuff externally, but there is a lot going on. The one thing I wish and hope keeps moving is innovation as they move into the software-defined vehicle. It's a big paradigm shift for any automotive company, especially here in Michigan, where we've been focused on mobility, manufacturing, and labor.
Now those things are transforming into more of: we have to think with our brains, and not the muscles in our arms, in order to build these things. It's more software innovation that these companies have to transform over to. It's a big difference. Building a car is not the same thing as building software for individuals.
Tim, we wanted to first thank you for your service in the Marines. Then we wanted to ask about lessons learned in the Marines that you've applied to your career today.
I think one of the lessons I learned from the Marines early on is the model of adapting and overcoming. You can see that even within GM. GM has their own rules. They've been around for a long time, but in my mind, it's always about how you adapt to that organization. That's learning about the organization through understanding their business processes, their people, the organization itself, and then basically applying those new capabilities to it. In the Marine Corps, it's like when they tell us to go charge a hill, it's no questions asked. It's go get that hill.
As you're going to take that hill, you may be facing bullets, all types of things coming at you, but you still have to take that hill because it's a strategic piece that we have to take for that particular movement or within that skirmish.
In this case, it's the same being in real life. You have to be able to adapt, you have to be very agile in the way you approach things. I think that's one of the biggest lessons I learned from the Marine Corps is being agile and being able to adapt to different situations and organizations and even people and cultures for that standpoint.
On top of that service and your career at GM and at Lenovo, you are doing a lot of community educational activities. You're currently coaching as part of the US Cyber Games. Can you speak to your engagement in Michigan and what you're doing there and trying to achieve there?
One of the things I have in my own mind is that the educational system has failed the 80% across the globe, if that makes any sense. It's not just Michigan; it's that the 20% of us can go to school and fit the proverbial mold of what we've created over thousands of years of training each other. It doesn't fit for everyone else, and I call that the 80%. The 80% of the people that are out there, they work at McDonald's, they sweep floors, they're janitors. They're the ones that want to move up, make as much money as we make, and have a career, but they've been pigeonholed by this idea that technology is not for everybody.
We put this big wall up that says, "Oh, in order to be in technology, you have to be smart in math, you have to be smart in all these different things."
Over the last 10 or 20 years, and I'd say Epic is one of the biggest drivers of this, we've gotten the low-code, no-code type of development platform. We chose Unreal within GM because our engineers were not developers; they were engineers. To put a programming environment in front of them, it had to be something very simple, something they could learn, and very intuitive, and Blueprints are just that.
That low-code, no-code Blueprint environment is what I want to bring to the rest of the world: you don't have to be a super-duper smart technical whiz kid to write some of these applications. You just have to have an imagination and the tenacity to learn a very simple interface, and then you can accomplish a lot of different things.
We've seen that with simple games like Goat Simulator. This kid was from Ukraine, his parents weren't computer people, and he had no computer background. He learned from YouTube to create a game, and now it's a multimillion-dollar IP out there on multiple different platforms.
I believe there are a lot of people in the world today, especially in underserved communities, who don't understand, and don't even believe, that they can actually make these applications. That's my goal: to bring that mentality, to open up the doors, or the windows if you like, to the rest of the world, to understand, hey, you can be part of the technology industry if you really want to.
We aim towards having a creator economy, and I read a statistic that the creation platforms distributed 1.5 billion dollars in 2023 between Roblox, Fortnite, and a bunch of others. Do you really believe that's an opportunity for underserved communities?
We had Yat Siu on the show from Animoca, and he also said, "The American dream is alive in the East." He said, "Countries like Malaysia or in Southeast Asia see an equal opportunity to participate in the creator economy." Do you think that could be a vector?
I think it's a big vector. It's one of those things where, in the American economy, we are always thinking about manufacturing a component. What most people aren't thinking about is that those components can't do anything without software. So it really is the creator economy that's going to drive everything going forward. Hardware has become very agnostic.
Even when I was at Lenovo, and I was comparing Lenovo to HP, Dell, and all the other companies out there, I was like, "Wait a minute." Back in the '80s, it was really a competition between all these companies because they each had something separate or different or distinctive to offer.
But today in 2023, '24, a computer is a computer. Most of our stuff is in the cloud, and if we want to bring it down from the cloud, it's still focused on two things: a CPU and a GPU. Not really the aesthetics of what it looks like or anything else. IBM, Intel, AMD, and even Nvidia have done a great job pushing us forward, but now we've got all this tech and we haven't even utilized 70% of it yet. We still have a long way to go before we max out what we have today.
A colleague was asking me the other day about quantum computing, and they were like, "Well, when are we going to have quantum computing?" I'm like, "When we figure out a good use for it." We have not really taken what we have today and pushed it to the limit. We have GPUs from NVIDIA that are still not even at 60%, 70% capacity, and we're trying to push that with AI, but we're still not there yet.
Quantum computing is not going to happen until we find something that we can actually use it for. If I had a million dollars or a billion dollars, I wouldn't be investing in research on something that isn't going to be useful until we've made full use of what we already have today.
Interesting that you mention quantum computing; as somebody told me recently, "Do you realize that if quantum computing delivers on its promise, none of the encryption technology that we have holds, and suddenly everything is out in the open?"
Well, there are a lot of organizations right now, especially MIT and some teams over at Stanford, working on new encryption algorithms that are supposed to be quantum-safe or quantum-proof, but again, we won't know if they're quantum-proof until we actually get a quantum computer to test them against. It's the same problem we had 20 years ago with Diffie-Hellman and other types of encryption. It was like, "Hey, before we know it, we're going to be able to crack this." We still haven't cracked it. We've cracked some of it, but it still takes hours and sometimes weeks to do it.
And that's even with compute being so cheap; that's the real problem. Quantum computing, whenever we get it, being able to crack passwords is going to be interesting, but the thing is, how many people are going to actually be able to afford a quantum computer?
I think even when we say it's a risk, it's probably one of those risks where it's like, "Yeah, if you're an Elon Musk and you've got one in your basement, okay, we've got a problem." But if you're just one of us, we're not going to have a quantum computer in the basement to go cracking with. Neither are the hackers out there.
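To put rough numbers on why "quantum-proof" matters, here is a simplified back-of-the-envelope sketch, not a security analysis. It assumes the standard results: Grover's algorithm roughly halves the effective bit strength of a symmetric key, while Shor's algorithm breaks RSA, Diffie-Hellman, and elliptic-curve schemes outright on a large enough fault-tolerant machine. The function names are hypothetical.

```python
# Simplified, illustrative model of quantum impact on today's crypto.
# Assumption: Grover's search cuts a symmetric brute-force from ~2^n to ~2^(n/2),
# so AES-128 drops to ~64-bit effective security while AES-256 keeps ~128 bits.

def effective_symmetric_bits(key_bits: int, quantum: bool = False) -> int:
    """Effective security of an n-bit symmetric key against brute force."""
    return key_bits // 2 if quantum else key_bits

def asymmetric_survives_shor(scheme: str) -> bool:
    """RSA/ECC/DH fall to Shor's algorithm; lattice-based schemes are believed to hold."""
    return scheme.lower() not in {"rsa", "ecc", "dh", "ecdh"}

if __name__ == "__main__":
    for bits in (128, 256):
        print(f"AES-{bits}: ~{effective_symmetric_bits(bits, quantum=True)}-bit vs Grover")
    print("RSA-2048 survives Shor:", asymmetric_survives_shor("rsa"))
    print("Kyber (lattice) survives Shor:", asymmetric_survives_shor("kyber"))
```

This is also why the MIT and Stanford work mentioned above targets new math (for example, lattice problems) rather than bigger keys: key size alone does not save RSA against Shor.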
Tim, given all of the compute that's out there today and things like Unreal, 3D graphics are just brilliant today. Looking at your experience with GM's graphics tech, where do you think the biggest enhancements need to be to get true-to-life virtual environments and simulations?
One of the things we want to work on is probably integrating more of the senses we have as humans. Graphics is one thing, but sight isn't the only sense we have. We have more than our eyes: we have our nose and smell, we have taste, we have ears. All these things need to be emulated from a human-centric standpoint as we move forward into the metaverse, if we expect the metaverse to work, if that makes any sense.
For someone to be able to go inside a virtual environment, they're not going to want to just stand in one place. That's why you have organizations working on movement within a virtual environment, so we can move limitlessly in that environment.
I'd compare it back to Star Trek's holodeck: that is the perfect metaverse ecosystem or infrastructure we'd be looking for if we really wanted an environment we're going to work in and play in. That's what we're going to be looking for, and we're getting there, but right now we're not close enough.
At GM, you believed very early in real-time 3D, as we mentioned, and you made a comment about the Hummer. The Hummer EV being the first vehicle with a game engine in the dashboard, you had that vision, and you worked very hard to make it happen. It was not obvious that it would happen. You mentioned software-defined vehicles, so can you give us a little bit of a definition of what they are?
I think it's self-explanatory, but I'd like to know if you could summarize what the big challenges are for car manufacturers in going all in on software.
If you look at a software-defined vehicle, most people have heard some of the stories: oh, there's software that's going to enable a heated seat or a heated steering wheel, or you're going to be able to turn music on or off based on software.
The software-defined vehicle is really, to me, how we define the ecosystem within that vehicle. Software in the vehicle has always been there. We have the CAN bus that runs the vehicle, and that's pretty much a steady-state ecosystem. What's going to change in that vehicle is the experience we have when we're sitting in it. It's no different than, say, 20 years ago when we first got on an airplane. We didn't have TVs on the airplane, we didn't have movies. We had a seat and we rode. We might've listened to music.
And then, over time, we started getting more things that we could interact with on that plane. We got a TV screen. We can watch as we fly. We can watch the flight trip going from the East Coast to the West Coast. We can get some news, we can get CNN. All those types of things are now within a flight within airlines or some of them. I can only speak to Delta because that's the one I usually ride.
But when you get to a vehicle, we noticed some of those changes happening with Tesla. They put in a video game. Why would you want a video game in your car? Most people who don't have EVs are like, "That's useless." But it takes about 45 minutes to charge an EV. What do you do for 45 minutes while you're sitting at a charging station, which is not a gas station, in the middle of nowhere, plugged into something? And you're just like, "Hmm, okay, well, yeah, I want to play Cyberpunk." That's a great thing to do.
When you think about that going forward, it's like, okay, what do you do when you're on long trips?
What do you do when you're in a car that you don't even drive, or you don't own, and you're sitting with other people?
That comes back again to that ecosystem of "Do I play a game? Do I watch a movie? Do I get information on what's going on around me in the different cities that I'm passing?" Software-defined also means the vehicle is more connected with the world that I have to communicate with as I drive.
If you've ever driven from New York to Chicago, you go through four or five different tolls owned and run by three or four different states, and within those states, multiple different companies. That's a nightmare when you're like, "Oh, I've got to have $75 to $200 in cash in my pocket to pay all these tolls." Over time, we put these little pucks in cars.
Have you ever seen a traveling salesperson? They have five or six of those pucks that pay their tolls. Well, that ecosystem gets integrated into the cars. You've heard that GM created the GM protocol now, which they're trying to get the other industry players to play around in. That's one of those things you see coming together in software-defined vehicles: instead of having separate networks and protocols, one protocol that connects all these different ecosystems together.
Think about when you're here in the Detroit metro area, say going from Detroit to Southfield to Royal Oak. There are three different parking meter companies I have to deal with.
That's another thing that gets tied into the software-defined vehicle at a later date, bringing it to more of a holistic or horizontal environment that everybody can work in, if that makes any sense.
I think the definition of a software-defined vehicle is a vehicle that integrates with your life, that integrates with the way you want to operate, not just in the car, but even in your home. Look at Alexa, which has crossed from the house into our cars over the last 10 years. Again, it's creating an ecosystem that's coherent with the user, if you ask me.
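For a sense of how low-level the CAN bus mentioned above is compared to the connected ecosystems being layered on top, here is a hedged sketch of packing and unpacking a classic CAN 2.0 frame. It assumes the 16-byte Linux SocketCAN `struct can_frame` wire layout (32-bit arbitration ID, 1-byte data length, 3 pad bytes, 8 data bytes); the example ID is hypothetical, not a real GM signal.

```python
# Hedged sketch: a classic CAN 2.0 frame in the 16-byte Linux SocketCAN layout:
#   u32 can_id, u8 dlc (payload length 0-8), 3 pad bytes, 8 data bytes.
import struct

FRAME_FMT = "<IB3x8s"  # little-endian id, length, padding, fixed 8-byte data field

def build_can_frame(can_id: int, payload: bytes) -> bytes:
    """Pack an arbitration ID and up-to-8-byte payload into the frame layout."""
    assert len(payload) <= 8, "classic CAN carries at most 8 data bytes"
    return struct.pack(FRAME_FMT, can_id, len(payload), payload.ljust(8, b"\x00"))

def parse_can_frame(raw: bytes) -> tuple[int, bytes]:
    """Return (arbitration_id, payload) from a 16-byte frame."""
    can_id, dlc, data = struct.unpack(FRAME_FMT, raw)
    return can_id & 0x1FFFFFFF, data[:dlc]  # mask flag bits, trim to declared length

if __name__ == "__main__":
    frame = build_can_frame(0x244, b"\x12\x34")  # 0x244: hypothetical example ID
    print(parse_can_frame(frame))
```

Every signal on the bus is a fixed, tiny frame like this, which is why the "experience" layer of a software-defined vehicle has to live in a richer software stack on top of it rather than in the CAN bus itself.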
Tim, earlier, you mentioned digital twins, and we want to talk a little bit about the industrial metaverse.
Looking at GM, how far have they gone deploying digital twins and what impact do you think it's having?
They've gone a long way. We started with the plants and some of our digital twinning of the manufacturing process before I left. It had a significant impact.
One I can talk about that happened way before I left was the C8. You guys were working with us on the configurator back then, and we had a great launch of the configurator for the C8. But what we learned in that same scenario was that when we started building the cars months later, we were unable to build the convertible at the manufacturing plant, so we had to stop the plant to the tune of millions in losses.
What we learned from that was that in the past, we were pre-assembling these vehicles, and just a percentage of them, because a car normally doesn't change over a five-year period. Once a vehicle is locked, there's usually one standard build, and then there are little add-ons to it, but not a lot.
With the Corvette, that scenario happened where a lot of things aren't changing; the body is changing, but the core isn't. But some things did change around how the convertible was assembled, the convertible top, and in that case, when we did our testing, we weren't able to test that because we figured, oh, it's like the old convertible. We're not going to have a problem.
Well, we did have a problem, and that went back to the idea: instead of having eight workstations testing 10% to 20% of a vehicle, why not test 100% of that vehicle in virtual reality? This was one of the projects we were working on in manufacturing before I left: to digitally build every vehicle in the metaverse before we actually got to the plant, and even go through the manufacturing processes, so we didn't run into those problems.
And from that standpoint, that saves millions, if that makes any sense. Avoiding downtime from problems that happen down the path also helps us with the iterative design process: if we find a problem right there, we can actually change it. That's one of the things digital twins actually helped with way before assembly happens.
It happens way back when we do the design. A designer is able to sketch out something in Gravity Sketch or some other tool and then throw it down the path to the engineering side to say, "Hey, does this work?" Most people don't realize that the designers are not the engineers when it comes to our cars. Designers are people who are very creative, and some of the cars they come up with are unbelievable; I wish I could drive them.
But then, when it comes to real engineering and the specs, and all the standards and rules that we have to make sure that we adhere to in order to put a car on the road or put a human or somebody behind the wheel, there's a whole bunch of other things that have to come into place and that's where basically we learn with digital twin how to not make those mistakes. Those heartbreaking conversations are had way before we get to the physical creation of anything because it can be done digitally.
Great. So you're familiar with Nvidia and their push to turn USD into a standard for digital twins and make it a foundational representation in the manufacturing space.
Do you think there is appetite for this and can they be successful?
It all depends on the automotive industry, or, I'd say, the engineering industry; I'll just call it that, because you've got Kia, you have Siemens, all these companies that have been around for practically 100 years, and they haven't standardized on those tools. Now, can they? Probably. We were trying to use Adobe's format before we even got to USD. Adobe's format is probably more sophisticated and more geared toward the data and the information that we need in a file type.
USD is light, but they've made a lot of different modifications to bring it up to par. For USD to make a big difference in, say, the automotive industry, it's going to have to be accepted by all those companies first. Even from that standpoint, the aerospace industry, Boeing, uses the same thing that GM uses. GM uses the same thing that Disney uses. Disney uses the same thing that some of the movie organizations use.
It's just a matter of whether they can get it there, whether USD can carry all the elements we need to use it within the different formats. It used to not have the animation capabilities, but I believe they were able to pull in some of the animation information. From a GM standpoint, some of the information is more parametric-type data, and USD didn't have the capability of carrying that for an automotive engineer.
If they can solve that problem with an attachment of some kind, then it probably will make it. Once they get all parties on board and all parties working on what all parties need, then we'll get there. It was driving me nuts when the metaverse or the immersive industry decided they wanted to go with glTF, and that was it.
And they were like, "Oh, everyone's going to use glTF." And I'm like, "Where?" It's great for a headset and virtual reality, but it's useless when it comes to real engineering-type work. That industry doesn't understand that because they're only worried about one thing: very lightweight graphics that are going to whiz and dazzle everybody in a virtual environment. And it's like, no, that's not what Boeing wants. That's not what GM or any engineering company wants.
They need realistic metrics and data that go along with the assets themselves. It's not just about pretty pictures. I think that's another thing we have to worry about, but I think USD has gotten a lot closer than all the other formats we have out there.
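One way the parametric-data gap described above could be bridged, sketched under an assumption: USD allows namespaced custom attributes on a prim, so engineering values can ride alongside geometry in a plain `.usda` layer. The `eng:` attribute names below are hypothetical, not an official GM or OpenUSD schema, and the sketch just generates the layer text without needing the pxr library.

```python
# Hedged sketch: emit a minimal .usda layer carrying hypothetical engineering
# ("parametric") values as namespaced custom attributes next to the prim that
# would normally hold the transform/geometry data.

def usda_layer(prim_name: str, tolerance_mm: float, material_spec: str) -> str:
    """Return the text of a one-prim .usda layer with custom eng: attributes."""
    return (
        "#usda 1.0\n\n"
        f'def Xform "{prim_name}"\n'
        "{\n"
        f"    custom double eng:toleranceMm = {tolerance_mm}\n"
        f'    custom string eng:materialSpec = "{material_spec}"\n'
        "}\n"
    )

if __name__ == "__main__":
    # Both the prim name and the material spec string are made-up examples.
    print(usda_layer("ConvertibleTopHinge", 0.05, "example-steel-spec"))
```

Custom attributes travel with the file through composition, which is roughly the "attachment" idea: the glTF-style lightweight geometry stays intact while the engineering metadata rides in the same layer.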
Tim, maybe extending this conversation here, this interoperability is just so key to the open metaverse and we have these siloed virtual worlds. What other standards do you think we're going to need to connect these siloed virtual worlds?
Well, I like to bring up OpenXR. I think they were on the right path. Since I left GM, I've been on the periphery of immersive for the last couple of years, and it doesn't sound like they're moving forward anymore. I'm not sure exactly what happened there, but they had a great idea on how to bring together all of the things we talked about earlier as far as what makes up an immersive environment. It's sound, it's visual, it's spatial, it's touch, it's taste, it's everything.
Even though it may not all be available today, it's everything we're going to want in the future. The more organizations and people start looking at how we integrate that together as a team, the closer we'll get there. If we don't, interoperability is going to continue to be exactly what it is today.
I'll pick on one company that just popped up recently, Humane. You guys have probably heard of them. They created a little button that goes on your lapel, and it's supposed to be the AI differentiator for us, but it's proverbially another walled garden. You put on this pin, and I've got to go get another mobile phone number. Who wants multiple phone numbers? On top of that, I can't use my iTunes or my Prime Music or my Google Music; I've got to buy another music service to play music through their system.
That's not interoperability; that's, oh, I'm trying to pull you away from Apple, I'm trying to pull you away from Google or whatever else. I'm sorry, but you guys, just like me, probably have hundreds or thousands invested in Apple or Google or Amazon as your main ecosystem, one of those three.
They've been around for so long. It's like, "Why do I need another music environment?" This company comes along with a great piece of tech, and I'd love to have it, but I'm like, "Well, I have to think about whether I want that other phone, or whether I need another disparate walled garden in the group of walled gardens I'm already trying to manage."
I think in the metaverse, if we can get to that point with our tools, it will be successful. The thing is, too many people are trying to grab for it all, not including everybody else. That's one of the things I like about Unreal when it comes to the open-source part. It gives people on the outside the ability to say, "You know what? You didn't think about this, but I'm going to make it anyway and I'm going to put it and make it integrate with what you have so everybody else can use it."
That's interoperability in my mind: when you have a system that's open enough for people to integrate into, not one where it's, "Well, I'll let this one in, I won't let that one in." That's what you have with Apple and Google, and sometimes even with Amazon. There's a cost or barrier to entry to integrate with them, and that scares a lot of founders or inventors off in the opposite direction.
I think OpenXR is a good callout because it has had a very, very positive impact on the industry and has been supported by all the big manufacturers. It's extensible to support the innovation you mentioned in terms of sensors, and it allows people to enter the market and access content.
It's a really good case where it's an API, and standardization can help foster innovation, foster competition, and create better results. We are less worried about buying a VR headset today because we know we can access all the content.
That was a problem early on. It was like, "Which headset do I get? Oh, there's content here. There's no content there. Buy that one and I'm on a desert island." That's why I'm hoping it continues to push forward. Steam upset me with SteamVR; it was supposed to be our open standard when we first started, and then they decided, "Oh, we're going to close the garden, and we're going to try the same trick." And you see what happened. It didn't work that well, and that's where we ended up with OpenXR.
I think if everybody got onto the OpenXR mentality and the framework they've created, and kept pushing it, we'd get there, but we're not going to get there with everybody going in multiple different directions.
I know everybody wants to make money, but at the end of the day, we really want to think about the problems and the things we broke when we created Web 2 and the internet back in the early '90s; it was the same thing.
We didn't have enough knowledge back then not to do what we did, but now we have enough knowledge not to do it again, and we have to use that knowledge from the past to say, "You know what? OpenXR, or something like OpenXR, is going to prevent us from creating the big mess we're walking in today."
You've got to come work with us at the Metaverse Standards Forum, because that's really our thesis: as we prepare the next version of the internet, there's a disruption we can use to rebuild the foundation on something much more open, from a data perspective with 3D asset interoperability, but also from an identity perspective. That's almost our daily fight. Well, it's 2,800 companies, so it's a big army, but we're trying to fight for that.
Switching gears a little bit, because I read everything you write about AI; I think you have that interesting, pragmatic perspective. It's not unfair to say that 2023 was the year of AI.
Tell us, sorting out the hype from the reality is crucial for everybody. What should we look at, as technologists or apprentice technologists, to judge all of those tools, those generative AI solutions that we see ahead of us?
I look at AI like immersive was a few years ago. It's for everyone, but it's going to be distinctively different for everyone. You've seen some of the things that are happening today. OpenAI started with, oh, we have this large database that’s inferenced by anybody and everybody around the world.
Well, we learned from that particular point that it's not doable from a bias standpoint; it's just too much information. What OpenAI wasn't thinking about, if you ask me, is that across the globe we're all different: different cultures, different laws, different views, different everything. When you put all of that into one big bucket, it's, I'd say, a nuclear bomb waiting for the right person or the wrong person to query the data and get the wrong answer out. And that's what we're seeing: a lot of garbage in, garbage out.
For AI to continue to work, you see what Google is doing. Google just recently, and they've been doing this all along, they're creating multiple different versions of AI.
You have an enterprise AI that more or less encompasses a lot of data from a lot of different places but is very distinct. But then you have a mid-size, which is more of what I call enterprise corporate data, something scoped to a single company.
Say Unreal wanted to have an AI. They wouldn't go to OpenAI; they would go to a mid-size model, build it on the collective knowledge within Unreal, and that would help them. Then you have the individual users like us. Some people like Siri, which is not really an AI; it's just a bot. Some people like Alexa, and they want to see those things expand to what they see GPT doing today.
And so that's the personal AI that's going to be there, and Google is working on that: their newest version of Gemini runs on a cell phone, which is most people's compute today. We can't expect everybody to connect to the cloud and say, "Hey, I'm going to talk to the cloud." They're going to want to talk to their personal AI, to something that's personally driven towards them, their beliefs, their knowledge, the things that they're working on.
For AI to evolve, we have to start looking at it from a human-centric standpoint, not just a data-centric standpoint. AI will be great as long as it's human knowledge that is inferenced into it, not just regular knowledge. You've probably seen that with some of the AIs that are out there today: you get a lot of book knowledge.
This again goes back to what I was saying about the 80% versus the 20%. 20% of us can read a book, understand the knowledge, pass the tests, and go forward. The other 80% will look at it like, "I read it, I get bored, I don't like it." That's where AI will be able to help.
I know it's scary for a lot of us who, like me, have spent 30, 40 years gaining all this knowledge. And for somebody who went to college for four years, they get out and say, "What do you mean everything I learned in four years is now open for everybody?" I'm like, "Well, it was going to get that way sooner or later."
That barrier to entry is gone for anybody across the board, especially that 80%, if they want to engage. I'm not saying people are going to become rocket scientists, but now, if I want to be a rocket scientist, I have a system I can engage with that can teach me about rocket science. And it doesn't get frustrated because I ask the question 25,000 times, or because it has to explain it to me like a kindergartner. That's where AI is a differentiator for pretty much everybody on this planet, like I said, if they learn to engage with it.
In 2023, we saw all this talk about AI bias, so we have to ask: how confident are you in the current mitigation strategies to ensure neutrality?
I'm not confident in them, because most of them are just policies. I actually did a Tech Insight on it and broke down every policy. To me, what you have is large corporations and Washington creating rules to push people like us, the smaller players in the industry, out of the AI business. There is no way a small or medium-sized company gets through the bureaucracy and red tape they want you to go through in order to have an AI system online.
The other thing is they're not paying attention to where AI actually started from. AI didn't start at OpenAI. It didn't start in a corporation. It started in the open world, the open-source world. And that means you, me, or anybody can pick up open-source AI source code, compile it, and create an AI.
Policy doesn't prevent that from happening. It doesn't prevent the bias within that data from happening. That's the problem, and the only way we're going to solve it is if we rethink how we implement AI. I think we have the tools and the capabilities we can use; they're just not being talked about right now, or they may not be understood.
I wrote a paper on blockchain, or open ledgers, for that matter: how to use Hashgraph, not blockchain but Hashgraph, in order to secure transactions within an AI algorithm. We have the TPM, the Trusted Platform Module, which sits alongside the computer's CPU and GPU, and it's got a serial number in there. GPUs and CPUs do have serial numbers.
Why not tie Hashgraph to those serial numbers and then to the AI core? Whenever the AI core thinks or takes in information, you attach a cert to it. That cert is now immutable and traceable throughout the entire AI ecosystem.
It sounds simple and very easy to do, but I know it's not. It's a whole paradigm shift on what we'd have to do, but if we continue to try to think that policies are going to prevent bias, we're just basically hoping for something that's never going to happen.
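The cert idea Tim describes here can be illustrated in miniature. The following is a minimal sketch, not his actual design: it stands in a hard-coded string for the hardware serial a real system would read from the TPM or CPU, and a simple hash chain for a real Hashgraph ledger. Each AI "thought" gets an HMAC cert keyed by the device identity, so any later tampering is detectable:

```python
import hashlib
import hmac
import json

# Stand-in for a hardware identity; a real system would read the
# TPM / CPU / GPU serial rather than hard-code it.
DEVICE_SERIAL = b"CPU-0000-DEMO"

def sign_output(ledger, prompt, response):
    """Append a signed, hash-chained record for one AI output."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    record = {"prompt": prompt, "response": response, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    # The "cert": an HMAC keyed by the device serial, so the record
    # is traceable to the machine that produced it.
    record["cert"] = hmac.new(DEVICE_SERIAL, payload, hashlib.sha256).hexdigest()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    ledger.append(record)
    return record

def verify(ledger):
    """Recompute every hash and cert; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for rec in ledger:
        body = {"prompt": rec["prompt"], "response": rec["response"], "prev": rec["prev"]}
        payload = json.dumps(body, sort_keys=True).encode()
        if rec["prev"] != prev_hash:
            return False
        if rec["cert"] != hmac.new(DEVICE_SERIAL, payload, hashlib.sha256).hexdigest():
            return False
        if rec["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev_hash = rec["hash"]
    return True

ledger = []
sign_output(ledger, "What is a root canal?", "A dental procedure...")
sign_output(ledger, "Explain TPMs.", "A hardware security chip...")
print(verify(ledger))          # True on an untampered chain
ledger[0]["response"] = "edited"
print(verify(ledger))          # False once any record is altered
```

A real deployment would use asymmetric keys sealed in the TPM rather than an HMAC, but the traceability property is the same: every output carries a cert bound to the hardware that produced it.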
Now, OpenAI has done some things that I think may help with that, and that's what they call GPTs. You have the larger OpenAI model that has a lot of data in it, and then, a couple of weeks ago, they introduced these smaller GPTs into which individuals can inference data that's non-biased, if they choose to.
Of course, you can put biased data in if you want to, but if you choose not to, others can use that particular GPT to learn and do things without running into biased information.
But that's after someone has curated that model, or you know the person who created it. I created a model for Unreal. Off of all my best practices, I created an Unreal GPT that goes over all of the lessons learned I've had through the years, but it's only technical knowledge. It's black and white. There's no Tim in there talking about Unity or some other company that doesn't do a great job. It's just straight Unreal, what we call best practices.
Now, if somebody wanted to use that and plug it into their AI, they would have a non-biased view of Unreal best practices. It wouldn't be 100%, because maybe I didn't get the last .X version in there, but it would be an unbiased version. I think that's one of the things OpenAI has done a great job at with these new GPTs.
I also wanted to mention intellectual property and get your point of view on that, because I am teaching DALL-E to my kid, and if you ask it, "Draw me a picture of a kid playing the Nintendo," it's going to tell you, "No, I can't do that. I'm not allowed to." But if you say, "Draw me a picture of a kid playing with a portable gaming console," what you get is a Nintendo Switch, and a highly recognizable Nintendo Switch, to the point that it's unusable because it's so obviously one. So, in your dealings with AI, have you come across solutions that are really protective of IP, or is this a problem we're going to have forever?
It's a paper I haven't even released yet, believe it or not. I ran into the same problems teaching courses on AI, and DALL-E was one of the ones I looked at. I'm like, "Oh, this is great, but it's still inferenced with data that I don't understand." That's why I switched over to Stable Diffusion.
With Stable Diffusion, most people go to a website. No, you can actually download Stable Diffusion and run it on your own hardware. Once you do that, and I took it to the next step, I downloaded Stable Diffusion but only inference my own data: my pictures, my wife's, and then art I downloaded, I don't know if you know the art museum in Washington and a couple of other places, all public domain art, which is already public. I inferenced a lot of that data in, made that into a model, and that's how I create my graphics now.
Anything that I create is based on public domain. That, in my mind, is legally copy-protectable and owned by me. Now again, if I go and use Midjourney or something in the cloud, you have no control over that.
I think what's going to happen is, as these individuals get used to using AI, they'll know they can use it on a local machine or a private model, and then you'll get to the point where people understand that this is mine, I own it, I created it, versus something I just pulled out of Midjourney that's got Tupac and everybody else in it, and I've got to go pay some royalties, too.
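One practical piece of the workflow Tim describes, keeping the ownership claim defensible, is recording exactly which source files went into the model. This is a hedged sketch, not his actual pipeline: a hypothetical `build_manifest` helper that hashes every training image into a provenance manifest you could keep alongside a locally fine-tuned Stable Diffusion model:

```python
import hashlib
import json
from pathlib import Path

def build_manifest(image_dir, source="public domain"):
    """Hash every training image so an ownership claim is traceable
    back to the exact files that were inferenced into the model."""
    manifest = []
    for path in sorted(Path(image_dir).glob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest.append({"file": path.name, "sha256": digest, "source": source})
    return manifest

# Usage sketch: point it at the folder of downloaded public-domain art
# before fine-tuning, and keep the JSON next to the model weights.
if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as d:
        (Path(d) / "art_001.jpg").write_bytes(b"fake image bytes")
        print(json.dumps(build_manifest(d), indent=2))
```

Because SHA-256 digests are stable, anyone can later re-hash a disputed image and check it against the manifest, which is the same traceability argument as the cert idea, applied to training data instead of outputs.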
I also wanted to discuss your point of view on blockchain and Web3, but we're running out of time.
Can you tell us where we can find your articles and where you publish your content?
Yeah, most of my articles are on LinkedIn. I've been putting them there for a while, but I'm trying to figure out a new way to do it. I just came back from a meeting this weekend in California, and my partners want me to NFT my articles. They're like, "Well, why don't you NFT them?" We have a new organization we're working on that's protecting audio and music for musicians, and one of the things we're working on is also protecting documents.
I might end up doing NFTs, but right now, LinkedIn is the best way to get them. I post one or two a week, depending on what's going on, and if it's an emergency or something I feel people should know about, an instant one comes out that same day. But mostly LinkedIn right now. That'll change over time, but LinkedIn is the best way to catch me.
Tim, looking back on 2023, what do you think you'll remember most?
The transition of compute and the industry itself. I've been around for a long time, and I've seen compute change from one paradigm to another. To be here today, where we're moving to natural language processing, which from my point of view is the new version of the computer, is one of the biggest things I thought I would never see in my lifetime.
What I mean by that is, today we use a keyboard, a mouse, and an interface. When Alexa and Siri and Google Home came out years ago, the vision was being able to talk to a computer and get what you need: I need directions here, and it'll tell you and display that. We got it halfway.
But a system that could actually compute and analyze things for us? It couldn't do that. Natural language processing opened the door for everyone, because it speaks English, it speaks Spanish, it speaks any language it has been programmed to speak and understand. I think that in itself is a whole new change in compute. It's the next generation of compute, if you ask me.
And we're not even at the next gen; we're just at the cusp of getting there. To be able to see somebody with no knowledge of computers sit down, talk to a computer, and have it give them information. It's like, "Hey, I want to have a root canal." You usually go to a dentist to get that information, but AI right now can explain a root canal to you. It can't perform it, but it can walk you through the entire procedure, all the steps and all the different drugs and everything else that a dentist would be using on you.
I think that in itself comforts people, because we all want to know what we don't know when we're about to do something that's important to us, whether it's health-related, safety-related, or anything personal. That paradigm shift, where anybody can use a computer, is probably, like I said, the best thing that happened this year, if you ask me.
Thank you. And now, the question that you expected: who gets your shout-out?
My shout-out goes back to my high school. Mumford High School is where I went. I graduated back in 1988 and went off to college and everything else. But the thing is, most people in my demographic and in my city think they can't do computing technology.
I want to show people that, hey, I went that road from high school, grew up in the city, grew up in the ghetto, whatever you want to call it, and I made it all the way to this point. I think everybody else can do the same thing. We have to open up our eyes, our hearts, and our minds to be able to take in the knowledge. It's not the world against anybody; it's all of us against everything else.
They say Detroit versus everybody; I think it's the world versus everything else, if that makes any sense. If we all come together, we can actually get there. Shout-out to Detroit and Mumford High. The biggest thing is I want to make sure people in the community know there are jobs and careers other than manufacturing or retail that they can do, especially in the tech world.
We've got to start seeing the blockchain companies, the AI companies, the immersive companies, the real tech companies that we've seen for the last 10, 15 years. Yet here in Michigan, it's, "Oh, that's a dream or a cartoon." And it's not a dream or a cartoon. It's real tech.
Well, thank you very much for that, and thank you for joining us as we get to the time of year when we look back on the year that was. It's great how you always manage to cut through the hype with thoughtful analysis grounded in your decades of practice in the industry.
We're lucky to have you to close this season, and we want to thank you very, very much.
We wish you very, very happy holidays.
And, of course, thank you to our ever-growing audience. You can reach us with feedback on our website, buildingtheopenmetaverse.org, and subscribe on our LinkedIn page, our YouTube channel, and most podcast platforms.
It's been a fun season. Patrick, are we going to be back for season six?
Yeah, I can't wait for that.
Next year, 2024, we will be back for Building the Open Metaverse. Thank you, Tim Bates. Thank you, Patrick, and thank you, everybody. Have a fantastic end of the year.