Building the Open Metaverse

Unpacking the AI Policy and Governance Landscape with Legal Expert Liz Rothman

Lawyer Liz Rothman discusses AI ethics, IP, and Web3. She covers responsible AI development, the need for transparency, IP in generative content, removing bias from training data, and global coordination. Rothman sees promise in the EU's AI Act but worries that the overall regulatory pace lags behind the relentless pace of technological development.

Guests

Elizabeth Rothman
Attorney & Advisor


Announcer:

Today on Building the Open Metaverse.

Liz Rothman:

We can no longer really trust what we're seeing with our own eyes. Developing these technologies and regulating them in ways that will allow us to maintain trust in society, and not let some of these societal structures collapse. And maintaining personal autonomy, identity, and privacy during this time of change is a really, really big challenge.

Announcer:

Welcome to Building the Open Metaverse, where technology experts discuss how the community is building the open metaverse together. Hosted by Patrick Cozzi and Marc Petit.

Marc Petit:

Hello, metaverse builders, dreamers, and pioneers. You are listening to Building the Open Metaverse season five. This podcast is your portal into open virtual worlds and spatial computing.

Welcome back. My name is Marc Petit, and this is my co-host, Patrick Cozzi.

Patrick Cozzi:

Hey, Marc, always a pleasure to be here.

Marc Petit:

As you know, we bring you the people and the projects that are at the leading edge of building the immersive internet of the future, the open and interoperable metaverse for all.

Patrick Cozzi:

And today, we have a special guest joining us on that mission. Liz Rothman is a lawyer specializing in regulation, ethics, and safety for AI, Web3, and XR. She's also deeply involved in the community through the Metaverse Standards Forum and xrsi.org.

Marc Petit:

Elizabeth, welcome to the show.

Liz Rothman:

Thank you. Thank you for having me.

Marc Petit:

As you know, in this show, we like to hear from our guests about their journey to the metaverse in their own words. So, please.

Liz Rothman:

My journey to the metaverse, I guess, has been one of starting with the technology, and as an attorney, I moved into the emergent technology space about five or six years ago, initially in the blockchain space and then moving into AI and then XR.

Where this all got really interesting was the convergence of all of those technologies, and that's what brought me to the real challenges that are faced in these digital environments, whether they're more traditional XR environments or the digital future that we are all planning for in a metaverse. What brought me here was noticing that where all of this comes together and converges is where the real societal issues will be in the future, where we really need a forward-looking perspective on regulation, and where the excitement is as well, in how these spaces will evolve and develop.

Patrick Cozzi:

Let's dive right into the thick of it and talk about responsible AI. How would you summarize the key challenges and debates happening right now around AI ethics, safety, and governance?

Liz Rothman:

There are a lot of them, is how I would summarize it. Every major company, every major government, international organizations, factions within governments, special planning committees: there are a million different facets of people trying to figure out or work on these issues right now. What I would do is categorize them into three general areas: looking at the issues that arise with AI as an amplifier of human bias and human intention, looking at how AI changes societal structures and dynamics, and then looking at the existential challenges that advanced artificial intelligence raises for our worldview as humans.

From the first point, looking at AI as an amplifier, we're really getting into issues of fairness and transparency and privacy concerns when we're looking at the bias issues within algorithms and trying to figure out how we will move forward with developing this technology in a way that will make it equitable and accessible for all in our society.

The changing structural dynamics are obviously a much bigger issue as we look at the human labor workforce. All of these various committees around the world are trying to figure out what will happen if AI starts replacing human jobs, and what the regulatory impacts of that will be. Will we try to stop that from happening? Will we just facilitate it in certain ways? How will we actually value these systems moving forward, and value the humans that have traditionally been ingrained in them?

These are all going along with these broader regulatory efforts that are happening and looking into this future that we're all trying to hypothesize and see where we're moving. I think that those are very, very broad issues.

Then there is the third one: the existential crisis that we will all face around our place in the world, what human creativity is, and even consciousness. All of these questions that we're asking ourselves right now, and what our place is in this future.

Are we stewards of this technology, or are we guardians of the technology? Where is the human's place in the future of this, and how are we going to look at these longer-term risks and balances of power that will shift and change as we move into this new phase?

Marc Petit:

In your view, what's the most or what are the most pressing issues that we need to address right now?

Liz Rothman:

The privacy, safety, and trust issues that are coming up and arising at this time are incredibly important. If we can no longer really trust what we're seeing with our own eyes, that's a really big problem. 

We have to develop these technologies, and regulate them, in ways that will allow us to maintain trust in society and not let some of these societal structures collapse, which is possible if no one really trusts what they can see or hear anymore. Those are the really big issues, I think; maintaining personal autonomy, identity, and privacy during this time of change is a really, really big challenge.

Over many decades, we've seen those kinds of privacy erode as more tracking technologies and devices collect so much data on us. But that becomes even more important in this age of AI, when the collection, aggregation, and use of that data can happen so much faster.

Marc Petit:

As a lawyer, do you draw a distinction between, say, a deepfake and image compositing? I mean, we've had Photoshop for 25 years, so we've been able to create fake images forever. It's just that, until now, only a handful of people could do it.

Does the fact that anybody can do it with a generative AI engine change the perspective?

Liz Rothman:

I think the point that the technology to do this has been there for a long time is extremely valid, and we'll come back to it when we talk about intellectual property issues in a little bit. It's a very valid criticism of current stances on the copyrightability of a lot of AI-generated output.

When an image was photoshopped, you could spend a lot of time making it impossible to tell that it had been photoshopped. But usually, if you got down to it, you could figure out that it had been. With the creation of not only images but high-fidelity video and other content, I think it becomes a little bit trickier of a question.

Whether that kind of content can be used for political purposes or for harassment, those are real questions that regulation needs to face now, whereas in the past the questions weren't quite as pressing as they're becoming right now.

Patrick Cozzi:

So Liz, you were previously talking about trust, and one way to build trust is with transparency. You've written a lot about requirements for training data sets and about providing more transparency into generative AI models, and there are growing calls for more transparency.

I wanted to ask, in your view, what's the level and type of transparency that you think is feasible and beneficial?

Liz Rothman:

I've written a couple of different things on this topic from different angles. One of them is that we need a shared language among lawyers, developers, and the other people who discuss these topics, so conversations don't get sticky, with no one really sure what is being disclosed and what's not.

I think there are certain baselines that can be put into place about transparency, but one of the really important things is honesty. A lot of times, you'll ask about some of these larger foundation models or neural networks, and the answer you receive back is, oh, they're black boxes; we're not sure of some of the training data that has gone into them, and now it cannot be removed, or we can't take it out. You end up in a stalemate between the people trying to figure out how to regulate the model and the people who developed it, because the options seem to be either destroy the model or continue on as you are.

If we had a little bit more honesty about what is actually a black box and what could be determined, that would help. Obviously, there are some things that are not determinable and cannot be removed from these models, but that's a baseline start: a common language from which we can move forward and actually have a conversation that isn't impeded by commercial interests or the egos of the people who developed some of these things and don't want to share the information.

The other side of that is that I worked with the World Ethical Data Forum and many stakeholders and developers on an open standard for responsible AI called Me-We-It. It's a standard designed to clarify the process of building AI by exposing the steps that go into developing it responsibly: data selection and ingestion, creation and selection of algorithms and models, and then managing, testing, and tagging that data to try to eliminate some of the biases and issues that creep in when improper training data is ingested from the start.
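To make those steps concrete, here is a minimal sketch in Python of what a machine-readable provenance record and a pre-training audit pass could look like. The field names and the license policy are invented for illustration; they are not taken from the Me-We-It standard itself.

```python
# Illustrative only: a hypothetical provenance record for one item of
# training data, plus an audit step that flags items before training.
from dataclasses import dataclass, field


@dataclass
class DatasetRecord:
    source_url: str            # where the data was collected from
    license: str               # license under which it may be used
    collected_at: str          # ISO-8601 collection date
    consent_documented: bool   # whether usage consent was recorded
    content_tags: list[str] = field(default_factory=list)


def audit(records: list[DatasetRecord]) -> list[DatasetRecord]:
    """Return records that fail a basic provenance check, so they can
    be reviewed or excluded before training ever starts."""
    allowed_licenses = {"cc0", "cc-by", "public-domain"}  # example policy
    return [
        r for r in records
        if not r.consent_documented or r.license.lower() not in allowed_licenses
    ]


if __name__ == "__main__":
    corpus = [
        DatasetRecord("https://example.org/a", "CC0", "2023-01-04", True),
        DatasetRecord("https://example.org/b", "unknown", "2023-02-11", False),
    ]
    for bad in audit(corpus):
        print("needs review:", bad.source_url, bad.license)
```

The point of a record like this is that it is auditable after the fact: a regulator or review board can ask for the lineage of any training item instead of being told the model is a black box.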

Marc Petit:

As you know, commercial entities compete and tend to hide their proprietary technology. So how do we balance the need for transparency and the need for people to have competitive separation and hide some stuff away from other people?

Liz Rothman:

Where this is different is that we are going to need a lot of cooperation from all stakeholders and developers in this space, and we don't always have that. There will always be protection of algorithms, protection of proprietary information and commercial interests, and that's valid to some extent. But when the models are also widely available and can be used for other purposes and fine-tuned for other reasons, we really need some balance there.

Perhaps that comes from regulation; perhaps it comes from internal self-governance of these technologies or cooperation among tech companies to share that certain information. We'll see as everything continues to evolve.

Marc Petit:

Hopefully, we start to see the strong emergence of open-source software. Do you see open-source software, or even mandates that certain layers be open-sourced, as a possible way to reconcile all of this?

Liz Rothman:

Another thing that really hasn't happened in the past is having a mandate that at least part of these algorithms be open source, or that the training data be exposed. It's an interesting solution to some of the problems that we have; whether we'll get agreement on it across the board, and compliance with it, is an open question.

Patrick Cozzi:

What kind of frameworks or standards do you think we need for handling IP rights and ownership for AI-generated content, whether it's 3D models, 2D art, code, text, and so on?

Liz Rothman:

There is no harmonization of laws across the globe, meaning that the laws are not all the same in every jurisdiction. In fact, in the UK, you can protect some AI-generated output, and in the United States, you cannot, and in many other countries around the world currently, you cannot, especially if it's autonomously generated or even generated with prompts if there isn't a human author that is behind the work.

The line between a human author and a non-human author is blurry and getting more blurry by the day, I think, as most of us would agree that are in this space.

So the question of 3D assets, I know, is very important to both of you, and it's really interesting because if there is an asset that is created or generated by AI, it would not be protectable unless there's significant human authorship behind the work. It's a good example of how this current framework that we have is not sustainable.

I've been advocating for some time for just a disclosure requirement, essentially, just disclose that the works are AI-generated and figure out the framework on the other end of “is it going to decrease the number of years of protectability?” What are the parameters? But it shouldn't be this question of if it is protectable; it should be how we're going to do this because I think we can all see the evolution from photoshopping, CGI, and now into synthesizing content with generative AI. It all tracks, it makes sense, and we can see where this is going.

If the framework is that you just cannot protect anything that is AI-generated with copyright protection in the US, that will create inevitable issues as that becomes every day more and more likely that everybody is generating this output.

Marc Petit:

What would it mean if we could, in fact, create IP protection on AI-generated content? Can you tell us what would be the implication of that decision?

Liz Rothman:

If you cannot protect it, then companies that have stakes in making money off of assets that are created by AI will put a human in the middle there somewhere, right? You're going to make sure that it's human-generated so that you can protect that output.

There's some inefficiency that will be created by that. Essentially, we should be incentivizing the use of AI technology and figuring out how to integrate it in a sensible way, rather than putting up roadblocks to protecting proprietary output in ways that would normally have been protectable if it had been created by a traditional human author.

Marc Petit:

What's the impact of being able to put a copyright on something that's autonomously generated? Would that change the economics around content creation?

Liz Rothman:

If the company has been expecting to be able to protect all of their output, and most companies traditionally do protect a lot of their intellectual property as it's created, it will potentially change the incentivization to use AI to generate output.

You could have other reasons to incentivize the use of AI. And right now, part of that reason is just that it's new and interesting, and we're continuing to play with what's possible with this technology. But as we go into the future, I think it's going to be important to protect intellectual property in the same way that it's been important in the past.

Marc Petit:

I was wondering if the copyright issues, in fact, will affect whether jobs are protected or not.

Liz Rothman:

That's a question that I might throw back to you guys a little bit because I think it's interesting to consider all of the possibilities there; the use of AI to generate content is certainly changing things, right? It's very, very rapidly going to change things. There are many different ways to look at it, right? 

There's one where there are scarce resources, and we are competing for those resources. If we're competing with AI, then that means that jobs will be taken away by these technologies.

Then there's the viewpoint that these resources are not scarce and that there's more possibility and creativity that can come from using these generative AI tools to expand the ability of humans to create.

Certainly, that will change the structures, again, of specific jobs. If you look at the writers strike, there was concern over AI-generated content coming in, and then very specific rules were hammered out in that contract around when it can and cannot be used and, when it is used, how writing credit is designated in that situation, and then the fee structures that go around that. So certainly, there are ways to regulate around it.

I think what we need to do is figure out what kind of future we are looking at and what kind we want here: are we trying to minimize the use of this technology, or are we trying to integrate it in ways that will maximize creative benefit all around?

Obviously, there are good and bad things on both sides of that and risks that are involved on both sides of it.

Marc Petit:

It could be that the situation is different for text, like the writers, than it is for 3D content and some very, very complex visual effects work, where right now there is scarcity. I mean, there's only a handful of people in the world who can actually do this work.

As technologists, and particularly as optimistic technologists, we see the acceleration provided by AI as a good thing because it will allow the democratization of 3D. It will enable the metaverse because it will enable a lot more 3D production. In the sense that it's an enabler, it doesn't take much away, because there are not a lot of people who can actually produce good-looking interactive 3D experiences today. But I can see why the writers of text have a different perspective.

It's not a real simple conversation, probably.

Liz Rothman:

I think you're seeing that very directly, in a way that maybe the writers aren't right now: you're seeing the expansion of the need for 3D content generation, and the possibility that if more people can develop in this way, it's going to level the playing field for everybody and create more interesting things in the space. That is a harder leap, I think, for a more traditional creative profession like writing. It's a harder leap to trust that that's going to happen, and who knows if it actually will, but it's a bigger question, and you can see why it's a little bit more of an existential fear on that side.

Patrick Cozzi:

We wanted to talk a bit about data usage and bias. I mean, my understanding is we can train models, and then models can be tuned.

When it comes to biased, illegal, or unethical data that's been used to train an AI model, I mean, how feasible is it to then fully remove its influence and impacts?

Liz Rothman:

From my understanding, from all of my conversations with developers, it's not that feasible once the model has been trained.

I've heard from different people that there are processes in the works, and there are post-processing filters that can help pull out some of this data, especially when artists were asking for their works and their style to be removed from training sets. But my understanding is that it is never fully removed in many of these models.

If we get to a point where it is possible to track and remove data from the weights, that will be an interesting time. But I don't think that we're there right now, which makes it a very interesting question from a legal standpoint because what are the damages?

If somebody suffers damages from one of these models, usually what would happen is you would have an injunction to stop the use of the model, which in many cases is not possible because the models have been released widely or leaked or have been released open source. At that point, how can you really quantify damages there from a legal standpoint? If illegal data has been used, then we're going to have to figure out a model to deal with that. First, we have to know what data was used, which is also a problem with a lot of these bigger foundation models.

Marc Petit:

What kind of a regulatory framework or audit process do we need to get responsible data sourcing?

Liz Rothman:

A transparent data lineage, where we can see what data has been used in the training, is the starting point.

There have also been discussions around FDA-style review boards that would look at training data and see whether that's a viable option, oversight boards on what kind of data is being used, and bias-detection mechanisms, which are more technological solutions to the problem.

I think those are the main focal points for that kind of innovation.
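As one concrete example of what a bias-detection mechanism can mean in practice, here is a minimal sketch in Python of a demographic-parity check: it measures whether a model's rate of positive predictions differs across groups. This is just one of many fairness metrics, and the data here is invented for illustration.

```python
# Illustrative bias check: compare a model's positive-prediction rate
# across groups (a simple "demographic parity" test).
from collections import defaultdict


def positive_rates(predictions: list[int], groups: list[str]) -> dict[str, float]:
    """Positive-prediction rate per group; predictions are 0 or 1."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}


def parity_gap(rates: dict[str, float]) -> float:
    """Largest difference in positive rates between any two groups."""
    return max(rates.values()) - min(rates.values())


if __name__ == "__main__":
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    grps = ["a", "a", "a", "b", "b", "b", "b", "b"]
    rates = positive_rates(preds, grps)
    print(rates, "gap:", round(parity_gap(rates), 3))
```

An audit process could run a check like this on every model release and flag any gap above an agreed threshold for human review.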

Patrick Cozzi:

Do you think, then, that these frameworks and standards should be developed at a national level, or should they be global? I mean, we don't think the metaverse is going to have any frontiers.

Liz Rothman:

Big questions on the intellectual property front, on kind of everything, right?

There are also a lot of international organizations, international policing organizations, and NGOs; everybody's trying to figure out what are we going to do with these spaces because they are not within nation-state borders. It creates this problem of “Are we going to have an international treaty on AI regulation?”

What's happening right now is that some nations or some jurisdictions are just going forward with their own AI regulation. If you take what happened with GDPR, you could possibly end up in a situation where everybody just follows suit on what the European Union is doing with AI regulation right now.

Ideally, we would have a global structure to regulate technologies that are globally impactful. It is difficult to get giant nation-states such as the US and China to sign on to international agreements like that.

All of the myriad regulatory efforts that are taking place right now are well-meaning, I think. Everybody is trying to figure out what we're going to do. Are we trying to regulate this the way the IAEA regulates nuclear technology? What is the framework that we're going to use for AI regulation? I think it's all very well-meaning, but what matters at the end of the day is how it comes together, how we can work together, and whether we can reach a cohesive agreement that will be followed by all the stakeholders in the industry as well.

Open questions, but yes, certainly, global regulation would be for the best, given the global nature of the problem.

Marc Petit:

What gives you hope and what gives you concern when you look at the way we're tackling those problems right now?

Liz Rothman:

I think the thing that gives me the most hope is the international conversation and dialogue that's happening right now. It's a conversation that wasn't happening for a long time and now is, and that's incredibly important. As I said before, where we take that conversation, and how we bring it all together as these technologies converge into digital spaces, will be crucial to how effective this regulation will be, especially going into the future.

The international conversations and dialogue that are happening make me hopeful. On the other hand, for some of these things it's very difficult to put the cat back in the bag; that's already happened, and we've moved past the point where we can actually go back on some of the things that have occurred.

As technologies develop even further at a faster pace, if regulatory efforts don't catch up to them, then we will end up in a situation where it's very difficult to clean up the mess that has occurred.

Patrick Cozzi:

Let's talk about xrsi.org. This is a global nonprofit standards-developing organization focused on promoting privacy, security, and ethics in immersive environments like VR and AR, to help build safe virtual environments. And you're an advisor.

Tell us about your involvement.

Liz Rothman:

Yeah, I've been involved with XRSI for a few years. It is run by Kavya Pearlman, who was head of cybersecurity for Second Life many, many years ago and has very interesting perspectives.

We are very multifaceted. We have a medical XR division, we have a child safety project, and we work on general privacy and safety issues, advising governments and organizations all over the world. A lot of people are now looking at these issues, which had not been the case in the past, and it's really nice to see them much more front and center.

What we offer that many cannot is that we're a nonprofit, looking at these issues from a privacy, safety, and inclusion perspective that is not self-interested or profit-motivated.

Marc Petit:

You're also a member of the board alongside Mr. Cozzi here at the Metaverse Standards Forum. 

Congratulations.

Liz Rothman:

Thank you.

Marc Petit:

Why did you pick this organization to get involved with?

Liz Rothman:

Well, about a year or so ago, the Metaverse Standards Forum started to come together, and I am representing XRSI on the board for the Metaverse Standards Forum, but it is a really unique effort to bring together so much expertise from so many different areas. Whenever I'm talking to anybody from the forum, I learn something about their perspective and their viewpoint on this.

The word standards, in general, has so many different meanings across so many different disciplines. I think it is so crucial and important that these things can come together in a setting where, again, there is no profit-driven motive. It's coming together to try to build these standards and make a more efficient and safer metaverse in the future.

Marc Petit:

Patrick and I have been deeply involved in this organization, too. Having a place to discuss, where organizations and vendors can exchange ideas and align, matters, because the metaverse is going to force a convergence of many, many fields, and therefore many standards will have to converge.

And because it's in the interest of every one of us, the stakes are very high around its implications for safety, privacy, and all those things.

We think it's an important effort.

Liz Rothman:

I think it can't be overstated how much something like that is needed at this time, where different disciplines can come together, discuss these issues, and really come up with plans to move forward, and how much the convergence of these technologies will change our digital futures as well.

Patrick Cozzi:

We had a conversation with Neil Trevett and crew on season one, episode two of this podcast that helped spark the forum, which is pretty cool history. Beyond everything we've talked about today and all that you're doing, you're also a certified blockchain solution architect.

Could you tell us a bit about how you're envisioning Web3 and its future?

Liz Rothman:

My viewpoint right now on Web3 integration into immersive platforms and digital environments is that the crucial part is identity, validation, and provenance of content.

Where did this come from? Who is this person I'm interacting with? That will be incredibly important for financial transactions in digital environments, and it'll be important for personal transactions too.

My personal standpoint right now on the integration of Web3 into a future metaverse and into digital environments is that that's where the crucial use of cryptography and of blockchains can come in, especially at these early stages of development.
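To give a sense of what content provenance means mechanically, here is a minimal sketch in Python: a creator signs the hash of an asset, and anyone holding the creator's public key can verify it later. It uses the third-party cryptography package; a real system might anchor the signed hash on a blockchain or in a C2PA-style manifest, but the core idea is the same.

```python
# Illustrative content-provenance flow: sign the hash of an asset so
# its origin can be verified later. Requires: pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Creator side: hash the asset bytes and sign the digest.
asset = b"...3D model or image bytes..."
digest = hashlib.sha256(asset).digest()

private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(digest)
public_key = private_key.public_key()  # published alongside the asset

# Consumer side: recompute the hash and verify the signature against
# the creator's public key.
try:
    public_key.verify(signature, hashlib.sha256(asset).digest())
    print("provenance verified")
except InvalidSignature:
    print("asset or signature has been tampered with")
```

Whether the signed record then lives on a blockchain or in a conventional registry is a separate design question.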

Marc Petit:

Where do you think Web3 goes from here?

Liz Rothman:

The word metaverse is even a problem, right? Because of the hype bubble that happened over the last couple of years in the NFT space and the crypto space. To me, it's funny that, on one hand, people say the metaverse is dead; on the other hand, every Fortune 500 company, every government, every international organization has a metaverse plan right now and is meeting to try to figure out how this is all going to unfold. It's a fascinating point: when you hear the word metaverse, it automatically brings you to this place of, oh, NFTs and those virtual games that never actually panned out. But in reality, what we're seeing is more of a move toward, whether you want to call it an industrial metaverse or not, developing the technologies of the future.

And as I said, we get distracted by words. The word even changed again with the Meta announcements this past week, where now everyone's like, "Oh, there's a metaverse again; this is a thing."

What I think about NFTs and the use of Web3 in the future is that it'll be different than it was in the last couple of years. There are hype bubbles and speculation; there's a hype bubble right now, in my opinion, in funding for AI companies, especially generative AI. There are a lot of hype cycles and bubbles that happen, but what we need to keep our eye on is what's actually developing and where the best use cases are.

There is certainly a good use case for blockchain technology in digital environments for these kinds of privacy, safety, and trust concerns that are arising.

Marc Petit:

I'm personally very interested in the notion of smart contracts to implement new business models, traceability of content, secondary payments for artists, and all of this. Whether that aspect of it needs to be implemented on a blockchain is a very different conversation.

Blockchain has come with a lot of issues: performance, technology, acceptance, regulation. But the notion of abstracting identity from platforms and the notion of smart contracts are, I think, very, very valid.

We can't wait to have them in a way that's practical and works with our regulations. I don't know if that is Web3, but I think that's what we would like to have.

Liz Rothman:

The right conversation to have is: where are these technologies useful? In the blockchain bubble, it was blockchain for everything, right? We're going to use this for every technology, every purpose. And it's not fit for every purpose; it's not even fit for most purposes. But when you're talking about something as crucial as identity, maybe the slowness of a blockchain doesn't matter as much, or you're looking more for the security there.

When you're talking about royalty structures for artists, that can probably be handled in other ways. The push to give royalties to artists in the way that NFTs did maybe moved that conversation along in a way that was helpful in bringing about some change in that structure.

But I don't know that it needs to be handled on a blockchain. I think that there are good use cases for all of these technologies. They're not always the ones that catch the biggest hype cycle, right? I think that we're seeing that right now with generative AI for sure.

All the things that are getting funded right now are not necessarily going to be the main use cases for generative AI in the future.

Patrick Cozzi:

So Liz, our traditional final question, which is also one of our favorites: the shout-out. Is there a person or organization you'd like to give a shout-out to?

Liz Rothman:

Well, always, XRSI, as a nonprofit, can use your donations to continue all of the work that we do in digital environments.

Marc Petit:

That wraps up our discussion on AI ethics and governance. I'm very appreciative that Liz Rothman joined us to share her expertise today.

Liz, you offered us an admirable blend of legal insight and ethical consideration, and I think it will help our audience tremendously. Your perspective represents the kind of nuanced approach that we need.

On one end, encouraging innovation while also proactively addressing the risks. We need good minds like yours; they have an important role to play in developing balanced policies ahead of the markets, not just the technology. We're very happy that we could bring you on this podcast today.

Patrick and I believe that this conversation is essential as we collectively shape the future of AI and of the next version of the internet in our society.

Thank you very much for being here with us and lending us your knowledge.

Liz Rothman:

Thank you so much for having me. It's always a pleasure to see you guys.

Marc Petit:

And, of course, a huge thank you to our ever-growing audience.

You can reach us for feedback on our website, buildingtheopenmetaverse.org, as well as subscribe to our LinkedIn page and our YouTube channel, and you can find us on all podcast platforms.

Thank you, Liz, very much again, and thank you, Patrick. We'll see you soon for another episode.