Today, I have the privilege of introducing our next panel on policy, legal, and regulatory issues in emerging AI autonomous software agent technology.
03:33:08.000 –> 03:33:18.000
Unfortunately, the original moderator for this panel, Professor Margot Kaminski, was unable to make it today. So, lucky for us,
03:33:18.000 –> 03:33:23.000
We get to hear a second panel moderated by Colorado law professor Harry Surden.
03:33:23.000 –> 03:33:33.000
Thank you. Yes. Joining him in today’s discussion are David Levine, Paul Lin, and Callie Schroeder.
03:33:33.000 –> 03:33:39.000
David is the Associate Dean of Faculty Development and Law Professor at Elon University.
03:33:39.000 –> 03:33:47.000
His scholarship on the implementation and regulation of new technologies has been recognized by policymakers around the world.
03:33:47.000 –> 03:34:02.000
Paul is the co-founder and CEO of Returned.com. He has over two decades of entrepreneurial and executive experience which he leverages with extensive knowledge in data science, AI, and business intelligence.
03:34:02.000 –> 03:34:12.000
Callie is senior counsel and AI and Human Rights Program Lead at the Electronic Privacy Information Center, where she focuses on international privacy developments.
03:34:12.000 –> 03:34:18.000
As part of her work, Callie has published articles on privacy risks within AI frameworks.
03:34:18.000 –> 03:34:25.000
Please join me in welcoming our fabulous panel.
03:34:25.000 –> 03:34:35.000
Well, thank you so much for coming to this panel. So in this panel, we’re going to look at the law and governance challenges and policy challenges presented by autonomous agents.
03:34:35.000 –> 03:34:56.000
Newly enabled, primarily software, agents that can go out and do things on their own. And in any law school, there's a wide range of reactions to any sort of new technology. One is, of course, pass a law, in which case you have to figure out, well, what kind of law? A regulation?
03:34:56.000 –> 03:35:01.000
A regulatory agency? Do you do it ahead of time, try and prevent things?
03:35:01.000 –> 03:35:18.000
Or do you do things after the fact and let liability law operate, let people sue? Another option, which a lot of people are in favor of, is do nothing, right? Maybe the law has little or nothing to say about things, and we kind of
03:35:18.000 –> 03:35:31.000
let things play out. There are also other options, including what is sometimes called soft law, where you pass regulations that have industry players develop standards
03:35:31.000 –> 03:35:49.000
or best practices, require them to do that, but the law doesn't really get involved. There are other mechanisms. You can do indirect regulation through economics or taxing. You can do things involving encouraging or changing social norms.
03:35:49.000 –> 03:36:04.000
So there's a wide variety of things on the table, not the least of which is either do nothing now and wait and see, or do nothing ever, or do something at a certain point in time. And then, of course, there are
03:36:04.000 –> 03:36:10.000
the different levels: the federal level, the state level, the local level, or some combination in between.
03:36:10.000 –> 03:36:25.000
So today we're going to see the degree to which the new capabilities of autonomous systems, which I demonstrated in my keynote, present novel challenges, either now or in the future, that have to be responded to, or should be usefully responded to,
03:36:25.000 –> 03:36:30.000
By the law. So I’m going to throw out the first question.
03:36:30.000 –> 03:36:44.000
To Callie here. Callie has been an all-star; this is Callie's third appearance this week at a Silicon Flatirons event. So I'm really grateful to have Callie here.
03:36:44.000 –> 03:36:51.000
So Callie, we were talking about this yesterday in a healthcare panel.
03:36:51.000 –> 03:37:16.000
There are a lot of existing laws out there. Do we necessarily need new laws, given the challenges posed by autonomous systems? And this is sometimes referred to inside law schools as the law of the horse question, reflecting a famous debate between Harvard law professor Lawrence Lessig
03:37:16.000 –> 03:37:30.000
And Judge Frank Easterbrook at the dawn of the internet, where they debated whether we needed internet specific laws or whether existing laws like contract law and tort law were adequate to govern the challenges of the internet.
03:37:30.000 –> 03:37:52.000
What an open-ended question. Yeah, so for a little bit of background context, I work at EPIC, and EPIC is an advocacy organization. So I come at all of this from a perspective of looking primarily at human rights, human impacts, privacy issues. I have a perspective when I'm looking at this stuff.
03:37:52.000 –> 03:38:03.000
Before I worked at Epic, I also did compliance work where I was advising clients and companies on how they could make sure that they were like documenting all they needed to and doing what they needed to do to meet.
03:38:03.000 –> 03:38:17.000
Local and state and federal and international laws. So I’ve engaged a lot with the spectrum of hard law, soft law, guidelines, internal regulations, codes of practice, that sort of thing quite a lot.
03:38:17.000 –> 03:38:34.000
My perspective on AI, AI itself and then autonomous systems as well, is that in many cases we're not necessarily seeing new risks and harms. It's that existing risks and harms are escalated in scope and scale.
03:38:34.000 –> 03:38:47.000
And autonomous systems essentially do that exact same thing as a form of AI. So if AI was an escalation of the scope and scale of issues and risks you have to look at, autonomous systems are an additional escalation within that.
03:38:47.000 –> 03:38:58.000
It’s possible that these things could be addressed at least in part through existing laws, because there are laws that apply to AI already. There’s consumer protection laws, there’s product fitness laws.
03:38:58.000 –> 03:39:08.000
There's laws around accuracy in presenting what your product can do, and in some industries there are necessary audits you have to do, things like that.
03:39:08.000 –> 03:39:15.000
Obviously, there’s contract law, there’s antitrust, there’s all these other areas that can play into it.
03:39:15.000 –> 03:39:24.000
A problem we have with that is that it becomes almost an eating-the-elephant type situation, where each law may be able to tackle a part of the problem.
03:39:24.000 –> 03:39:37.000
But that doesn’t mean that the whole thing gets addressed. It doesn’t mean that every issue can be adequately covered. And frankly, even where we have existing laws, those laws only matter when they’re being adequately enforced.
03:39:37.000 –> 03:39:46.000
There's a real lack of enforcement on AI for a lot of reasons. One is that AI companies, not all of them, but many of them, especially the largest ones,
03:39:46.000 –> 03:39:55.000
Have enormous resources so they can put a lot of time and money and lawyers into arguing why their product doesn’t fall under a specific regulation.
03:39:55.000 –> 03:40:02.000
And maybe they win, maybe they lose, but they will burn out a lot of resources of whatever body’s trying to enforce against them.
03:40:02.000 –> 03:40:13.000
The other issue is sometimes it’s really hard for the bodies that are in charge of enforcement to make sure that they’re adequately understanding a new technology.
03:40:13.000 –> 03:40:19.000
So there can be some intimidation factor there where in trying to apply the law, in trying to enforce the law.
03:40:19.000 –> 03:40:27.000
They can be kind of bamboozled by technological language and lose confidence in their legal arguments because of that.
03:40:27.000 –> 03:40:34.000
There’s a few ways to address it. We’ve talked and looked at several proposals for AI regulations.
03:40:34.000 –> 03:40:45.000
Some at the federal level, a lot at the state level. We talked a lot to state level legislatures and staff about what they’re proposing and whether it’s going to be effective and how it can be improved.
03:40:45.000 –> 03:41:00.000
I also work in the international space, so I track a lot of that. And as Harry said, there are several countries that openly have said that their strategy with AI is to wait and see what happens. There are some that proactively want to address things, but sometimes,
03:41:00.000 –> 03:41:10.000
because the technology changes so quickly, the laws themselves get rewritten at the last minute. So they're maybe not as strong or as good as they could be.
03:41:10.000 –> 03:41:17.000
I'm not specifically calling out the EU AI Act, but I am a little bit.
03:41:17.000 –> 03:41:31.000
There’s a lot of different approaches and frankly, it’s so early days in the regulatory space about AI specific laws that it’s hard to see how effective those are yet. And it’s hard to see what’s causing problems and what’s not.
03:41:31.000 –> 03:41:40.000
I do push very strongly that what needs to happen right now is focusing on enforcing the laws we have and then we can look at where the gaps are.
03:41:40.000 –> 03:41:46.000
And how we can improve protections and make sure that things are meeting the standards they need to meet.
03:41:46.000 –> 03:41:59.000
It's kind of open ground right now on how that's going to move forward. Yeah, great point. And one thing I want to emphasize that you brought up: in law school, we always talk about the law on the books versus the law on the ground.
03:41:59.000 –> 03:42:14.000
And there are hundreds of thousands, maybe millions, of laws and regulations, only a tiny subset of which are ever known about or enforced. So adding another law on the books doesn't necessarily change anything in the real world if it's not being enforced.
03:42:14.000 –> 03:42:31.000
Paul or Dave, do you want to weigh in here? Go ahead. Yeah, so just a brief background. Return.com is a mobile application. I started the company, I’m the CEO, co-founder with both my wife and a couple other executives.
03:42:31.000 –> 03:42:38.000
We started the company knowing that, for me personally and my wife, we have what we call our box of shame.
03:42:38.000 –> 03:42:44.000
And it's all the items that we intend to return but just never get around to taking back to Target or Costco, all the items that are basically now sitting in the garage.
03:42:44.000 –> 03:42:51.000
So we wanted, I personally wanted to create an AI agent to really solve the problem for myself.
03:42:51.000 –> 03:42:58.000
And that AI agent for me is basically taking action on behalf of me to either get the shipping label, call customer service.
03:42:58.000 –> 03:43:03.000
Or even then dispatch a driver to grab the item to take it back to target.
03:43:03.000 –> 03:43:09.000
So really, at Return.com, we are foundationally AI-based.
03:43:09.000 –> 03:43:19.000
Personally, I have done auto coding. I've already used approximately 2 billion tokens off of Claude 3.5
03:43:19.000 –> 03:43:25.000
to really get a comprehension of what is now possible and what is not.
03:43:25.000 –> 03:43:33.000
My concern with the law is that, with this creativity, there's really now not a limit on what could be done.
03:43:33.000 –> 03:43:44.000
And with the nuances of what could be done or what is created, I fear that in that creation of something, the law can't keep up.
03:43:44.000 –> 03:43:52.000
As an example, one of the things that we created back in October was the first time that we released it. We call it our AI Digital Twin.
03:43:52.000 –> 03:44:04.000
And so I have already synthesized, and I don't know if it's appropriate for me to play some audio clip, but I have synthesized my own voice, which in that October-November timeframe called J.Crew
03:44:04.000 –> 03:44:13.000
To act on my own behalf to then grab the shipping label, effectively calling customer service and taking that. Feel free to share that if you want.
03:44:13.000 –> 03:44:22.000
So I’m going to play only parts of it. But… How are you doing today?
03:44:22.000 –> 03:44:28.000
Hi, good afternoon. Happy holidays. This is Andrina from J.Crew. How are you doing today?
03:44:28.000 –> 03:44:36.000
Thank you, Andrina. I have order number 2510541380. Could you please look that up for me?
03:44:36.000 –> 03:44:43.000
Okay, can you let me know once more? I do apologize about that. Okay, I’m going to fast forward just one second.
03:44:43.000 –> 03:44:50.000
That was ordered on November 20th. Could you please provide the return instructions and any necessary return labels?
03:44:50.000 –> 03:44:58.000
Okay, sure. I can quickly do a return label. I’m going to stop there. So basically, it’s acting on my behalf.
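To make the call flow concrete, here is a minimal, hypothetical sketch (in Python) of the kind of control loop such a digital-twin agent might run. The helpers below are stubs standing in for cloned-voice text-to-speech, speech-to-text, and a language model; they are assumptions for illustration, not Return.com's actual implementation.

```python
# Hypothetical sketch of a "digital twin" return-call loop.
# The three helpers are stand-ins (stubs) for cloned-voice TTS,
# speech-to-text, and an LLM; they are NOT real APIs.

def speak(text: str) -> None:               # stub: would be TTS + telephony
    print(f"[agent] {text}")

def listen() -> str:                         # stub: would be STT on call audio
    return input("[rep] ")

def next_utterance(history: list[dict], order_number: str) -> str:
    # Stub: a real agent would ask an LLM for the next line given the history.
    if len(history) <= 1:
        return (f"Hi, I have order number {order_number}. "
                "Could you please send me the return label?")
    return "Thank you. Could you email the label to the address on file?"

def handle_return_call(order_number: str, max_turns: int = 5) -> bool:
    """Drive the call until the rep agrees to send a return label."""
    history = [{"role": "system", "content": f"Return order {order_number}."}]
    for _ in range(max_turns):
        line = next_utterance(history, order_number)
        speak(line)
        history.append({"role": "agent", "content": line})
        reply = listen()
        history.append({"role": "rep", "content": reply})
        if "return label" in reply.lower():
            return True                      # goal reached
    return False                             # give up and escalate to a human
```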
03:44:58.000 –> 03:45:11.000
And so I don't know if there are laws to regulate an AI digital twin, meaning a representation of me. But really, I created, we created, the company to represent consumers so that they don't have to call customer service.
03:45:11.000 –> 03:45:23.000
I don't know what law can regulate that. All right, so I love this, because your example highlights, first of all, a very vivid example of what is possible today that was not possible three years ago.
03:45:23.000 –> 03:45:33.000
This is phenomenal. It also highlights a possible benefit that these autonomous systems might have, acting on our behalf.
03:45:33.000 –> 03:45:54.000
And we can also imagine that same technology being misused. So it presents a risk, if your digital twin goes out and does a bunch of other things you didn't authorize. And then finally, you probably want some clarity as a business person, one way or the other, about what the government
03:45:54.000 –> 03:46:09.000
thinks you should be doing, right? So I was very fortunate here, really at the end of the year, to meet our top congressman on the bipartisan AI committee.
03:46:09.000 –> 03:46:32.000
I actually played the audio clip and he didn't know what to do. And also he didn't know how we created the technology to enable that outcome. So talking to the top congressman who, of course, sets our laws, he was, you know, speechless. Yeah. So, another thing I should say about our law of the horse debate:
03:46:32.000 –> 03:46:38.000
the background of the law of the horse debate was, well, the internet, so said Judge Easterbrook
03:46:38.000 –> 03:46:45.000
back in 1995, doesn't really present anything new that we didn't see in the physical world. And Professor Lessig said, yes,
03:46:45.000 –> 03:46:58.000
it does, actually. There were genuinely new things; for instance, a child can disguise their age on the internet, and they can't really do that in the physical world. So there are novel situations that arise. And here, we have a totally novel situation
03:46:58.000 –> 03:47:24.000
presented by these autonomous agents, where somebody can perfectly mimic your voice and also carry out autonomous conversations on your behalf. So I think this does suggest something new has happened. Dave, did you want to weigh in? Yeah, thanks, Eric. First, thank you for the invitation. This conference has been run exceptionally well, and I know how hard it is to put it together. So compliments to your staff.
03:47:24.000 –> 03:47:40.000
And I'm going to be pirating some of your ideas at Elon. So thank you for that as well. Lastly, I'll say my wife went to Boulder, and she is not with me because we have children who impede fun in a variety of ways, including going on trips. My children are wonderful.
03:47:40.000 –> 03:48:05.000
But they've stopped this trip. So it's good to be here. Yeah, I mean, the way I'd respond to it, aside from just dominating the entire discussion with a theoretical discussion of the law of the horse, which I'm tempted to do but which I'm not going to do, and I have at least one of my former students in the room here today who has made the decision, yes, good to see you, Dan, so you might remember this from internet law, but I'm going to move past that for the moment,
03:48:05.000 –> 03:48:31.000
and suggest that I agree with Callie, right? I mean, there's a lot of regulation that isn't enforced. There's also a lot of rush to regulate. And without question, there are existential risks associated with the technology that people like Geoffrey Hinton, who are minor players in the space, have said are real concerns. So logic would dictate that policymakers
03:48:31.000 –> 03:48:58.000
at the national and international level would want to take those seriously. However, I think they have done that to the exclusion of what I consider to be low-hanging fruit. And I'll use Paul's example as a good one. I mean, this is a wonderful use of the technology in the sense that, as your first panel pointed out, there's a lot of mundane kind of step-by-step practices that human beings may not want to do once they have learned
03:48:58.000 –> 03:49:13.000
to do those things, and I will double down on what my colleague said on the first panel about law students, and indeed human beings, needing to be able to do those things in the first instance. But then I wonder, and this is an innocuous example of it, Paul, but I wonder whether notice
03:49:13.000 –> 03:49:23.000
should play a role right there. In other words, we've seen this from Google on down: when individuals or entities are not aware that they are dealing with a bot, they
03:49:23.000 –> 03:49:29.000
find it offensive. Now, it might be that as sophistication develops, the marketplace,
03:49:29.000 –> 03:49:38.000
going back to the law of the horse, right, might regulate that in practice. We kind of know, I think many people in this room, well, you pointed it out, of course, at the beginning, Paul, but many people might have heard it and said, well,
03:49:38.000 –> 03:49:52.000
I'm talking to a bot. But to the extent that we have less sophisticated or less knowledgeable consumers, the question of notice becomes a real one. And of course, contract law and procedure law are rife with those kinds of examples.
03:49:52.000 –> 03:50:17.000
On your law of the horse point, I don't know that we need new law to deal with notice, but I do think that, as Callie pointed out, policymakers having a hard time catching up, lacking expertise, and, quite frankly, the destruction of expertise within state, federal, and international government, which has been documented quite well by my good friend Lorelei Kelly at Georgetown, mean that we need to do a much better job of training our legislators
03:50:17.000 –> 03:50:35.000
to understand these issues, putting aside the political consequences that we might talk about. So I think notice is a place where you could go with this, which is low-hanging fruit, as well as some of the other issues involving misinformation and cyberstalking and cyberharassment and areas like that, which I know EPIC has worked on and which states like North Carolina, where I live, are working on as well.
03:50:35.000 –> 03:50:52.000
And can you say a little bit more? I know you've thought a lot about trade secrets. Do they raise different issues in the autonomous realm, or just AI generally? Right. So Harry's now baited me with trade secrecy because that's where I focus. And I'm going to try to do this in a short way. The short answer is yes.
03:50:52.000 –> 03:51:07.000
Trade secrets, for those that aren't familiar, are information that's valuable because it's not known by a competitor. It operates in the competitive space. And the classic trade secret is the formula for Coca-Cola, which remains a trade secret to this day.
03:51:07.000 –> 03:51:17.000
Trade secrecy, as I’ve talked about for 20 years, operates as the most powerful intellectual property law regarding access to information, not use.
03:51:17.000 –> 03:51:27.000
But access to information. Merely accessing trade secrets can raise enormous issues of liability. Now, they can be licensed, they can be shared voluntarily.
03:51:27.000 –> 03:51:35.000
But absent that kind of voluntary or license-based scheme, the risks associated with getting access to a trade secret are massive.
03:51:35.000 –> 03:51:44.000
As a result of that, and as a result of changes in patent law, but also the power of trade secrecy to operate in this space, we've seen an increasing move
03:51:44.000 –> 03:51:50.000
in the software space toward using trade secrecy across the board. And without getting into the weeds, right?
03:51:50.000 –> 03:51:59.000
Code is not reverse engineerable. It can be tested, right? It can be manipulated. But reverse engineering code is rather difficult, of course, unless it’s shared.
03:51:59.000 –> 03:52:17.000
So you’ve got code operating as trade secrets. You also have data sets that are trade secrets themselves. And so over the last 20 years, I’ve been on almost a single person warpath trying to allow governments and policymakers and civil society groups to have a more nuanced understanding of when we need trade secrecy in order to innovate.
03:52:17.000 –> 03:52:24.000
And when the use of trade secrecy conflicts at a high level with broader values, like, for example, what society might want.
03:52:24.000 –> 03:52:50.000
Versus what innovators might want to see happen. And as one of the panelists said in the first panel, we’re dealing with a large uncontrolled wildfire experiment in the use of artificial intelligence in what I consider to be the indiscriminate destruction of the federal government. And part of the problem is that we simply don’t know what the code is or what it’s doing. Now that puts aside whatever lack of transparency might exist at the human level.
03:52:50.000 –> 03:52:56.000
But the fact is trade secrets operate in this space. The result is, as I’m sure most people in this room know.
03:52:56.000 –> 03:53:04.000
OpenAI is ironically not open, right? And while they can be open and other entities can be open, they choose not to be.
03:53:04.000 –> 03:53:10.000
The result of all of that is that it’s hard to understand even what the technology is capable of doing.
03:53:10.000 –> 03:53:17.000
Absent relying upon entities that are creating the technology, which as we know, as compared to the internet, is not the federal government.
03:53:17.000 –> 03:53:44.000
And not, for the most part, institutions of higher education, but private sector operators. So the result of that is not to say that trade secrecy is bad, not to say that we don't need it, but we really don't have a good nuanced understanding of when it might be useful. And I'm very concerned that its use here is allowing for lots of quick first-to-market and first-mover-advantage benefits, which are understandable from an innovation standpoint, but which are leaving in the dust the broader societal concerns
03:53:44.000 –> 03:54:00.000
That the first panel talked about, and I suspect we’ll be talking about here as well. Yeah, this is a great point. So one additional option on the table is changing existing laws that might be inhibiting various values that we want in society, maybe trade secrecy or when there’s
03:54:00.000 –> 03:54:04.000
public benefit. Callie, I think you had something to say earlier.
03:54:04.000 –> 03:54:23.000
Sorry, I always have more thoughts on these things. One thing that I think may be important when we’re talking about how we regulate, where we regulate these systems is at what point in the development and use are we regulating? Because there’s very, very different questions depending on how a model is built.
03:54:23.000 –> 03:54:36.000
What it’s trained on, what its intended use is. So for a quick example, OpenAI is a general use AI structure. It’s intended to be able to use in lots and lots of different ways for lots of different purposes.
03:54:36.000 –> 03:54:40.000
That’s part of why they train on such an unbelievably massive data set.
03:54:40.000 –> 03:54:51.000
There's strong suspicion, I don't know for sure if we've been able to verify it, that they're building their training data sets by mass web scraping.
03:54:51.000 –> 03:54:59.000
A problem with that is that you have to look at laws around like, do you have the rights to use all the information that you’re taking into that data set?
03:54:59.000 –> 03:55:03.000
What kind of responsibility do you have to curate that and make sure that there’s accuracy there?
03:55:03.000 –> 03:55:12.000
What kind of responsibility do you have on, I’m so sorry to bring down the room, but especially if you’re training an image-based AI and you’re using mass scraping for that.
03:55:12.000 –> 03:55:19.000
Possession of child sexual abuse material is a strict liability crime. If that's in your data set and you're not curating your data set,
03:55:19.000 –> 03:55:34.000
you may go to jail for that, and you should. There's also issues of, like, taking in conspiracy theories and inaccurate information, blackmail material, stuff that was used for doxxing, things that were put on the internet due to a data breach or leaks.
03:55:34.000 –> 03:55:41.000
All of that going into a training data set has a lot of different concerns about liability and what laws apply in those cases.
03:55:41.000 –> 03:55:47.000
And then if you’re building an algorithm off of that training data, if you yourself are dealing with a data set that is so large.
03:55:47.000 –> 03:55:51.000
That you can’t reliably say what is or isn’t in it.
03:55:51.000 –> 03:55:58.000
You also can’t necessarily say that your algorithm is being built in an accurate, unbiased, fair way.
03:55:58.000 –> 03:56:03.000
And so at that stage where the algorithm is being constructed from the training data, a whole other set of legal issues come in.
03:56:03.000 –> 03:56:11.000
And then there’s the use of data, the marketing and the sale. There are some AI systems and autonomous agents that are intended for very specific purposes.
03:56:11.000 –> 03:56:21.000
Those tend to be a little more curated. They tend to have more checks on them. They tend to be more clear about this is a proper use of this system. This isn’t a proper use of this system.
03:56:21.000 –> 03:56:31.000
With those parameters, it’s a little easier for AI companies to say we’re doing our due diligence, we’re being responsible, we’re making sure we’re doing this in a good way.
03:56:31.000 –> 03:56:51.000
For general use models, where there isn't really a specific area of application that they're looking at, that also is very hard, because then you enter into questions about: Is this an unintended use? Is this an easily expected use? Even if it wasn't the stated intent of the company, is it very obvious that it would probably be used for this purpose?
03:56:51.000 –> 03:57:04.000
And some of those go into areas where you're looking at image generation, where you can substitute your face into an image, so you look like a celebrity in their dress at the Oscars or something.
03:57:04.000 –> 03:57:15.000
It's also really easy to use that technology for, like, blackmail and pretty horrific purposes against individuals. And that's a pretty expected use of that system that shouldn't take anyone by surprise.
03:57:15.000 –> 03:57:32.000
So when we’re looking at things like liability, responsibility, use of data, frankly, even trade secrets, because there’s many models where what you plug in in the prompts is likely being put into the training data and then it’s trained on that. So if you’re asking a query that has
03:57:32.000 –> 03:57:45.000
Trade secret information, sensitive information, that now is in this mass data set and you have very little control over who may have access because there have been lots of tests that show that in at least certain models, if you do the right
03:57:45.000 –> 03:58:01.000
prompt injections, you can get access to raw training data. But also it may come out in algorithms, it may come out in outputs. You just have very little control over all of the uses, or the possible sale of data sets to other companies that then have
03:58:01.000 –> 03:58:19.000
more and more uses. Thinking of this in terms of where the law applies at each stage of development, I think, also may be helpful in looking at the enormous scope of laws that may apply in different areas. Yeah, I think it's a great point and it kind of highlights
03:58:19.000 –> 03:58:39.000
the limits of the EU AI Act, which was largely written before the era of modern large language models. In that earlier, narrower era, most AI models had specific uses, so they developed a risk-based approach where they said, we pretty much know what this is going to be used for, so we can predict:
03:58:39.000 –> 03:58:46.000
this AI system is being used for hiring, so we can worry about the risks that come from hiring, or from medicine.
03:58:46.000 –> 03:59:00.000
But now we're in this era of general purpose systems that can be used for anything. It depends what the end user uses it for. You can't really predict the risk. And this is sort of a governance challenge for the models. Paul.
03:59:00.000 –> 03:59:05.000
Yeah, so I was going to say that I somewhat agree with Callie, but also somewhat disagree.
03:59:05.000 –> 03:59:18.000
So we're in the era right now, and I agree in the sense that, with the content that's trained on, there should be some responsibility. But at the end of the day, a large language model is basically an algorithm,
03:59:18.000 –> 03:59:23.000
The arc of what is the next pattern or behavior of words.
03:59:23.000 –> 03:59:25.000
It’s a sequence of words. That’s really what it comes down to.
03:59:25.000 –> 03:59:41.000
What we’re seeing, at least with DeepSeek, with the 671 billion parameter model, it’s got some cohesive thinking workflow. Now, just here in the last 24, 48 hours, Alibaba came out with their 37 billion parameter model.
03:59:41.000 –> 03:59:46.000
So my point here is, so it’s almost neck and neck in terms of performance.
03:59:46.000 –> 03:59:59.000
So one really sits on a server on the back end that we get to utilize, but we don't get to see. Now, 37 billion parameters gets to be kind of interesting.
03:59:59.000 –> 04:00:04.000
It’s not quite in our hands yet, but it is getting there.
04:00:04.000 –> 04:00:10.000
Which also means that the power of what could be on the server could be in the hands of our mobile devices.
04:00:10.000 –> 04:00:17.000
And my best analogy to that is think of what we did in the, let’s say, the modem days when you’re doing dial-up.
04:00:17.000 –> 04:00:23.000
Versus what all of us experienced today in faster speed connection.
04:00:23.000 –> 04:00:31.000
We know that on a dial-up modem, you can't stream. There is no possible way that Netflix can exist in that dial-up modem world.
04:00:31.000 –> 04:00:45.000
So with the transformation to a quicker, more efficient system, in this case delivery of content through the internet, now we have streaming services. YouTube exists. Now a lot of streaming TV services exist.
04:00:45.000 –> 04:01:02.000
The law has not been able to keep up. In that specific scenario, I believe there was a case involving Netflix buying DVDs of Disney content and the copyright around streaming it, some sort of copyright issue there. And you guys would know a lot more about that.
04:01:02.000 –> 04:01:05.000
It just so happens that I’m kind of bringing this up.
04:01:05.000 –> 04:01:14.000
So going back to that 671 billion parameter DeepSeek model versus what just came out.
04:01:14.000 –> 04:01:28.000
The power of going into a cell phone will create new opportunities and new designs for products that we can't even imagine today, like we could not imagine streaming video back when we had dial-up modems.
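As a rough, hedged illustration of why parameter count matters for where a model can physically run, here is a back-of-envelope weight-memory estimate in Python. It assumes memory is dominated by the weights and ignores activations, KV cache, and runtime overhead; the bytes-per-parameter figures are standard quantization sizes, not any vendor's specification.

```python
# Back-of-envelope weight-memory estimate for different model sizes.
# Assumes memory ~= parameters * bytes_per_parameter (weights only;
# activations, KV cache, and runtime overhead are ignored).

def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * bytes_per_param  # billions of params * bytes = GB

for name, params in [("~671B (DeepSeek-scale)", 671), ("~37B", 37), ("~7B", 7)]:
    for precision, bytes_pp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
        print(f"{name:>22} @ {precision}: {weight_memory_gb(params, bytes_pp):7.1f} GB")

# At 4-bit quantization, ~37B parameters is roughly 18-19 GB of weights:
# beyond today's typical phone RAM, but within reach of laptops and
# high-end devices, which is the "not quite in our hands yet" point above.
```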
04:01:28.000 –> 04:01:41.000
Yeah, this is a great point. And right now we’re living in a paradigm where the best models are in giant data centers far away and we don’t have access. But as I said in the keynote, open source and open weights are quickly
04:01:41.000 –> 04:01:58.000
Catching up. They’re only about six months behind and they’re getting smaller, more efficient. The models that are 37 billion today are as good as the models that were 175 billion a year ago due to algorithmic efficiency. So we may be in a world where
04:01:58.000 –> 04:02:12.000
a lot of the AI systems that we're using are local and can be inspected. Maybe how they're trained might still be trade secrets, but the models themselves might be inspectable locally. It's hard to know, and that is really
04:02:12.000 –> 04:02:17.000
Interesting. And I think it raises a larger question, which is.
04:02:17.000 –> 04:02:24.000
How do we possibly regulate a technology that’s moving this quickly? And this is kind of the looming question.
04:02:24.000 –> 04:02:38.000
One general question that you mentioned earlier, Paul, is: how do you even define something sensibly in legislation? We've had trouble talking about artificial intelligence in legislation,
04:02:38.000 –> 04:02:46.000
Distinguishing it from normal automation. There’s not really a fine line between automation and artificial intelligence.
04:02:46.000 –> 04:02:56.000
Of course, we're going to similarly struggle to talk about AI agents and maybe digital twins. And so the law has a definition problem. The law has a timing problem.
04:02:56.000 –> 04:03:01.000
How do we keep up with this? You know, the EU AI Act was out of date
04:03:01.000 –> 04:03:17.000
the day it passed, by almost everybody's account, because they largely missed the boat on large language models, because of when it was developed. And we have an expertise problem. You know, it was already hard to have expertise in the government around this fast-moving technology. If you fire everybody,
04:03:17.000 –> 04:03:21.000
it's going to be a lot harder. So let me throw these
04:03:21.000 –> 04:03:48.000
difficult questions out to the audience, to the panel. Since no one's done it fully yet, I'm going to defend the AI Act for a moment. But I'm going to defend it on slightly different grounds, and also it makes for a much more fun panel if there's that. But there's truth to what I'm suggesting. Even if a regulation is not capable, at the time that it's written,
04:03:48.000 –> 04:04:07.000
of anticipating and regulating all known or unknown results, that doesn't mean that looking at these issues from a regulatory standpoint early on and regulating has no value. The value of it, however, I think is more about signaling, and I'll go back to notice.
04:04:07.000 –> 04:04:24.000
To the extent that regulators are in a position where putting aside the expertise issues, the technology is changing on a day-to-day basis. I mean, I’m sitting here with two devices because at any moment, something else could happen. Well, it looks like my argument’s out the window of X.
04:04:24.000 –> 04:04:32.000
We can't expect legislators, at the same time that we want them to wait, or we think let's see what happens, to wait forever.
04:04:32.000 –> 04:04:39.000
And I think, in my view, because the technology was foisted upon society more or less
04:04:39.000 –> 04:04:45.000
By Sam Altman, who unilaterally decided that the world was ready for this technology.
04:04:45.000 –> 04:04:49.000
it's hard for me to fault legislators who are attempting to say, hmm,
04:04:49.000 –> 04:05:02.000
this is a real thing that has real uses, for trying to get ahead of it, at least to the extent of identifying where policymakers are concerned. And so to the extent that the AI Act, and I know I'm simplifying it,
04:05:02.000 –> 04:05:08.000
Focuses on harm levels, right? And recognizing all of the things that were said which are true.
04:05:08.000 –> 04:05:26.000
I think it was actually quite valuable that the EU took the position of saying this is where the continent, right, sees potential risk because it signals, albeit imperfectly, right? It signals to those that are creating the technology that there is some watchful eye
04:05:26.000 –> 04:05:45.000
Around these issues. Now, does that mean the problem’s solved? No. Does that mean that the law or regulation will be not only effective but even will be applicable to a given situation? No. But what it does suggest, and the same issue has arisen in the context of discussing ethics.
04:05:45.000 –> 04:06:07.000
Is that there is some degree of public discussion around what we want this technology to be. My primary concern about the rollout of this technology as compared to the internet is that in my view, outside of circles like this, there’s been no public discussion and there was no public discussion of it until suddenly it was available to people more or less at the drop of a hat.
04:06:07.000 –> 04:06:29.000
That’s a major problem and that’s a flaw of the regulatory state, which is a topic that we could also get into. But I would say the AI Act at that level has actually been quite valuable, if not a model for other states to use. Yeah, those are great points. There are sometimes ancillary benefits to the discussion and thinking about it. And to be clear, I am not saying that
04:06:29.000 –> 04:06:44.000
we shouldn't have public discussions or regulation. I'm just pointing out that it's hard given the fast-moving technology. Well, Harry, I just want to create conflict here. So I'm going to disagree with that. No, I disagree with you.
04:06:44.000 –> 04:06:59.000
Yeah, I'm the one that raised the AI Act first, so I feel like I should also clarify a little bit what I meant there. My criticism of the AI Act isn't about that; I agree with you, and I actually think it was really interesting that they put together a risk-based structure.
04:06:59.000 –> 04:07:11.000
It's a very novel thing globally to say there is a tier of uses of these technologies that just flatly aren't allowed. You cannot use them in these ways because we've decided, cost-benefit, risk analysis,
04:07:11.000 –> 04:07:20.000
Way too risky, can’t do it. That’s not something you see in a lot of regulations and it’s pretty novel and it’s a really interesting approach that they’ve taken there.
04:07:20.000 –> 04:07:36.000
A lot of the quibbles with the AI Act are that it's outdated compared to where the technology is, which, fair, technology in many cases operates on a move fast, break things, get it on the market, ask permission, deal with the legal issues later structure. Not all technology,
04:07:36.000 –> 04:07:42.000
but again, I worked with some clients, and that is a mindset that is prevalent, especially in the tech sector.
04:07:42.000 –> 04:07:53.000
And so part of the problem is that the way you pass laws is through a bureaucratic process. It is a back-and-forth process, especially in the EU. God love them. I've been working with them forever.
04:07:53.000 –> 04:08:03.000
I still cannot fully understand all of the different levels of bureaucracy that they have to move through repeatedly for every single bill. It takes a long time because of the structure of government.
04:08:03.000 –> 04:08:13.000
So law is never going to, at a speed level, be able to compete with how quickly things develop. We just have to acknowledge that that is a gap we’re not going to be able to bridge.
04:08:13.000 –> 04:08:20.000
However, just because you can’t necessarily beat a technology to being released, being on the market.
04:08:20.000 –> 04:08:31.000
That doesn’t mean you can’t regulate it. There tends to often be this mindset that if something like the genie’s out of the bottle, the toothpaste is out of the tube, you can’t put it back. It’s out there now, it’s done.
04:08:31.000 –> 04:08:46.000
And I agree to some extent that like AI is not going to go away. We’re not in a position where someone or some country is going to say, AI, we’re just not a fan of it anymore. You can’t build AI. You can’t use AI. That’s not going to happen.
04:08:46.000 –> 04:08:49.000
Frankly, I don’t think it should. There are some really beneficial uses of AI.
04:08:49.000 –> 04:08:54.000
But I compare it a little bit to the way cars were developed in that like.
04:08:54.000 –> 04:08:58.000
Cars were on the market and sold and driven all around.
04:08:58.000 –> 04:09:03.000
And for years and years and years before we mandated safety features in them.
04:09:03.000 –> 04:09:10.000
They were being used a lot. People saw that, hey, people are dying in these accidents. You’re going at really high speeds. There’s a bunch of damages happening.
04:09:10.000 –> 04:09:17.000
Now, legally, you have to have seat belts and airbags in cars. You have to have regular checks. You have to make sure that they’re meeting these standards.
04:09:17.000 –> 04:09:25.000
It’s possible for us to go into a technology that is already widespread and say, hey, these are consistent harms and risks that we’re seeing.
04:09:25.000 –> 04:09:40.000
These don’t seem to be going away. We need to build some safety features in here. And we’re not saying you can’t use the technology. We’re not saying you can’t come up with novel uses or build new things, but we’re saying you have to do it with these understandings of human risks and impact in mind.
04:09:40.000 –> 04:09:49.000
And I think that in general with AI, there’s this belief that AI can be used to address a lot of societal or systemic problems.
04:09:49.000 –> 04:10:10.000
And we're seeing that in many cases. It's a new approach, but it doesn't solve core issues or core inequalities. Like, again, sorry, I just keep cycling back through all these arguments, but when we're looking at training data, even if you're trying to build a very neutral system, it's very hard to do that when you're using historic data.
04:10:10.000 –> 04:10:14.000
Because history has a lot of bias and inequality baked into it.
04:10:14.000 –> 04:10:21.000
There were years and years where like women were not legally allowed to open their own lines of credit. They had to have male relatives sign on to that.
04:10:21.000 –> 04:10:26.000
So if you’re feeding historic data into a finance or loan approval system.
04:10:26.000 –> 04:10:37.000
It's going to say, hey, men are safer bets to approve for loans, because look at how many more men in our data set have gotten lines of credit than women.
04:10:37.000 –> 04:10:42.000
That’s not reflecting the context or the historical context or the setting of the time.
04:10:42.000 –> 04:10:45.000
But a machine doesn’t know that. A machine’s taking in raw data.
04:10:45.000 –> 04:11:02.000
So in general, I think building in parameters that are saying things like you do have to curate data sets, you do have to test them, you do have to check the outputs and make sure there’s not inequality there. You have to make sure it’s not denying people opportunities. You have to make sure it’s not interfering with rights that we’ve established people deserve to have.
04:11:02.000 –> 04:11:12.000
That’s not meant to stifle innovation. You can absolutely innovate within safety parameters. It’s just meant to say you have to have some protections built into this.
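As a toy, synthetic illustration of the loan-approval point above: a classifier fit to historical approval records that encode a gender disparity will reproduce that disparity even with no explicit intent. This sketch uses scikit-learn on made-up numbers; it is not drawn from any real lending data.

```python
# Toy illustration: a model trained on biased historical approvals
# reproduces the bias. Synthetic data only; no real lending records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
income = rng.normal(50, 15, n)          # same income distribution for everyone
is_male = rng.integers(0, 2, n)         # 1 = male, 0 = not male

# Historical approvals: identical income effect, plus a penalty for
# non-male applicants (echoing eras when women needed co-signers).
logit = 0.08 * (income - 50) + 1.5 * is_male - 0.75
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([income, is_male])
model = LogisticRegression().fit(X, approved)

# Same income, different gender flag -> very different predicted approval.
p_male, p_other = model.predict_proba([[50.0, 1.0], [50.0, 0.0]])[:, 1]
print(f"approval probability at equal income - male: {p_male:.2f}, non-male: {p_other:.2f}")
```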
04:11:12.000 –> 04:11:18.000
Yeah, and just to provide a little background for the audience, because we're talking about the AI Act and a lot of jargon.
04:11:18.000 –> 04:11:31.000
The AI Act is in the European Union. It was largely written in 2021, before ChatGPT. It started to go into effect last August,
04:11:31.000 –> 04:11:48.000
2024. In the United States, we don't really have comprehensive AI legislation at all at the federal level. There is a little bit of legislation in a couple of states, including Colorado, and here and there in California. But by and large, we don't have a similar
04:11:48.000 –> 04:11:55.000
Analog in the United States. Paul, did you want to weigh in? Yeah, sure.
04:11:55.000 –> 04:12:08.000
I'm looking at it from a technology perspective, at legislation and how regulation can either keep up or not. I'm leaning more towards no than yes. And I'll give you another example.
04:12:08.000 –> 04:12:17.000
In video, and also MP3s, when music was going from a disc format to a digital format,
04:12:17.000 –> 04:12:27.000
there was trading of, effectively, I mean, I was guilty here at CU, downloading music that was obviously not legal.
04:12:27.000 –> 04:12:40.000
Those software programs then led to other technologies that made it really questionable whether someone could be held responsible or not.
04:12:40.000 –> 04:12:52.000
So the technology that I'm referring to is called torrents. And torrents are basically a fragmentation of a file. So if I don't own the full music file,
04:12:52.000 –> 04:12:56.000
And just own a sliver of it. And I share it with someone.
04:12:56.000 –> 04:13:03.000
Am I breaking copyright law? Because that one sliver is basically, it’s useless in itself.
04:13:03.000 –> 04:13:09.000
But in the collective whole, in the sum, then it becomes an actual file that’s very much usable.
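A minimal sketch of the fragmentation idea Paul is describing, generic file chunking rather than the actual BitTorrent protocol: each chunk alone is only a sliver, but the ordered chunks reassemble to the exact original.

```python
# Minimal sketch of fragmentation and reassembly: no single chunk is the
# work, but the ordered sum of chunks reproduces it exactly.
# (Generic chunking for illustration; not the BitTorrent protocol itself.)

def split(data: bytes, chunk_size: int) -> list[bytes]:
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def reassemble(chunks: list[bytes]) -> bytes:
    return b"".join(chunks)

song = b"pretend this is an MP3 file " * 1000       # stand-in payload
chunks = split(song, chunk_size=4096)

assert all(len(c) < len(song) for c in chunks)      # each sliver is partial
assert reassemble(chunks) == song                   # the whole is recovered
print(f"{len(chunks)} chunks, each useless alone, identical when recombined")
```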
04:13:09.000 –> 04:13:15.000
I think in legislation, in how things are regulated, at the core, it’s a process, it’s a workflow.
04:13:15.000 –> 04:13:23.000
If you do this, then these are the results. What I’m seeing with AI is there are always, I say always.
04:13:23.000 –> 04:13:33.000
A lot of times there will be workarounds, either VPN workarounds or bits and pieces of data that then get combined to get to produce a certain result.
04:13:33.000 –> 04:13:45.000
I think it would be very hard to regulate. Okay, so that's another challenge in regulation: just the technological ability, even if you have a law on the books
04:13:45.000 –> 04:14:01.000
and you solve all the definitional problems and the pacing problem, how do you operationalize it in a way that's effective? It's a great point. Yeah, I mean, you know, you pointed out in your keynote, Harry, what is the fundamental challenge associated with regulating it, which is that
04:14:01.000 –> 04:14:13.000
Experts can’t explain the outputs. Even when they have access to the code, right? I mean, that to me is, at least with regard to code-based technology.
04:14:13.000 –> 04:14:23.000
A relatively new problem. If you look at things like voting machines or breathalyzers or even more recently, it’s not code-based, but COVID vaccine developments.
04:14:23.000 –> 04:14:46.000
And I’ve done work in the trade secret space throughout, right? You could explain what the process is for creating a vaccine with access to the know-how, the trade secrets, right? You could explain or identify whether and how a voting machine is going to tabulate a vote based upon access to the code. But we’re dealing with a strikingly different technology here, in my view.
04:14:46.000 –> 04:14:52.000
When those who have the expertise, computer scientists, the coders themselves.
04:14:52.000 –> 04:14:58.000
Are asked, why did this output occur? And putting aside questions of prediction.
04:14:58.000 –> 04:15:03.000
Or anthropomorphizing or anything else, they say, we don't really know.
04:15:03.000 –> 04:15:17.000
To then expect regulation to be able to handle all of the intricacies there, I think is too high a bar. It raises the question, going back to the law of the horse, I guess you have baited me there, to look at where else we might find the regulation.
04:15:17.000 –> 04:15:33.000
I hesitate to say that the marketplace to some degree will regulate it. Of course, to the extent that we have examples of that in recent memory, people use the internet because it benefits them, not complicated. And they will likely continue to use AI to the extent that it benefits them.
04:15:33.000 –> 04:15:38.000
But maybe it’s going to turn oddly on things like social norms.
04:15:38.000 –> 04:16:03.000
Maybe it’s going to turn on issues associated with what human beings actually want in order to be human. Do we want to interact with the technology in a way that devolves what is fun to be human in the face of efficiency? And we tend to move towards efficiency as a general matter, whether it’s in the judiciary or in life. So it might be that, we don’t know, I mean, this is like crystal ball futurism, which I express no
04:16:03.000 –> 04:16:11.000
Confidence in, but I’ll just say social norms might do it. Of course, at the end of the day, to the extent that code can be manipulated.
04:16:11.000 –> 04:16:25.000
It will fall to the creators of it. And so I am hopeful to some degree, as you pointed out, that with open source modeling becoming more available, there’ll be more competition outside of that space to allow for the greater good to develop.
04:16:25.000 –> 04:16:33.000
Yeah, no, I love this point. So, you know, a related and earlier technology that's obviously causing problems is social media.
04:16:33.000 –> 04:16:48.000
And the law has not been able to effectively regulate it, not because the genie is out of the bottle, but because nobody really knows what to do. And social norms are starting to push back on this. Just for example, my daughter's
04:16:48.000 –> 04:17:12.000
high school, Boulder High, banned cell phones in school. That's a social-norms-type solution where law was inadequate. So I think that's a great point; that might be where we go with AI as well. And just very quickly, we see that debate happening with Section 230, right? Which, for those who aren't familiar, right, is the law, more or less, that allows social media platforms
04:17:12.000 –> 04:17:37.000
to not be liable for defamation so long as they're not the ones defaming, right? And then we see 230 more and more, and I will be candid that I'm a supporter of Section 230, primarily because it created those platforms. Well, we fast forward now 20-odd years, right? We see sex trafficking, we see misinformation, we see disinformation, we see a lack of filtering, and it causes us to re-examine
04:17:37.000 –> 04:17:43.000
The nature of that foundation. Is it speech that we’re looking to protect or is it some other societal value?
04:17:43.000 –> 04:18:00.000
And that's a debate which is happening right now with Section 230, a law that was created before social media, more or less, just as the AI Act more or less was created before, right, we had generative AI. And so paying close attention, I think, to the 230 debate
04:18:00.000 –> 04:18:09.000
and paralleling it, which is something I'm working on now, to where we are now might give us at least some guideposts, or, as Viktor Mayer-Schönberger and Urs Gasser call it, guardrails,
04:18:09.000 –> 04:18:21.000
For determining how we put this together. Yeah, that’s a great point. And just one comment before I toss it over about the interpretability of AI models. So you’re quite right.
04:18:21.000 –> 04:18:30.000
We talked about this yesterday, that today AI models are not interpretable. I myself am optimistic in the five to 10 year term.
04:18:30.000 –> 04:18:50.000
Due to research that I'm seeing on something called mechanistic interpretability, which more or less involves using other AI systems to interpret how the original AI systems are working, I'm optimistic that it's not always going to be the case that it's a completely black box and we
04:18:50.000 –> 04:19:02.000
can't tell, the way we can't tell today, why AI is producing the outputs that it is. I'm seeing progress. I think it'll be a different story in five to 10 years. But right now we are in that world. You're correct, Chris.
04:19:02.000 –> 04:19:19.000
Yeah, one quick thought I had on the market argument is I agree with you that that's one of the main factors in moving these discussions and these issues when law is slow to catch up, but sometimes companies respond more when the market is reflecting things.
04:19:19.000 –> 04:19:29.000
Sometimes. Yeah. One of the constant things that comes up in AI debates is that AI is such a boom right now. It is such a buzzword. It is so popular.
04:19:29.000 –> 04:19:35.000
That it is being embedded in tons and tons of products and services with no way to opt out of it.
04:19:35.000 –> 04:19:54.000
So the recent example was my office was testing a bunch of online web search sites because we’re so sick of Google having no option for you to turn off the automatic AI generated answer at the top of the page. I want that gone. I hate that thing.
04:19:54.000 –> 04:20:07.000
It drives me insane. But in many other search models, it’s also embedded in there. It’s become like an industry-wide thing where LinkedIn is using your posts to train its own AI and it has AI factors built into it now.
04:20:07.000 –> 04:20:12.000
Facebook meta obviously has a lot of AI things going on.
04:20:12.000 –> 04:20:34.000
A problem that I'm seeing in the AI market right now, that is a similar problem you see in privacy law, which is my primary thing, is just a lack of choice and a lack of agency. I don't think people would be as frustrated with AI everywhere if there was a little opt-out slide you could click to take that option away sometimes, so you felt like you had some level of control.
04:20:34.000 –> 04:20:46.000
But the market issue that I’m seeing is that in polls all over the place, people overwhelmingly say that they do not want AI in a lot of these systems and they do not want their information going into AI and they do not want
04:20:46.000 –> 04:21:08.000
XYZ thing, or they want to know what it's being used for. And we're really not seeing companies respond to public pushback or outcry in a lot of ways. And so currently it's looking like, similar to how law is not as agile and responsive as we'd like it to be, the market also is not being very agile or responsive to
04:21:08.000 –> 04:21:20.000
At least providing options. Wouldn’t it be nice if we had opt-in? What a wild idea. What a wild idea. I would love that so much. It would make my job so much easier.
04:21:20.000 –> 04:21:30.000
So one of the biggest questions, and I’d love to hear your thoughts on this in this new world of autonomous agents, they can go out, take actions on your behalf using tools.
04:21:30.000 –> 04:21:36.000
What happens if they do something wrong? Who is liable? What happens on the nice end?
04:21:36.000 –> 04:21:42.000
Okay, so let’s just imagine you send them out to buy X or Y and they buy Z.
04:21:42.000 –> 04:21:55.000
Instead, or a million dollars' worth of Z when you meant 1,000, do we have... questions for that? On the worst end, what happens if, again, without you, the user, meaning to, it goes out and does something bad
04:21:55.000 –> 04:22:00.000
On your behalf? Do we have thoughts about liability? Do we need new laws?
04:22:00.000 –> 04:22:13.000
Any thoughts? My thought is that the service provider that you subscribe to really should take at the core, the responsibility for that action.
04:22:13.000 –> 04:22:27.000
So in our case, if we don’t initiate the return and all of a sudden we did not act on our consumer’s behalf to return the $50 item, I think we should absorb that cost on behalf of the user. And that’s just part of our offering.
04:22:27.000 –> 04:22:37.000
Now, I think some other companies may view it differently, where that's part of the service, there's a margin of error, but I think for us, we should take responsibility.
04:22:37.000 –> 04:22:49.000
And then just to follow up on that: what about these open source agents, when it's not really a company, it's somebody using it? Do you have any thoughts about that?
04:22:49.000 –> 04:22:57.000
Use at your own risk. And maybe the user themselves? The user has to take responsibility. Okay, yeah.
04:22:57.000 –> 04:23:01.000
Interesting. Seems to make sense. Callie, do you have any thoughts about this?
04:23:01.000 –> 04:23:14.000
Yeah, I mean, just with the background that I am very much a lawyer and not a technologist, although I love getting to talk with technologists, so I don’t sound like a moron while I’m talking about how to regulate things.
04:23:14.000 –> 04:23:21.000
From my perspective on existing laws and the way that liability is factored out now, there are some possible approaches we could have here.
04:23:21.000 –> 04:23:36.000
As Paul mentioned, I think there's a huge difference between open models that can be modified and adjusted by the user to a pretty high degree versus models where the developer has most of the control over how it weighs things and what it develops and how it
04:23:36.000 –> 04:23:42.000
Produces outputs and actions. In an agentic area.
04:23:42.000 –> 04:24:02.000
So if we're looking at, I'm reverting to torts liability, and Professor Surden was my torts professor, so I feel like I'm getting cold-called a little bit. You're doing great. But if we're thinking of it from like a percentage of liability and assigning liability, not necessarily that the developer is
04:24:02.000 –> 04:24:07.000
All the way liable or not at all liable and the user is all the way liable or not at all liable.
04:24:07.000 –> 04:24:12.000
If we split it more into percentages and we can evaluate systems that way.
04:24:12.000 –> 04:24:26.000
To me, that feels like it makes more sense from a liability perspective, where if the developer has much more control, and the user essentially is taking what they think is a ready-made product and just plugging it in to use,
04:24:26.000 –> 04:24:34.000
The developers should have much more liability in that case. They have much more control over what’s happening and much more responsibility to do the testing and make sure it’s working properly.
04:24:34.000 –> 04:24:41.000
If there’s something where they’re building a framework, or it’s something where, like, model weights can be adapted by the user,
04:24:41.000 –> 04:24:50.000
Then it’s a little different because you may be modifying it to a degree where whatever happens is because of the way you modified it. I understand why liability shifts a bit there.
04:24:50.000 –> 04:25:05.000
The one thing that I really hope doesn’t happen is shunting all of the responsibility and liability to a purely “well, you chose to use it, so that’s all on you” model.
04:25:05.000 –> 04:25:15.000
We see that a lot in privacy too, where you chose to use this product. So if you don’t like how it’s using your information, don’t use that product. If you don’t like how an AI agent is doing things for you, don’t use that product.
04:25:15.000 –> 04:25:21.000
In some cases, completely valid argument. You can say, okay, then we shouldn’t use that product anymore. We don’t like it.
04:25:21.000 –> 04:25:37.000
But because of the way technology is embedded in so many aspects of life now, that’s not always an option. So it’s like, for example, there are some products I really don’t like their privacy practices. I don’t want to use them. I have to for work.
04:25:37.000 –> 04:25:48.000
I had to use them for school. I can’t not use Microsoft products because that’s how I build things and communicate. I can’t not use any Google products because then the internet breaks.
04:25:48.000 –> 04:26:02.000
I can’t avoid a lot of things because they are so embedded and they’re so embedded in like workforce and education and social decisions that if I pull out of that, I’m also removing myself from pretty major aspects of life.
04:26:02.000 –> 04:26:06.000
And I don’t think autonomous agents or AI agents are at that level yet.
04:26:06.000 –> 04:26:14.000
But based on how quickly AI has been embedded in so many systems, I don’t think it’s a fair argument to say.
04:26:14.000 –> 04:26:17.000
Just don’t use it then when in some cases it’s not really avoidable.
04:26:17.000 –> 04:26:30.000
And again, AI autonomous agents aren’t at that level yet, but we don’t know that they won’t be soon. Everything is moving so quickly that I see that as being kind of a non-starter argument when we’re talking about liability. That makes sense.
04:26:30.000 –> 04:26:51.000
Dave? Yeah, I mean… on the issue of liability, I do think that existing law more or less can address it. I mean, I look at this from the standpoint of copyright liability for linking, for example, which was a big issue. Well, it’s still an issue, but a big issue 10 or 15 years ago.
04:26:51.000 –> 04:27:02.000
You can use agency law, you can use tort law, you can use contract law, questions of disclosure, privacy, transparency to get to it. To me,
04:27:02.000 –> 04:27:10.000
It’s an important question, but it’s a relatively easy one to work through compared to some of the other things we’re talking about today.
04:27:10.000 –> 04:27:18.000
Now, it changes when you engraft professional responsibility standards onto the question of liability. And so, speaking as I am in a law school
04:27:18.000 –> 04:27:41.000
To people who are either going to be lawyers, or are lawyers, or, the hardest people of all, dealing with lawyers (and my apologies for training lawyers, but we try to do a good job with this): our ethical responsibilities with regard to the use of the technology change the math dramatically. Every state, every state, has basic rules with regard to ethics that indicate quite clearly that lawyers
04:27:41.000 –> 04:27:49.000
Must at least reasonably understand whatever it is they’re doing, including the technologies that are used.
04:27:49.000 –> 04:28:06.000
Now, I mean, I’ve already talked about the challenges of understanding the technology, but those ethics rules are clear. So to the extent that lawyers are willy-nilly, which is a legal term, by the way, if you’re not familiar, using this technology without at least checking the cites.
04:28:06.000 –> 04:28:21.000
That goes beyond understanding technology to just basic standards. And of course, the notion that a lawyer prior to generative AI would put fake cites into a filing in court is insane. I mean, you would just assume they wanted to end their career, or that something was seriously wrong with them.
04:28:21.000 –> 04:28:36.000
And now it’s something that happens once a month and I figure will continue to happen because of the slow uptake on this stuff. But make no mistake, lawyers are held to a standard of saying, forget about agency. You’re not going to blame OpenAI or, you know.
04:28:36.000 –> 04:28:45.000
CoCounsel on Lexis or whatever it is you’re using for the mistake you make, because we’ve engrafted professional ethics onto it. The same thing should happen, and is happening, in the medical field.
04:28:45.000 –> 04:29:15.000
So I think, as far as that goes, we need to separate the policed professions, and particularly the self-policed professions, from everyone else when we think about the agency issues and we think about that liability. And from that perspective, to me, the fundamental challenge (again, I’m a broken record here, which is an older technology, but I’ll use it anyway) is the question of whether we can even understand it in the first place. What I’m pleased to hear, and I’ve read this, and I am like you, Callie, I am not a computer scientist, so I defer to Harry on all of that stuff, which is part of why I like working with Harry, aside from the fact that you’re a charming person, is the notion that the technology might get there.
04:29:24.000 –> 04:29:33.000
But to the extent that we’re using it right now, that lack of understanding, I think, outside of the profession creates the challenge as opposed to the law itself.
04:29:33.000 –> 04:29:49.000
Yeah, this is a great point. And it’s worth emphasizing that the systems that lawyers may be using today, from Lexis+ Protégé or Westlaw CoCounsel, are actually mini agentic systems, right? You give them a high-level goal. You might say, find me
04:29:49.000 –> 04:30:09.000
Case law about such-and-such legal issue, and it tries to understand the user’s request, it comes up with a search plan, it has a curated legal database of cases and laws that lawyers use, it performs its own search, and it grabs the documents that it itself believes are relevant.
04:30:09.000 –> 04:30:17.000
And then it analyzes the relevant documents it’s gotten to produce an answer. All of those are agentic, and that’s new-ish
04:30:17.000 –> 04:30:23.000
Technology that couldn’t have existed before. But as you said, you have to double-check everything, right?
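A minimal sketch of the kind of mini agentic research loop being described here. The helper names (plan_searches, is_relevant, summarize) and the loop itself are illustrative assumptions, not any vendor’s actual API:

```python
# Illustrative sketch only: a toy version of the agentic legal-research loop
# described above. All helper names are hypothetical, not a real product's API.

def research(goal, llm, database):
    # 1. Turn the user's high-level goal into a list of search queries.
    queries = llm.plan_searches(goal)
    # 2. Run those searches against a curated legal database.
    candidates = [case for q in queries for case in database.search(q)]
    # 3. Keep only the cases the model itself judges relevant to the goal.
    relevant = [case for case in candidates if llm.is_relevant(goal, case)]
    # 4. Read those cases and draft an answer with citations,
    #    which the lawyer still has to verify by hand.
    return llm.summarize(goal, relevant)
```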
04:30:23.000 –> 04:30:45.000
So, even putting aside fake case cites, which still happen but are rarer now. A case cite, as many of you know, is this: when lawyers are making arguments, you say, oh, here is this earlier case that was decided, which supports my argument, and you, Judge, can go look it up
04:30:45.000 –> 04:31:08.000
If you want to see that an earlier court has supported a position similar to mine. So a lot of what these AI systems did, especially the older ones, was hallucinate: they would make up very plausible-sounding cases. They looked real. They sounded real, and they would support the lawyer’s argument very strongly. The only problem was they weren’t real cases. So when the judge
04:31:08.000 –> 04:31:17.000
Went to look at it, like, oh, okay, you’re about to win, but maybe I should look at that case more directly, they looked it up, and it turns out that case didn’t exist.
04:31:17.000 –> 04:31:35.000
That’s happening less and less, although it’s still a problem. Actually, a more subtle issue that is happening now, that still exists, is this: it’s a real case, the AI system reads it, and it provides a slightly incorrect interpretation of what that actual case
04:31:35.000 –> 04:31:50.000
Meant, in a way that favors the lawyer, which is equally bad, because under the ethics rules you have to be very upfront. Even if a case does not strongly support your position, you have to be honest
04:31:50.000 –> 04:31:55.000
About the representation to the judge and misrepresenting it is a big ethical problem.
04:31:55.000 –> 04:32:01.000
Great points. Paul, did you want to weigh in on that?
04:32:01.000 –> 04:32:07.000
All right. So we’re at the point where I want to open it up to audience questions for the panel.
04:32:07.000 –> 04:32:14.000
And traditionally, as I said, we start with a student question. So do we have any students in the room
04:32:14.000 –> 04:32:21.000
Who want to do the honor of asking a question.
04:32:21.000 –> 04:32:43.000
I just want to terrify all of you. Oh, I saw a student in the background. I was going to say, when Phil did this, he would occasionally cold call. Yes, I trust our students, so… This one is for any of the panelists. So just thinking about the remarks on social norms and social media, as well as in light of some of the discussions
04:32:43.000 –> 04:33:00.000
Of the previous panel, I was wondering if any of you would have any advice to give to a parent or a teacher that’s trying to prepare a child, or somebody else coming up, for a world in which AI is going
04:33:00.000 –> 04:33:15.000
To be, you know, out there, while also protecting them from maybe both the emotional dangers and just the misuse dangers that have been discussed.
04:33:15.000 –> 04:33:20.000
Sure. So as I mentioned, I have 15 and 18 year olds.
04:33:20.000 –> 04:33:28.000
But for COVID, when they were forced to be online, they would not have had the devices
04:33:28.000 –> 04:33:41.000
Until high school. I mean, now, that’s easier said than done, but let me just start right there, and I would include AI in the mix. That requires a level of parenting that many people, for a variety of reasons,
04:33:41.000 –> 04:33:54.000
Very legitimate reasons and maybe not-so-legitimate reasons, don’t have the bandwidth to do. But Natasha Schüll wrote a book years ago called Addiction by Design, about gambling machines, and the same process applies in social media.
04:33:54.000 –> 04:34:09.000
Well established now. So step one is to recognize what is well established, that physiologically the child’s brain is simply not developed well enough to handle the complex decisions that are made with these devices.
04:34:09.000 –> 04:34:29.000
Putting that world aside, you know, the question really is how do we prepare our children for a world where they have devices at their disposal that render most, or much, I should say, of the kinds of basic processes that a child goes through for making decisions
04:34:29.000 –> 04:34:59.000
Secondary or obsolete. Sometimes we compare this technology to when I was in law school, which shockingly is almost 30 years ago, when Shepardizing by books was no longer done. For those that aren’t familiar with that, it’s checking precedent to make sure it’s still good law, which you used to have to do, which is kind of important, and which you used to have to do via books. And now you don’t do it via books. This is a fundamentally different skill set, because, as has already been mentioned and I think is well understood, to the extent that you have a technology that can summarize a case for you, what we call case briefs.
04:35:03.000 –> 04:35:28.000
Yeah, it does it pretty well. I think your work has established that, and others have, Harry. But to the extent that we’re talking about nuance, context, and what have you, it’s not there, or at least not there yet. So step two would be encouraging, and strongly, not only encouraging but requiring, your children, to the extent that you can do this as parents or in public schools or in private schools, to do things like, wait for it, read books.
04:35:28.000 –> 04:35:58.000
Media literacy, having a sense of what the sources are, is a skill which has diminished for a variety of reasons. I launched a radio show on Stanford radio called Hearsay Culture 20 years ago, about the internet and technology, because I was concerned that technology was going to make it difficult to determine truth from fiction. Maybe that was my one crystal-ball prediction that turned out to be right, because I was also a cyber-utopian who thought that we probably wouldn’t see some of the problems we have now. So that would be step two. I think step three is to go back
04:35:59.000 –> 04:36:05.000
To the fundamentals of talking to children about what are your dreams and what are your goals.
04:36:05.000 –> 04:36:16.000
To the extent that children are thinking about their goals as being a star on YouTube, and there’s nothing wrong with that per se, and there might be some good things,
04:36:16.000 –> 04:36:23.000
Going back to those fundamental questions of what is life to you and what does it mean is important. It’s important in those formative years.
04:36:23.000 –> 04:36:28.000
You know, my sons did not have access; you know, my wife and I made that decision early on.
04:36:28.000 –> 04:36:52.000
And I think it’s benefited them. There are concerns that children, if they don’t have access early on, will not be able to use the technology well. Frankly, I don’t buy that. And as was mentioned on the first panel, schools are increasingly taking the position that they have to train children and adults, right, on the technology. So that’s where I’d begin. The last thing I would suggest is being realistic
04:36:52.000 –> 04:37:15.000
About why it is that we have technology in the first place, right? If we keep sight of the fact that the technology is supposed to make our lives better, and not “ours” as in only the purveyors of the technology, but society as a whole, we can then address it. Now, these are hard questions to ask. Those aren’t great answers, by the way. I’m not thrilled with them, but that’s where it goes back to for me.
04:37:15.000 –> 04:37:33.000
And yeah, I think those are all great points. And there are a couple I want to double down on. One is the ever-greater importance of critical thinking skills and media literacy. I think one of the roots of our current political turmoil is the lack of media literacy and critical thinking, and
04:37:33.000 –> 04:37:49.000
People now listening uncritically to untrustworthy facts or sources. So training our children to be really good critical thinkers, to look a little deeper when things don’t make sense. So I think that’s a great point. And that goes hand in hand with
04:37:49.000 –> 04:38:09.000
Media literacy. A second point, and a lot of parents don’t want to hear this because being a parent is a busy job as it is, but making yourself AI-literate is really important: sitting down, playing with the AI systems, figuring out what they can do, what they can’t do, trying it again
04:38:09.000 –> 04:38:28.000
A year from now. To provide any advice or counseling to your children, you really need to understand the landscape that they’re growing up with. And a lot of people understandably don’t want to have anything to do with AI; they’re afraid of it or they don’t like it. And I understand that impulse, but I also think
04:38:28.000 –> 04:38:39.000
You’re doing yourself a favor just to be familiar with it, even if you don’t adopt it in your own life, just to know what’s going on.
04:38:39.000 –> 04:38:52.000
I’m jumping in here. So I also have a 15- and an 18-year-old. And I believe, at least for our 15-year-old, I would like to encourage him to actually use more AI.
04:38:52.000 –> 04:38:58.000
Now he already uses ChatGPT on a regular basis for homework. He gets a fair review.
04:38:58.000 –> 04:39:07.000
But there’s a difference between different types of LLMs, where I see it, at least for me, viewing it through our son’s eyes.
04:39:07.000 –> 04:39:16.000
Our son has a little bit of a learning disability, and, I’ve never been tested, but I’m pretty sure I’m also dyslexic.
04:39:16.000 –> 04:39:22.000
Had I had the tools that I have now, and even still, where I’m at today,
04:39:22.000 –> 04:39:26.000
I believe I’ve already 10x’d my own ability to learn.
04:39:26.000 –> 04:39:36.000
Now, for someone with learning disabilities, that ability to learn can only be improved or assisted through large language models.
04:39:36.000 –> 04:39:49.000
So in my case. I would love to encourage our 15-year-old to use it even more. But equally important is that human relationship, that human touch, that conversation at dinner.
04:39:49.000 –> 04:40:04.000
No devices, no phones, to have that connection. Yeah, I love that point. And as an optimist, I always want to emphasize the benefits of AI in learning. I’m very optimistic
04:40:04.000 –> 04:40:11.000
That in some respects, AI is going to really help with learning in ways that you’ve just mentioned. So we don’t want to be too pessimistic.
04:40:11.000 –> 04:40:20.000
Here and focus only on the risks. Yes, over here, please.
04:40:20.000 –> 04:40:31.000
So at a fundamental level, the policies and regulations can discern between what’s right and wrong. So regardless of the pace of the technology
04:40:31.000 –> 04:40:35.000
At which it is moving, couldn’t it be evaluated based on what it is doing?
04:40:35.000 –> 04:40:44.000
Like in Paul’s case. It’s impersonation, even though it’s a digital twin, but it’s bringing no harm. It did something good. It increased productivity.
04:40:44.000 –> 04:41:00.000
On the other hand, if that impersonation was being used to, let’s say, defraud somebody, that’s crossing the line, right? So couldn’t the law and regulations make it less complicated, not hanging so heavily on the technology versus the outcomes, on
04:41:00.000 –> 04:41:12.000
What it is producing, and then start solving for complexities like, what if AI’s output is an input to something else, and therefore who’s liable? All those things can be worked out.
04:41:12.000 –> 04:41:29.000
So it’s a question for the entire panel. Yeah, it’s a great question. That’s often known as use-based regulation, and a lot of people do advocate for that. I happen to think it’s a pretty good idea. One of the big issues, and I don’t have a problem with it, is that you often have to wait and see
04:41:29.000 –> 04:41:34.000
What harms arise, and then figure out, ah, that’s
04:41:34.000 –> 04:41:54.000
The new technological harm that we now want to stop. I tend to think that’s the best approach. There are a lot of other harms that are just the same old harms in new form: if you steal money, paper money versus electronic money versus agentic money, a lot of the existing laws might cover it. But I’m curious what the other panelists think.
04:41:54.000 –> 04:42:04.000
I think use case models are a good approach in a lot of cases, particularly because this technology is something where the exact same technology can be used in wildly different ways.
04:42:04.000 –> 04:42:16.000
Some that are really beneficial and some that are harmful. A couple challenges with that that I don’t think are insurmountable. I do think it’s a decent structure, especially as we’re looking at how we’re developing these things.
04:42:16.000 –> 04:42:30.000
But a couple of challenges with it are, one, there are a lot of cases where a technology has a very obvious harmful use case. It has other use cases too, but it has a very obvious harmful use case in how it’s built.
04:42:30.000 –> 04:42:42.000
Like, again, just going back to it because it’s an easy example: voice modulation technology that can change a voice or generate an AI voice that sounds very convincingly like another person.
04:42:42.000 –> 04:42:47.000
That can be used for great beneficial purposes; Paul’s app, for example, sounds really useful.
04:42:47.000 –> 04:43:04.000
It also can be used, and is being used frequently, in scams. For example, my mother got a call in my younger sister’s voice, and it sounded just like her, saying that she was in trouble and needed my mom to send money right away.
04:43:04.000 –> 04:43:18.000
It sounded just like my sister. That’s a pretty expected use case from this. And I mean, we saw that in the election too. There was a case of a recording going out to a bunch of people in New Hampshire that very
04:43:18.000 –> 04:43:29.000
Much sounded like Joe Biden’s voice telling them not to vote in the primary, and it was not him. So I think we have to look at easily predicted bad use cases
04:43:29.000 –> 04:43:45.000
And try to prevent those preemptively before those things happen. Some of that could be in design, some of it could be in changing enforcement mechanisms, maybe making tweaks or additions to existing laws. Like there are laws on the books about scams and fraud and things like that.
04:43:45.000 –> 04:43:50.000
But we may need to modify them a little bit for AI use cases specifically.
04:43:50.000 –> 04:43:57.000
Another challenge is that we will sometimes have companies that argue, well, is that really a bad result?
04:43:57.000 –> 04:44:10.000
Then it gets into a very interesting debate of how exactly you define something as harmful because something that maybe is really frustrating and harmful to an individual may be monetarily beneficial to a company.
04:44:10.000 –> 04:44:22.000
Again, back in the privacy space: data brokers and data scraping and tracking you everywhere you go on the web. That’s really frustrating to a lot of people. A lot of people would argue that is a harm to them and they don’t like it.
04:44:22.000 –> 04:44:38.000
There’s huge benefits to companies monetarily to keep doing it. Looking at cases where it’s not clearly bad for everyone makes that a little challenging, but again, not insurmountable. In those cases, often in the law, we’ll do balancing tests or we’ll do
04:44:38.000 –> 04:44:43.000
Cost-benefit analyses. So it’s possible, just a lot to work through.
04:44:43.000 –> 04:44:52.000
Yeah, and to be positive also as we close, in terms of going back to training children, I will tell you a proud moment.
04:44:52.000 –> 04:45:04.000
Very quickly: with my older son, prior to Screen Time, the app on iPhones, there were some other third-party apps out there. So I installed one on his phone so I could monitor him.
04:45:04.000 –> 04:45:26.000
And about two days later, I started noticing that I was being monitored because he actually reverse engineered it on his phone to monitor me, which was a proud moment for me. I was annoyed. I was annoyed, but I quietly thought to myself, that is fantastic that you pulled it off. Being facile with the technology is helpful.
04:45:26.000 –> 04:45:39.000
And there are plenty of good uses. My wife is an English teacher in public schools in North Carolina in a Spanish immersion program. So it’s a dual language program.
04:45:39.000 –> 04:45:42.000
And to the extent that the public schools have the resources for it.
04:45:42.000 –> 04:46:05.000
Augmenting, augmenting teaching as opposed to replacing it, is the way to do it. Of course, that’s the fundamental challenge, right, of what human beings are going to do. On this issue of use cases, one other note I’ll just point out, and I think it may come up in the final panel: you start with, well, let’s ban the technology. Those arguments were made with regard to the internet, and they’ve been made with regard to every new technology.
04:46:05.000 –> 04:46:15.000
Larry Lessig famously talks about John Philip Sousa saying that the record player, the phonograph, was going to end live music, right? And so those are visceral reactions.
04:46:15.000 –> 04:46:31.000
If we can pin down where those positive uses are, and if we can have a better understanding of what the capabilities of the technologies are that balance things out, the discussion can begin. Right now, I more or less think we’re throwing darts in a lot of ways.
04:46:31.000 –> 04:46:50.000
For reasons I’ve already mentioned. And because we’re throwing darts, which seems to be policymaking writ large right now at the federal level, we wind up in a situation where we have to see what happens. I don’t know that I agree with my friend Harry in terms of waiting to see, but clearly that’s where we’re going to be. I’m trying to pick another argument, Harry, by the way. We only have a few minutes left.
04:46:50.000 –> 04:46:59.000
I disagree. So let’s gather up two more questions. One in the back has been waiting patiently.
04:46:59.000 –> 04:47:11.000
Yes, we’re going to just ask your question and we’ll get a couple more and do a rapid fire. No problem. I’ll make this quick. To go back to the title of the panel, right? One thing I’m curious about is.
04:47:11.000 –> 04:47:23.000
We’ve been discussing in this panel the sort of basic regulatory framework around emerging AI models: data, how do you use it, what do you do with it, what’s your data hygiene like?
04:47:23.000 –> 04:47:36.000
How do actual agents change the legal conversation and policy conversation? These are entities that are now operating as individuals in society, often separately from those of us who push them into action; maybe no one even pushed them into action.
04:47:36.000 –> 04:47:41.000
I realize this is opening a whole different topic, and if you want to take it offline, that’s fine.
04:47:41.000 –> 04:47:51.000
What’s the cutting edge of that conversation? The actual policy conversation around agentic AI and not just algorithms and what they do. That’s it.
04:47:51.000 –> 04:48:01.000
Okay, that’s a great question. Another question right here in the Green.
04:48:01.000 –> 04:48:10.000
So this is a question for Callie. I know a professor at Villanova, Brett Frischmann, do you know Brett, who’s an expert in AI and law?
04:48:10.000 –> 04:48:34.000
And he’s introduced an amicus brief for the Supreme Court of Pennsylvania. His view is that we should introduce friction to basically mandate critical thinking, to prove informed consent at the consumer level. So basically you’d get a little quiz that proves that you understood it, so you could fail it. Basically, you’re given the basic terms or something similar, not like horrible boilerplate that nobody reads,
04:48:34.000 –> 04:48:45.000
On the basic aspects of privacy that they’re potentially giving up. Then they have a little quiz on it, and they could, in theory, keep failing it if they’re not actually thinking it through. I’d like to hear your opinion on that sort of thing.
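A toy sketch of the quiz-based friction being described: consent only registers once the consumer answers short questions about the plain-language terms correctly, and they can keep failing until they actually engage. The terms and questions below are invented for illustration and are not taken from the brief:

```python
# Illustrative sketch of a consent quiz gate; the questions are invented examples.

QUIZ = [
    ("Will your location data be shared with third parties? (yes/no)", "yes"),
    ("Can you delete the data collected about you later? (yes/no)", "no"),
]

def informed_consent(ask) -> bool:
    """Record consent only after every question about the terms is answered correctly."""
    while True:
        answers = [ask(question) for question, _ in QUIZ]
        if all(a.strip().lower() == correct for a, (_, correct) in zip(answers, QUIZ)):
            return True  # the consumer demonstrably engaged with the terms
        print("Not quite; please re-read the terms and try again.")

# Example usage (interactive): informed_consent(input)
```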
04:48:45.000 –> 04:48:50.000
Okay, so we’ve got two questions. Rapid answers in our last minute.
04:48:50.000 –> 04:48:57.000
Question number one, what happens if we get to the moment where we have real autonomous agents out there? We’re not at that point. I don’t know.
04:48:57.000 –> 04:49:10.000
When that will be. But what about independent actors that are moving in society and can do everything a human can do? And then question number two: what about deliberately introducing frictions, knowledge-based frictions,
04:49:10.000 –> 04:49:15.000
To know your rights. Callie, do you want to take the first one?
04:49:15.000 –> 04:49:20.000
Yeah, I’ll rapid fire do both of them real quick. So the agentic question is a great question.
04:49:20.000 –> 04:49:28.000
Part of the struggle is figuring out whether this is a novel thing or whether there are areas of law that exist around it.
04:49:28.000 –> 04:49:35.000
In the event that AI agents are able to act semi-autonomously, you’re able to give them a prompt and they go out and do things for you.
04:49:35.000 –> 04:49:41.000
At a large scale, there are laws in place about, like,
04:49:41.000 –> 04:49:52.000
Designating someone legally to act as your agent. And so it’s possible that we could fall under something like that structure, where they are empowered with very specific rights, not rights,
04:49:52.000 –> 04:50:00.000
Different legal term, sorry, but very specific actions they are permitted to do on your behalf, very specific interactions they’re allowed to do.
04:50:00.000 –> 04:50:10.000
That should all be documented and spelled out. And then as long as they’re operating within those parameters, you’d be liable for what they do if they go wrong because you gave them very explicit instructions.
04:50:10.000 –> 04:50:16.000
And if they violate those instructions, then they, or in this case, because it’s a system,
04:50:16.000 –> 04:50:25.000
Whoever developed that system and released it, if they’re going outside instructions, then they would be liable for violating the terms of what you empowered them to do on your behalf.
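A rough sketch of that documented scope-of-authority structure, with invented field names: the user spells out exactly which actions the agent may take and within what limits, and the liability question turns on whether the agent stayed inside that documented scope:

```python
# Illustrative sketch of the scope-of-authority idea; the names and the
# liability labels are simplifications, not legal advice or an actual framework.
from dataclasses import dataclass

@dataclass
class Authorization:
    permitted_actions: set   # e.g. {"initiate_return", "check_order_status"}
    spending_limit: float    # the most the agent may commit, in dollars

def who_bears_the_risk(action: str, amount: float, auth: Authorization) -> str:
    within_scope = action in auth.permitted_actions and amount <= auth.spending_limit
    # Inside the documented parameters: the user empowered the act.
    # Outside them: the system (and whoever developed and released it) exceeded its authority.
    return "user (principal)" if within_scope else "developer/deployer"

auth = Authorization(permitted_actions={"initiate_return"}, spending_limit=100.0)
print(who_bears_the_risk("initiate_return", 50.0, auth))       # user (principal)
print(who_bears_the_risk("purchase_item", 1_000_000.0, auth))  # developer/deployer
```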
04:50:25.000 –> 04:50:32.000
So that’s one possible structure we could have with agents, but everything’s squishy in law always.
04:50:32.000 –> 04:50:58.000
With the informed consent model, that is a really interesting perspective. At EPIC we’ve had a lot of issues about consent when it comes to privacy. A lot of our privacy structure in the US is set up so that people take your information in a lot of cases, and then, when questioned on it, they say, oh, you can always opt out if you want. But first of all, you didn’t know that when they collected it and didn’t get any say then.
04:50:58.000 –> 04:51:09.000
And it’s often very hard to find and exercise an opt-out. So informed consent is a much higher standard. I always love the idea of a higher standard of privacy and information control.
04:51:09.000 –> 04:51:17.000
I think challenges with that would be that there are a lot of uses where like at the initial intake of information.
04:51:17.000 –> 04:51:25.000
They’re saying they’re going to use it for one method and then once a company has it, they decide to use it for developments that maybe they didn’t even envision at the time they took that information.
04:51:25.000 –> 04:51:32.000
That’s a common practice now. I would argue it shouldn’t be. I would argue that you need to get, or you should get, new permission for every new use.
04:51:32.000 –> 04:51:48.000
Informed consent could be a way to do that, but I imagine there’s going to be very significant pushback from industry on that, because it would essentially force them to restructure the way they’ve built their businesses. Okay, rapid-fire reaction. Paul, to either of those?
04:51:48.000 –> 04:51:54.000
The short answer is I don’t know. And I’m just going to give some extreme examples here.
04:51:54.000 –> 04:52:10.000
So in our case, we’re doing outbound calls to customer service agents. We could do so much more than just the interaction. So at the very beginning of the call, we actually announce that we’re recording the call. Now, once we start recording the call, we can then analyze the agent itself and the interaction.
04:52:10.000 –> 04:52:16.000
I don’t know how many of you guys know, but in Vegas, they rate every single dealer.
04:52:16.000 –> 04:52:21.000
And the dealers, of course, there’s going to be an average, right?
04:52:21.000 –> 04:52:31.000
When a dealer pays out more, that’s going to be an issue for the house. The dealers that pay out the least get moved to the VIP tables to deal with the whales.
04:52:31.000 –> 04:52:36.000
Of course, that then is a benefit for the House. We’re kind of taking the same model here.
04:52:36.000 –> 04:52:54.000
In our training model, when we start analyzing the interaction with a customer service agent, we get to understand what the payout is, what the refund is, or what the exceptions are to policies. And when we start recording that, understanding the customer service agent and the nuances, their personality,
04:52:54.000 –> 04:53:01.000
We then get to turn that around and actually use it in our favor on behalf of our customers
04:53:01.000 –> 04:53:17.000
To effectively win. Our percentages are basically leaning towards the house, which in our case is us and the customer. So that’s sort of another benefit: enabling the little guy to have a little more leverage against big companies.
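A rough sketch of the dealer-rating analogy as applied to customer service calls: score each agent’s historical refund rate from recorded interactions, then prefer the most customer-favorable one. The data fields and the routing step are assumptions for illustration, not Returned.com’s actual system:

```python
# Illustrative sketch only: rate customer-service agents by how often they
# granted refunds in past recorded calls, then pick the most favorable one.
from collections import defaultdict

call_log = [
    {"agent": "rep_a", "refund_granted": True},
    {"agent": "rep_a", "refund_granted": False},
    {"agent": "rep_b", "refund_granted": True},
]

def refund_rates(calls):
    granted = defaultdict(int)
    total = defaultdict(int)
    for call in calls:
        total[call["agent"]] += 1
        granted[call["agent"]] += int(call["refund_granted"])
    return {agent: granted[agent] / total[agent] for agent in total}

def most_favorable_agent(calls):
    rates = refund_rates(calls)
    return max(rates, key=rates.get)

print(most_favorable_agent(call_log))  # "rep_b", who granted refunds most often
```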
04:53:17.000 –> 04:53:23.000
Okay, last comment real quick. Sorry to cut you off. We’ve got a hungry room here.
04:53:23.000 –> 04:53:33.000
20 second reaction. You’re asking me to do 20 seconds. All right. All right. So 20 seconds. Policymakers need analogies. They need offline analogies.
04:53:33.000 –> 04:53:54.000
That don’t involve technology, as a way to start the discussion. Because the discussion at the agent level is not happening the way it could. And we can analogize it to things like what happens when you go to the doctor’s office and talk to the receptionist or talk to the assistant. With regard to the question regarding Brett Frischmann, I’m going to do shameless self-promotion in 10 seconds.
04:53:54.000 –> 04:54:16.000
I had Brett on Hearsay Culture discussing that brief a few weeks ago because I think it’s a great idea. And I said this to Brett: it’s very hard to envision that the companies that are putting these contracts out are going to want to do that. But the idea of a little less efficiency and more friction in order to have knowledgeable consent makes a lot of sense.