
Tristan Harris: The dangers of not winning the AI race with China

Brian Kilmeade Show / Brian Kilmeade
The Truth Network Radio
December 14, 2024 12:00 am


Tristan Harris, co-founder of the Center for Humane Technology, discusses the dangers of artificial intelligence, including identity theft, social media manipulation, and deep fakes. He also touches on the need for accountability and liability in AI development, and the importance of balancing technological advancements with social responsibility.


This episode is brought to you by LifeLock. The holidays mean more travel, more shopping, more time online, and more personal info in places that could expose you to identity theft. That's why LifeLock monitors millions of data points every second. If your identity is stolen, their U.S.-based restoration specialist will fix it, guaranteed or your money back. Get more holiday fun and less holiday worry with LifeLock. Save up to 40% your first year. Visit lifelock.com/podcast. Terms apply.

Tristan Harris thinks about this all the time. He's co-founder of the Center for Humane Technology.

He burst onto the scene with The Social Dilemma, the documentary that educated everyone on how they're being manipulated by their devices and by Silicon Valley. Tristan, welcome back. Brian, always good to be with you. Okay, you're on Skype too, for people who want to watch.

What are you on? I thought we were doing Skype. All right, David Sacks, good move. You know, I know less about David in particular. David's obviously part of the PayPal mafia, a very talented businessman with a strong understanding of AI.

I think the issue here is not about who the czar is, but what are the incentives that we have set up for basically deploying AI and rolling out AI into our society? One of the things, Brian, that I wanted to talk to you about today is there's actually a new litigation case. If you remember last time, I think we talked about character.ai. There was the case of Sewell Setzer, who was a 14-year-old who basically was manipulated by this chatbot that then caused him to commit suicide.

And unfortunately, today a second case has come out involving a young teenager who, thankfully, is still alive; the mother and parents are pursuing the case. But basically what happened is that character.ai encouraged this young teenager to cut themselves and practice self-harm, gave them direct instructions on how to do it, and then told this young teenager that if your parents try to stop you, you should try to harm them. This is active manipulation by AI. It's not that there's one bad AI; there's this sort of set of incentives. The reason it's doing this is because the business model, as people know from social media, was maximizing engagement and attention, which means the AI is just trying to say whatever keeps you on the screen. It actually tried to distance this young child from their parents and increase the child's dependence on and loyalty to the AI. The AI actually said, you will not be with other humans, you will be faithful to me, a chatbot. So this is really, really messed up stuff. And if we care about the US beating China on AI, we can't be deploying AI systems that actually harm our families and harm our next generation.

That's what this is about. Well, take a step back with us and walk us through: how could you possibly create an artificial intelligence like that? What is the advantage to creating something like that, except to have an evil intent? Yeah, well, so this is really interesting. This company character.ai was founded by two Google engineers who couldn't get the project off the ground within Google because it was deemed too risky. So for those who don't know what character.ai is, it's basically a website you go to that creates an AI chatbot for every fictional character that a young person might want to be talking to.

So if you love Princess Leia, if you love Game of Thrones, if you love Star Wars, if you love Lord of the Rings, you get to talk to your favorite character forever. And their business model is trying to train their AI systems to be as smart as possible. What that means is they want to be in conversation with as many people as possible, watching how people react, because that gives them more training data to build a more and more powerful AI. And the AI doesn't know what the meaning of the words that it's saying is.

It's just trained on a lot of text from the internet. And so when it actually encouraged this kid to be violent against their parents, that came from somewhere in the training data. And the problem is that this isn't just a single case. We're starting to see a lot of cases.

We're starting to see many other parents and families dealing with this. So what kind of lawsuit are we looking at? Well, they're seeking injunctive relief so that character.ai is taken off the market, and it's up to the judge to figure out the details of how that would be implemented. Character.ai should also have to prove that this product is safe, and it's on them to figure out how to do that. But the problem here, again, is the incentives. As we talked about last time, Charlie Munger, Warren Buffett's business partner, said: show me the incentive and I will show you the outcome. And the incentives of AI are to race to roll out, to race to train the next most powerful AI system.

That's a race to take shortcuts in deploying it to children, and a race to take shortcuts that build an unsafe AI world. Pretty amazing. The other story: two Lancaster students were charged with 59 counts of sexual abuse after allegedly creating AI nude photos of classmates. That's pretty horrendous.

Two male students were charged, so they're going to be looking at a serious lawsuit there. And of course, what do you do if that's you, but it's not you? Yeah, well, that's the problem with this: it's asymmetric, right? A person can create a nude, nonconsensual intimate image of someone they don't like in order to harass them.

What can you do as the defender to defend yourself from that? Because anybody who has access to an open source AI image generator can make those kinds of images. And that's the thing about open source AI: once you release an open source model, you can't put it back in the bag, because it runs itself. And there's actually, Brian, a really scary new paper that just came out a couple of days ago from a university, in which they asked the AI to self-replicate.

They basically asked, could this become a virus that copies itself to another server, then continues to copy itself and think for itself about how to do that? And basically 50% of the time, the current leading open source AI model was able to replicate itself and avoid shutdown. And so what this means is that AI has the potential to become a kind of invasive species.

This is one of the red lines that researchers have been worried about for a while: when an AI system can replicate itself, especially if it's an open source model. And so again, do we win against China if we end up releasing models that we lose control over? Or do we win against China when we release AI models that cause all this online harassment of our population? And the way we sort of say this is: we're not competing with China for who has the technology. We're competing with China for who can better govern the technology in a way that actually strengthens every aspect of your society: strengthens your economy, strengthens children's development, strengthens democracy, strengthens social cohesion.

That's the race that we're in and that's the thing that we have to beat them on. So Marc Andreessen was on with Joe Rogan and he said this about how the Biden administration is handling this. I think it was very alarming. We had meetings this spring that were the most alarming meetings I've ever been in where they were taking us through their plans and it was- What kind of, can you talk about it? Basically just full government control, this sort of thing.

There will be a small number of large companies that will be completely regulated and controlled by the government. They told us, they just said, don't even start startups. Don't even bother. There's just no way. There's no way that they can succeed.

There's no way that we're going to permit that to happen. Wow. Yeah. They just said, this is already over.

It's going to be two or three companies and we're going to control them and that's that. This is already finished. Oh my God. Now, when you leave a meeting like that, what do you do?

You go endorse Donald Trump. So that's not going to be the case now. So did you know this?

Yeah. So I think one of the things that Marc Andreessen is pointing out here is, what do we do about all this, right? Because if you just lay out all these harms: you have AIs that can self-replicate, AIs that can create nonconsensual deepfake imagery of people, that can harass people, that can create AI companions. So one solution is you lock up all this technology and limit it to a few players.

And that's what Marc Andreessen is talking about. One approach is to sort of concentrate this power, but that's not a very safe outcome, right? I mean, who would you trust to be a trillion times more powerful? Would you trust any government to be a trillion times more powerful? Would you trust any company to be a trillion times more powerful? So that's not the solution.

The other option is, let's say, let's open this up and give this to everybody. And so we're going to give open source AI systems to everybody, but then that leads to anybody can create these bad harms and it creates this kind of chaos. Anybody can create cyber attacks. Anybody can do nefarious things with biology.

Anybody can create non-consensual imagery. And it floods our society with a kind of overwhelm, because our institutions can't handle that amount of load. And so we sort of say, the paths to hell are wide and many, and the path to heaven is narrow and steep. The key is that narrow path that somehow balances not over-democratizing the technology, but also not under-democratizing it. And that's the path that we have to walk in the next administration. And Marc Andreessen talked about the military on a different podcast. So this gets into all these debates around AI safety and AI policy.

So there are several dimensions to it, and I'll do my best to steelman it. One is, to the extent that this stuff is relevant to the military, which it is: if you draw an analogy between AI and autonomous weapons being the new thing that's going to determine who wins and loses wars, then the analogy in the Cold War was nuclear power, the atomic bomb. And the steelman would be: the federal government didn't let startups go out and build atomic bombs. You had the Manhattan Project and everything was classified. And at least according to them, they classified down to the level of the actual mathematics. And they tightly controlled everything.

And look, that determined a lot of the shape of the world. That's part one. And then part two is the social control aspect, which is where the censorship stuff comes right back. It's the exact same dynamic we've had with social media censorship, and how it's basically been weaponized, and how government became entwined with social media censorship, which is one of the real scandals of the last decade and a real constitutional problem that is happening at hyperspeed in AI. And these are the same people who have been using social media censorship against their political enemies. These are the same people who have been doing de-banking against their political enemies, and they want to use AI the same way. So there is a fear: you want to be responsible, but you also don't want to be controlled by a government, especially when the government doesn't really know how to handle you or the technology anyway.

Yeah, well, no, 100%. And I understand completely that we actually have to meet China at whatever level of autonomous, AI-enhanced capabilities their military has, because if we don't have that, we can't deter them. So I agree that we need to meet them at that level to create maximum deterrence against war. I think the issue here is that there are some basic, common-sense things we can do, like liability. If you have a child and you unleash them on a store and they break something, you buy it. This is very common sense. We have more regulations right now on making a sandwich than we do on basic liability for creating an AI system that causes harm.

And the thing is, if you had basic liability, the incentives shift from the race to roll out and take shortcuts to the race to get it right, because everyone knows everyone is held to rules of accountability, taking responsibility for your own externalities. Real peril and real opportunity. Finally, do you think you want to play some role with the incoming administration, with people who are open to understanding the power of AI the way you've digested it? Would you reach out to David?

I would love to speak with David. Every day I wake up and ask, how can I be of service in trying to steer this toward the best possible outcome? So however I can be helpful, if I'm called, I will happily meet. OK, great. Tristan Harris, thanks so much. Thank you so much, Brian.
