This episode is brought to you by Shopify. Forget the frustration of picking commerce platforms when you switch your business to Shopify, the global commerce platform that supercharges your selling, wherever you sell. With Shopify, you'll harness the same intuitive features, trusted apps, and powerful analytics used by the world's leading brands. Sign up today for your $1 per month trial period at Shopify.com slash tech, all lowercase.
That's Shopify.com slash tech. What's new from Apple? There's the new iPhone 16 Pro, built for Apple Intelligence. And it comes with the all new camera control, giving you an easier way to quickly access your camera tools. The new Apple Watch Series 10 has our biggest display and our thinnest design ever. And this? It's the sound of active noise cancellation, now available on one of two new AirPods 4 models.
So quiet. Check out all of the new products and new features at apple.com. You can even buy yourself something new. See apple.com for product availability updates. Apple Intelligence coming this fall. With me right now in studio, we're privileged to have back with us Tristan Harris and, meeting me for the first time, Aza Raskin, co-founders of the Center for Humane Technology, part of a special that Oprah put together to find out, in layman's terms, what's next for AI. Guys, welcome. Thank you so very much for having us. All right. Did you say good things to Aza about me or bad, Tristan?
It's all good. We had a great conversation last year, Brian. I mean, I remember it was when GPT-4, OpenAI's model, was, I think, just coming out. And we had just released this talk called The AI Dilemma that really walked through all of the risks. Most people know our work through the film The Social Dilemma on Netflix, you know, which is about... That's when I first saw you.
Right. And, you know, the whole point of this special that we did with Oprah, which came out last night on ABC and is now on Hulu for people to watch, is that Oprah actually saw this AI Dilemma talk that Aza and I gave, and she was so moved. She said, people don't understand what's coming.
I want to help the American public understand this. And so she put together this special, and she got Sam Altman, Bill Gates, FBI Director Christopher Wray, Marilynne Robinson, Marques Brownlee, and us to talk about the full range of issues that are facing us. You know, unemployment from AI sort of disrupting jobs, biological risks, you know, safety risks. And really the issue that I think we're trying to highlight is, people want to ask, is AI going to be good or is it going to be bad? Is it going to be the promise or is it going to be the peril? And the issue is actually how fast it's coming, that the downsides of AI are kind of overwhelming our society, you know, overwhelming us with deepfakes, nudification apps in schools. Already.
Yeah, already. So first off, how do you guys know each other? Oh, yeah.
We've known each other for almost 20 years now. Yeah. And we both share a deep passion for understanding how human beings work and then how technology intersects with it. So my father created the Macintosh project at Apple.
Wow. So that iPad sitting there, you know, the lineage of all of that, my father started. And, you know, Tristan actually worked very early on at Apple. And one of the things we think about is, like, what was the Macintosh? The Macintosh was about how do you take a complex computer and make it fit how human beings work? And what we're trying to do now is sort of a similar thing: take something very complex, AI, and help humans understand how it's going to affect the world. And we both do a lot of deep thinking. Yeah.
It requires that. Unfortunately, AI is so complex. Brainstorming.
But we're trying to simplify it for people because we have to get our head around it. You know, Aza and I are both builders. We're both tech entrepreneurs. We've raised venture capital. We've built tech companies. But we're concerned because a lot of our friends, you know, built the earlier generation of technology, which is social media.
You know, our classmates in college, my classmates at Stanford. And we want to make sure that we don't make the mistakes with AI that we made with social media. Because, Brian, you know, we were talking backstage for a moment, and, you know, people always say, but if we don't build AI as fast as possible, what about China? Yeah. Right. That's the answer.
I actually just said that to you. Right. And this is the fundamental question: we did beat China to social media, but did that make America stronger or did it make us weaker? Well, financially, it made us stronger.
You have Apple and you have Bill Gates's, you know, Microsoft, two of the most powerful companies in the world. So you could say it made us stronger, but it might have made us weaker and more vulnerable. Correct. Correct.
Certainly to attacks. Right. Well, so it's the business model of social media, it's not the raw technology, it's the business. People often say, is it the Internet that's the problem?
And we say, no, it's not the Internet that made us a more addicted, distracted, polarized, sexualized, harassed society. It was the business model of engagement for profit. What is the business? How much have you paid, Brian, for your TikTok account or your Facebook account or your YouTube account or your Twitter account? Nothing.
So how are they worth, like, a trillion-dollar market cap? Well, it's because they're selling your attention, and they have to compete for: how do I addict you? How do I make you scroll? In fact, actually, Aza, do you want to tell the story?
Sure. So I invented infinite scroll. So that thing on your phone where you keep scrolling, and it just keeps loading more and more and more. That was your idea? That was my idea. Now, I invented it in 2006. That was before social media, and I made it to help people be more efficient. And then I had to watch as that invention got sucked up by social media and used to addict people, polarize people, cause people to doom scroll.
It now wastes something like half a million human lifetimes every single month. And what I learned from that... Can you back up? Can you say that one more time? Yeah. What did you say, dead scroll? Doom scrolling. Doom scrolling.
So what are you saying? Doom scrolling is, you know, that thing where people sit on TikTok or Twitter and you just can't stop scrolling, because there's just so much. Doom scrolling, because there's just so much bad news. You're like, I really should stop, but I got into this trance, and I woke up, you know, five hours later, like, why do I feel like crap? It's because I've been doom scrolling. Okay.
And then you followed up with? Well, that it wastes a huge amount of human lifetimes per month. Half a million human lifetimes per month go into just scrolling.
And what I learned from that is good intentions just aren't enough. We need to build technology in a different way, because the way we were building social media was sort of like a Jenga tower: the social media companies were giving new benefits to society at the cost of undermining things like a shared sense of reality. And now we're at risk of AI doing the same thing, that the companies are in a race to build more and more AI benefits and get them out into market as quickly as possible. And so they make benefits like the ability for anyone to make super cool AI art or generate videos or audio, but it comes at the expense of pulling out the block of people knowing what's true.
It's a lot to think about, because normally it's very simple. I'm looking to make a product, separate from tech. I'm looking to make a product that you want.
And then I want you to buy a better one, or I want you to buy a replacement, whether it's a cartridge for a Keurig machine. I want you to love my coffee. And that's good. That's free market.
I want you to love the coffee. Then I want you to buy the little cups. Yeah. Maybe the cups are more aligned there, because you want to keep buying coffee. Right. They want to keep selling you coffee.
That makes total sense. But you're saying that you guys approach this, you smart guys, the way your predecessors, your dad, approached it: I'm just going to try to make something that people are going to want to keep using over and over again, to maximize the success of this company.
Make as much money as possible, employ as many people as possible and move forward. And you're saying, wait a second. Yeah. Maybe we need different principles when we're approaching this rather than looking at it as just another vacuum cleaner or coffee machine.
That's right. So you're demanding people get that free market gene out of their body for this engineering? It's about what we are selling, Brian.
Right. So, like, do we want to sell our shared sense of reality? Do we want to sell kids' mental health? Right now, you know, your 401(k) account might have Snapchat in it, but the more Snapchat's stock price goes up, the weaker the mental health of basically all of these young people, because Snapchat's main user base is, like, teens and preteens.
And their business model is not to help kids develop in a healthy way and be like another parent or a mentor. You're thinking 360. You think of this whole thing. Yes, exactly. Because we're competing with China in a way that's about the overall health and strength and coherence of our society. Interesting.
And we talked last time. And they've already reined it in. Yes, exactly. Because, I mean, two years ago, I went on 60 Minutes and did that piece on TikTok and how, in China, domestically, they regulate TikTok. They get the digital spinach version of TikTok. When you go there and open it up, you get education videos: who won the Nobel Prize, here's a patriotism video for Xi Jinping.
Here's financial advice for how to become more wealthy. And if you open up TikTok in the United States, you don't get the same version. We get the digital fentanyl version. We get, you know, basically the most amusing-ourselves-to-death kind of race to the bottom.
You know, that kind of stuff. And that's going to dumb down our society over time. And that's why we have to fix this.
Let me ask you, how come they figured that out? Did we figure it out and ignore it before you came out with The Social Dilemma? And did they figure it out and take stock of it? The addiction?
How it damages you mentally and socially? What's happening is those parts of our society, the health of those parts of society don't show up anywhere on the company's balance sheet. So they're doing what makes sense for them, which is they're in a race to get to as many users as possible for market dominance.
And they will take whatever shortcuts are required to win that winner-take-all game. But they seem to care more about the mental health of their people. Oh, you mean China? Sorry, you were talking about the companies. I was talking about the companies in the US. But China seems to care more about the mental health of their people.
Am I right? Well, I think that they care about the mental health and development of their young population. And so they realize that they need to regulate their social media products. And, you know, we're not doing that. So they regulate TikTok to say you have to show educational videos, and we don't have anything like that. And I'm not saying we should do it the China way. But if we just throw our hands up and say, whatever makes the most money, put it in front of your 13-year-old, let's point a supercomputer at their brain.
So when they flick their finger up like this, we just activated a supercomputer to figure out the trashiest piece of material that will keep them scrolling the longest. You run society through that for 10 years, and you end up with a workforce that's not going to be healthy. Employers are not going to be able to employ the next generation.
This is already happening. What percentage of the engineers doing what you're doing in Silicon Valley or wherever they're located have this much concern that you seem to have about our mental health and where this is heading? What percentage?
That's a good question. I don't actually know the percentage, but what we do know. Are you rare?
Are you too rare? I think we are rare in the ability to speak clearly and publicly about it. But when we are talking to people inside of the companies, they will say, we are concerned. We just can't steer our companies. Can you on the outside please articulate what we on the inside feel?
Because they're caught, right? Mark Zuckerberg could be a good guy. He could be the nicest guy in the world, but he's trapped in a business model in which he's already anchored on the stock price that's dependent on as many people scrolling Instagram for as many hours a day as humanly possible.
And he can't just not do that. He's trapped. So he's always asking for regulation on some level. Well, yes, he certainly doesn't need money. No. Well, exactly.
He doesn't need more money. And at the end of the day, it's like, what are we here to do? What is our legacy? What is the world we're leaving behind? What is the health of the country that we are creating?
If he's a true patriot and cares about the strength and health of the United States, then we should be saying we need laws that actually govern technology, so that all of this technology that's affecting and constituting our minds and our psychology is benefiting us, not harming us. I want to take a short time out and come back, so we have some time on the other end to talk about what's next for AI, where we're at and where we're going. Right. Great.
Back in a moment. Expanding your knowledge base. It's the Brian Kilmeade show. From the Fox News podcast network, I'm Janice Dean, Fox News senior meteorologist. Be sure to subscribe to the Janice Dean podcast at Fox News podcast dot com or wherever you listen to your podcasts.
And don't forget to spread the sunshine. Do you remember the first time that someone showed you evidence of AI being used to commit a crime? And what was your reaction? I've been hearing about AI for a long time, even before I became FBI director. But one of the first memories I have of dealing with it in this job was when I was in a conference room, and a bunch of our folks got together to show me how AI-enhanced deepfakes can be created. And they had created a video of me saying things I had never said before and would never say. And I was staring at this video of myself, and I found it incredibly convincing. And believe me, it caught my attention. All I could say was, wait, that's not me.
I never said that. What is this? Tristan Harris here, Aza Raskin, still with us, co-founders of the Center for Humane Technology. They're part of Oprah's special that's now on Hulu. But they were kind enough to come into our studio, and we'll talk to them next week on One Nation about AI and where we're heading.
I need nine hours and it wouldn't be enough. But, Aza, your response to the FBI director's reaction? That's an average, everyday American response to something with that power. What are you feeling? Yeah, well, it's exactly right. And it sort of shows the fundamental, uncomfortable truth of AI: that the promise of AI and the peril of AI cannot be separated. All of the CEOs will constantly say, oh, we want to get all of the benefits of AI, but without the risks of AI.
And it turns out that's just technically impossible to do, because the same technology that lets you, say, instantly edit a family photo on your phone is the technology that enables deepfake nudes of teen girls across America's schools. The same technology that lets us develop new antibiotics is the same technology that can create super pandemics. And so the uncomfortable truth is, as the companies are in a race to deploy more and more powerful AI, it continually undermines the foundations of our society. And because there is no accountability, the companies are not liable for any of these sort of downstream harms, it means that they aren't incentivized to try to make us safer at the same time as they create new benefits.
What are you doing about it, Tristan? What are you recommending? Yeah, so on the Oprah special, which I highly recommend people watch, or our AI Dilemma talk, we're recommending... you know, we're not doing anything right now in terms of laws, so there's a lot that we can do. It can start really simple, like liability, accountability, right? If you break it, you bought it. In loco parentis: if your kid breaks something in a store, you're responsible for what the kid does. So AI is like this little kid that we birthed, and if the kid is starting to cause some havoc, we need some laws where the companies, OpenAI, Google, Microsoft, are accountable for any harms that are created. So, a basic liability framework. We have one at our nonprofit, the Center for Humane Technology.
It's on our website if people want to check it out. Also, whistleblower protection. So right now, as you know, people often say, but the government can't regulate AI.
They don't have any AI expertise. That's exactly right. In lieu of that, we should have whistleblower protections, because the people who might be closest to where there's some risk or harm... we've got to protect the people inside the companies who are saying, hey, wait, there are some red lights blinking.
We've seen the FBI even say they actually have not been protecting their whistleblowers, and we've had them in front of Congress talking about how their lives have been ruined. Yeah. Real quick on TikTok. Let's take a step back. Yep. You were the first one to say we should ban it. There's not even a question.
Give me an example. You want to start it, Tristan? Yeah. Well, so in November 2022, again, we went on 60 Minutes and talked about how this is ridiculous.
Would you allow... imagine you're in 1968, right before the election, and the Soviet Union ran television programming for the entire Western world during the Cold War, leading into an election. That's what TikTok is. They say they're not. They say TikTok USA is different. But now there's actually hard evidence. How?
Rutgers University. So Rutgers University did this study where they looked at, well, let's just see what kinds of hashtags trend on Instagram in the US versus on TikTok, which is Chinese-owned. And for almost everything, you know, the hashtags are roughly the same on both, except for the topics that are useful for the CCP. And there, those topics get much more virality.
Many more people see things that are... The topics would be, for Uyghurs, for example. Uyghurs, or Israel and Gaza stuff, everything that... Anti-Israel.
Anti-Israel. All of the things around Ukraine. So all the narratives that China wants to see amplified in the world, they have the power to twist the knob and make it so that people see the perspectives that they want people to see. So now when you look around the world, you look in the US and you look at what's happening in Western countries, and you say, you know, most people are getting their information from TikTok.
It's the most powerful and most dominant social media app. We shouldn't be allowing this. This is ridiculous.
And we were talking about this. OK, why haven't we banned it yet? Well, as you said, going into this election, if you want to reach young people, all the politicians are trapped. They have to stay on the platform to try to reach young people, even though they know they want to ban it. I mean, President Biden did the executive order to ban TikTok, but he also joined TikTok, I think, a few weeks later. Same with Trump. He wants to do it too.
Exactly. Guys, we just scratched the surface, but we'll come back. We'll talk about it on Saturday.
The Oprah special is on Hulu. And man, your concern has me concerned, but I appreciate you taking action. We got to do a lot more. Thank you so much, Brian. Clarity creates agency. Thank you.