Hi everyone, Brian Kilmeade here. Thanks so much for listening to this special holiday edition of the Brian Kilmeade Show. It's where we have a chance to look back at some of our most important interviews and bring them forward again. It's what I really look forward to doing and reliving, especially when it comes to cyber technology, when it comes to cybersecurity, when it comes to social media and AI.
You have my attention. This hour, we're going to be joined by Teresa Payton, as well as Tristan Harris and Aza Raskin. They're buddies, and the latter two are co-chairs of the Center for Humane Technology. They saw some of the dangers with social media. They saw it coming. Silicon Valley saw it coming.
Maybe not as bad as it ended up being, you know, as addicting as it ended up becoming, but it is. And now they want to fix it and make sure we get AI right. So here's my interview with Tristan Harris and Aza Raskin. Let's listen. With me right now in studio, we're privileged to have back with us Tristan Harris.
And meeting me for the first time, Aza Raskin. They're co-founders of the Center for Humane Technology, part of a special that Oprah put together to find out, in layman's terms, what's next for AI. Guys, welcome. Thank you so very much for having us. Did you say good things about me, Aza, or bad, Tristan? All good things.
It's all good. No, we had a great conversation last year, Brian. I mean, I remember it was when AI, GPT-4 was, I think, just coming out of OpenAI's model. And we had just released this talk called the AI Dilemma that really walked through all of the risks. Most people know our work through the film The Social Dilemma on Netflix, you know, which is about- That's when I first saw you.
Right, right. And, you know, the whole point of this special that we did with Oprah that came out last night on ABC, it's now on Hulu for people to watch, is actually Oprah saw this AI Dilemma talk that Aza and I gave and she was so moved. She said, people don't understand what's coming.
I want to help the American public understand this. And so she put together this special, and she got Sam Altman, Bill Gates, FBI Director Christopher Wray, Marilynne Robinson, Marques Brownlee, and us to talk about the full range of issues that are facing us. You know, unemployment from AI disrupting jobs, biological risks, safety risks. And really the issue that I think we're trying to highlight is, people want to ask, is AI going to be good? Is it going to be bad? Is it going to be the promise or is it going to be the peril? And the issue is actually how fast it's coming, that the downsides of AI are kind of overwhelming our society, you know, overwhelming us with deepfakes, nudification apps in schools. Already.
Yeah, already. So first off, how do you guys know each other? Oh, yeah.
We've known each other for almost 20 years now. Yep. And we both share a deep passion for understanding how human beings work and then how technology intersects with it. So my father created the Macintosh project at Apple.
Wow. So that iPad sitting there, you know, the lineage of all of that, my father started. And, you know, Tristan actually worked very early on at Apple. And one of the things we think about is, like, what was the Macintosh? The Macintosh was about how do you take a complex computer and make it fit how human beings work. And what we're trying to do now is sort of a similar thing. It's like, take something very complex, AI, and help humans understand how it's going to affect the world.
And we're both... Some deep thinking. Yeah.
Well, it requires it, unfortunately. AI is so complex. Brainstorming, right?
But we're trying to simplify it for people because we have to get our head around it. You know, Aza and I are both builders. We're both tech entrepreneurs. We've raised venture capital. We built tech companies. But we're concerned because a lot of our friends, you know, built the earlier generation of technology, which is social media.
You know, our classmates in college, my classmates at Stanford. And we want to make sure that we don't make the mistakes with AI that we made with social media. Because, Brian, you know, we were talking backstage for a moment that, you know, people always say, but if we don't build AI as fast as possible, what about China? Yeah. Right.
That's the ultimate... I actually just said that to you. Right. And this is then the fundamental question, which is: did we beat China to social media? But did that make America stronger, or did it make us weaker? Well, financially it made us stronger.
We got two of the, you know, you have Apple and you have Bill Gates's, you know, Microsoft, two of the most powerful companies in the world. So you could say it made us stronger, but it might've made us weaker and more vulnerable. Correct? Correct.
Certainly to attacks. Right. Well, so the business models of social media, it's not the raw technology, it's the business. It's not the internet. People often say, is it the internet that's the problem?
And we say, no, it's not the internet that made us, that kind of made us a more addicted, distracted, polarized, sexualized, harassed society. It was the business model of engagement for profit. Because what is the business? How much have you paid, Brian, for your TikTok account or your Facebook account or your YouTube account or your Twitter account? Nothing.
How are they worth like a trillion dollar market cap? Well, it's because they're selling your attention and they have to compete for how do I addict you? How do I make you scroll? In fact, actually Aza, you want to tell the story?
Sure. So I invented infinite scroll. So that thing on your phone where you keep scrolling, and it just keeps loading more and more and more. That was your idea? That was my idea. Now, I invented it in 2006. That was before social media, and I made it to help people be more efficient. And then I had to watch as that invention got sucked up by social media and used to addict people, polarize people, cause people to doom scroll, and now waste something like half a million human lifetimes every single month. And what I learned from that...
Can you back up? Can you say that one more time? You said, what was it, dead scroll? What were you saying? Doom scrolling. Doom scrolling.
So what are you saying? Doom scrolling is, you know, that thing where people sit on TikTok or Twitter and you're just like, you can't stop scrolling because there's just so much bad news. You're like, I really should stop. But I, and I, but I got into this trance and I like woke up 10, you know, five hours later and like, why do I feel like crap? It's like I've been doom scrolling. Okay.
And then you followed up with, yeah. That it wastes a huge amount of human lifetimes per month, half a million human lifetimes per month goes into just scrolling. And what I learned from that is good intentions just aren't enough, that we have to build technology in a different way. Because the way we were building social media was sort of like a Jenga tower, where the social media companies were giving new benefits to society at the cost of undermining things like a shared sense of reality. And now we're at risk of AI doing the same thing, but the companies are in a race to build more and more AI benefits to get out into market as quickly as possible. And so they make benefits like the ability for anyone to make super cool AI art or generate videos or audio, but it comes at the expense of pulling out the block of people knowing what's true.
It's a lot to think of, because it's very simple. I'm looking at a product separate from tech. I'm looking for you to get a product that you want, and then I want you to buy a better one, or I want you to buy a replacement, whether it's a cartridge for a Keurig machine. I want you to love my coffee, and that's good. That's free market. I'm looking, I want you to love the coffee, and then I want you to buy the little cups. The interests are more aligned there, because you want to keep buying coffee and they want to keep selling you coffee.
That makes total sense. But you're saying that you guys approach this, you smart guys approach this, and your predecessors, your dad, approached this: I'm just going to try to get something that people are going to want to keep using over and over again, to maximize the success of this company, make as much money as possible, employ as many people as possible and move forward. And you're saying, wait a second, maybe we need different principles when we're approaching this, rather than looking at it as just another vacuum cleaner or coffee machine.
That's right. So you're, and you're demanding people get that free-market gene out of their body for this engineering? Well, it's about what are we selling, Brian, right? So, like, do we want to sell our shared sense of reality? Do we want to sell kids' mental health? So right now, you know, your 401k account might have Snapchat in it, but the more Snapchat's stock price goes up, the weaker the mental health of basically all of these young people, because Snapchat's main user base is teens and preteens, and their business model is not to help kids develop in a healthy way and be like another parent or a mentor.
360. You think of this whole thing. Yes, exactly.
Because we're competing with China in a way that's about the overall health and strength and coherence of our society. Interesting. And we talked last time. And then they've already reined it in. Yes, exactly. Because we talked, I mean, two years ago, I went on 60 Minutes and did that piece on TikTok and how in China, domestically, they regulate TikTok. They get the digital spinach version of TikTok. When you go there, you open it up and you get education videos, who won the Nobel Prize. Here's a patriotism video for Xi Jinping.
Here's financial advice for how to make you more wealthy. And if you open up TikTok in the United States, you don't get the same version. We get the digital fentanyl version. We get, you know, basically the most amusing-ourselves-to-death kind of race-to-the-bottom stuff. And that's going to dumb down our society over time.
And that's why we have to fix this. Well, let me ask you, how come they figured that out? Did we figure it out and ignore it before you came out with The Social Dilemma? And did they figure it out and take stock in it?
The addiction, the way it damages you mentally and socially? Well, what's happening is those parts of our society, the health of those parts of society, don't show up anywhere on the companies' balance sheets. So they're doing what makes sense for them, which is, they're in a race to get to as many users as possible for market dominance.
And they will take whatever shortcuts are required to win that winner take all game. But they seem to care more about the mental health of their people. Oh, you mean China? Sorry, you were talking about the companies in the US. But China seems to care more about the mental health of their people.
Am I right? Well, I think that they care about the mental health and development of their young population. And so they realize that they need to regulate their social media products. And, you know, we're not doing that.
So they regulate TikTok to say you have to show educational videos. And we don't have anything like that. And I'm not saying we should do it the China way. But if we just throw our hands up and say, whatever makes the most money to put in front of your 13-year-old, let's point a supercomputer at their brain...
So when they flick their finger up like this, we just activated a supercomputer to figure out the trashiest piece of material that will keep them scrolling for the longest. You run society through that for 10 years, you end up with a workforce that's not going to be healthy. Employers are not going to be able to employ the next generation.
This is already happening. What percentage of the engineers doing what you're doing in Silicon Valley or wherever they're located, have this much concern that you seem to have about our mental health and where this is heading? What percentage?
That's a good question. I don't actually know the percentage. But what we do know, are you rare?
Are you too rare? I think we're rare in the ability to speak clearly and publicly about it. But when we are talking to people inside of the companies, they will say, we are concerned. We just can't steer our companies. Can you on the outside, please articulate what we on the inside feel?
Because they're caught, right? Like, Mark Zuckerberg could be a good guy. He could be the nicest guy in the world. But he's trapped in a business model in which he's already anchored on a stock price that's dependent on as many people scrolling Instagram for as many hours a day as humanly possible. And he can't not do that. He's, he's trapped. So he's asking for regulation on some level? Well, yes. He certainly doesn't need money. No. Well, exactly.
He doesn't need more money. And at the end of the day, it's like, what are we here to do? What is our legacy? What is the world we're leaving behind? What is the health of the country that we are creating? If he's a true patriot and cares about the strength and health of the United States, then we should be saying we need laws that actually govern technology, so that all of this technology that's affecting and constituting our minds and our psychology is benefiting us, not harming us. I want to take a short time out, come back, so we have some time on the other end to talk with you guys about what's next for AI, where we're at and where we're going.
Back in a moment. Celebrating the new year in style: it's the best of The Brian Kilmeade Show. The first time that someone showed you evidence of AI being used to commit a crime, what was your reaction? I've been hearing about AI for a long time, even before I became FBI director. But one of the first memories I have of dealing with it in this job was, I was in a conference room, and a bunch of our folks got together to show me how AI-enhanced deepfakes can be created. And they had created a video of me saying things I had never said before, would never say. And I was staring at this video of myself, and I found it incredibly convincing. And it really caught my attention, because I kept saying, wait, that's not me.
I never said that. What is this? Tristan Harris here, Aza Raskin, still with us, co-founders of the Center for Humane Technology, part of the Oprah special that's now on Hulu. But they were kind enough to come into our studio, and we'll talk to them next week on One Nation about AI, where we're heading. I'd need nine hours and it wouldn't be enough. But Aza, your response to the FBI director's response: that's an average, everyday American's response to something and the power of it. Your feeling? Yeah, well, it's exactly right. And it sort of shows the fundamental uncomfortable truth of AI, that the promise of AI and the peril of AI cannot be separated. All of the CEOs will constantly say, oh, we want to get all of the benefits of AI, but without the risks of AI.
And it turns out that's just technically impossible to do, because the same technology that lets you, say, instantly edit a family photo on your phone is the technology that enables deepfake nudes of teen girls across America's schools. The same technology that lets us develop new antibiotics is the same technology that can create super pandemics. And so the uncomfortable truth is, as the AI companies are in a race to deploy more and more powerful AI, it continually undermines the foundations of our society. And because there is no accountability, the companies are not liable for any of these downstream harms, it means that they aren't incentivized to try to make us safer at the same time as they create new benefits.
What are you doing about it, Tristan? So what are you recommending? Yeah, so on the Oprah special, which I highly recommend people watch, or our AI Dilemma talk, we're recommending... you know, we're not doing anything right now in terms of laws, and there's a lot that we can do. It can start really simple, like liability, accountability, right? If you break it, you bought it. In loco parentis, if your kid breaks something in a store, you're responsible for what the kid does. So AI is like this little kid that we birthed. And if the kid is starting to cause some havoc, we need some laws where the companies, OpenAI, Google, Microsoft, are accountable for any harms that are created. So, a basic liability framework. We have one in our nonprofit, the Center for Humane Technology.
It's on our website, if people want to check it out. Also, whistleblower protection. So right now, as you know, people often say, but the government can't regulate AI. They don't have any expertise.
That's exactly right. In lieu of that, we should have whistleblower protections, because the people who are closest to where there's some risk or harm are inside the companies. We've got to protect the people inside the companies who are saying, hey, wait, there are some red lights flashing. We've seen the FBI even say that they actually have not been protecting their whistleblowers. And we've had them in front of Congress talking about how their lives have been ruined. Yeah.
Real quick on TikTok. Just to step back: you were the first ones to say we should ban it. There's not even a question. But give me an example.
Yeah. Well, so in November 2022, again, we went on 60 Minutes and talked about how this is ridiculous.
Would you allow... imagine you're in 1968, right before the election, and the Soviet Union ran television programming for the entire Western world during the Cold War, leading into an election. That's what TikTok is. They say they're not. They say TikTok USA is different.
But now there's actually hard evidence. So Rutgers University did this study where they looked at, well, let's just see what kinds of hashtags trend on Instagram in the US versus on TikTok, from China. And for almost everything, the hashtags are roughly the same for both, except for the trends that are useful for the CCP. And there, those trends get much more virality. Many more people see things on those topics, the Uyghurs, for example, or, you know, Gaza, everything that's anti-Israel, all of the things around Ukraine. So all the narratives that China wants to see amplified in the world, they have the power to twist the knob and make it so that people see the perspectives that they want people to see.
So now when you look around the world, you look in the US and you look at what's happening in Western countries, and you say, you know, most people are getting their information from TikTok. It's the most powerful and most dominant social media app. We shouldn't be allowing this. This is ridiculous. And we were talking about this. Okay, why haven't we banned it yet? Well, as you said, going into this election, if you want to reach young people, all the politicians are trapped, where they have to stay on the platform to try to reach young people, even though they know they want to ban it. I mean, President Biden, you know, did the executive order to ban TikTok, but he also joined TikTok, I think, a few weeks later.
Same with Trump. You want to do it too. Exactly. Guys, we just scratched the surface, but we'll come back. We'll talk about it on Saturday. Great.
The Oprah special is on Hulu. Yeah. And, man, your concern has me concerned, but I appreciate you taking action.
We got to do a lot more. Thank you so much, Brian. Clarity creates agency.
Thank you. If you're interested in it, Brian's talking about it. You're with Brian Kilmeade. We've seen the explosion of some trends that have existed before, but not to this extent. Disinformation.
You know, Elon Musk is, I don't know how many hundreds of billions he has. He has been the director of misinformation. Two billion views of his spreading of disinformation, not just about the economy, but about immigrants, about minorities. We've never seen disinformation at this scale. Next, we have never seen to this scale the agitation of three trends that have been very deeply embedded in American history.
Misogyny, racism, and xenophobia. You know, the first anti-immigrant law was 1798, and Donald Trump wants to revive it. This guy's a loser. Alan Lichtman is somebody who prides himself on predicting every victory and had Trump losing. So he's wrong again.
He had Trump losing the first time, too. Teresa Payton served as the first female chief information officer for the White House during President Bush's administration; CEO and founder of Fortalice. Fortalice. That's right.
I always say that wrong. Fortalice Solutions. And author of Manipulated: Inside the Cyber War to Hijack Elections and Distort the Truth. So, Alan Lichtman talks about misinformation. Or is it information that they just don't like?
That's one of the keys. Elon Musk deserves a lot of credit for exposing a lot of the misinformation in the Twitter Files, which people found abhorrent. Did that whole thing and the exposure surprise you, the length and breadth in which it was taking place? It was good to see it validated. My gut told me there must have been, you know, a lot of conversations, because we don't have transparency on who is the arbiter of truth when it comes to misinformation, disinformation. You hit the nail on the head, Brian.
A lot of times when somebody labels something misinformation or disinformation, it's because they don't like it, not necessarily that it's not the truth. And so it was stunning to see how prevalent it was. And I think, you know, Mark Zuckerberg kind of raised the white flag and said, well, looking back, maybe we shouldn't have done what we did. Yeah, that social media is a huge issue.
I think we're getting to the point where we have an FCC director, if he gets confirmed, that's really going to go to town on that. We'll see where Google stands; other countries have done the same thing. The amount of hacking going on, led by China, but not solely China, with North Korea, of course, and Iran, too, is noteworthy. China-linked hackers stole wiretap data from telcos, per the FBI and CISA; that was last week. A week ago, too, we learned about the Library of Congress getting hacked. Tell me about the significance, and do you know roughly how the Chinese are doing it?
Yeah, so it's very interesting. So we've got China, Russia, North Korea, Iran, but they're not the only ones playing in this game. And for the record, they all deny they do it. But I always say, if you don't condone it, then condemn it and show us you're putting people in jail for hacking into our systems, which they don't do. But then you also have cybercriminal syndicates doing this as well. But it's very prevalent, and it goes back to, we really need to have the mindset of, just assume technology will fail us. Just assume the wrong people will get access to information. So why are we collecting it to begin with? And why aren't we storing things differently? The technology exists today to make things anonymized, to make access to things tokenized.
Can you tell me what that means? So in other words, only I would know where my secret items are. They're not gonna be labeled secret items.
Exactly. Or, let's say, so for example, the investigation is still ongoing. We don't know exactly, at least it hasn't been revealed to us, who the hackers were, what they accessed, what they listened to. Did they listen to everyday conversations, political campaign conversations? They did mention in the news reports that they could have potentially accessed the law enforcement surveillance database, the wiretapping. So if you're going to assume that bad guys are going to come after that data, that they're going to be successful in getting that data, then it shouldn't have Brian and Teresa's conversation labeled with who we are as private citizens, right? And so that's where, first of all, should we be collecting it in the first place? And then secondly, if we are, in the name of law enforcement or national security or homeland security, then the second thing is, assume it will be stolen.
So how are we making the data, if it is stolen, useless and anonymized so that it's not something that could be used against people? Right. Where would you grade our cyber defense? Where would I grade us today based on what's coming at us?
A C minus. Scary. Is that because we don't have the best people there? Is that because technically we're not keeping up?
We have great people. We spend a lot of money and a lot of time and attention on this. We're at a point in sort of the maturation of technology innovation that we need to do a zero-based budget on this. And we need to completely reimagine how we think about cybersecurity. Right now, we're just taking kind of last year's threats, looking at what we did and saying we need to do more of these good things. But it's time to take a step back and say, let's reimagine. If we were to build how we think about cybersecurity and building that into the very fabric of how we conduct business as a nation, as our citizens who pay taxes, how would we do it from scratch? And we really do need to reimagine that.
That would certainly help, because I think people have to be open to it. There just can't be any ramp-up time. It's got to be zero to 1,000 when Trump takes over in a few weeks. When you go to the government, though, you're making a major pay cut. If you're, like, an elite cyber tech, you're cyber-competent and you excel in that area, people want to pay a lot for your skills.
If you go to government, you're never going to be rich. It's got to be like service. And is it possible to get the best in that sense of service?
I'd like to think it is because I took a pay cut to do my time at the White House under President George W. Bush. And it was an incredible honor. And it's helped your whole career, right?
Absolutely. And so what I think people need to look at is there's different ways you can serve. Many people in my family serve in the U.S. military and law enforcement. I got my opportunity to serve the country and to give back to the country. And I still do in a variety of different ways behind the scenes. And so what I would tell anybody sitting on the sidelines thinking, gosh, this is going to be a pay cut.
Serving in any administration and giving your skills back to the country is your patriotic duty, and it will pay off for you in the long run. So, China's certainly a threat. Also, with the Library of Congress: why would someone want to hack the Library of Congress? I don't think people realize the amount of research that goes on between Congress and the Library of Congress.
Over 76,000 requests were handled for the Hill last year. And so the back-and-forth communications from January through September were accessed. Again, we've not been told by whom. We have been told that the issue that allowed that access to happen unmitigated for several months has been mitigated. The question still remains, what types of research requests did they see, and what kind of data points did they see sent back to the Hill? Do you find it insane that we still are allowing TikTok in this country? You know what's interesting about TikTok is, so I'm not a huge user of TikTok, but since it was banned, I decided to at least get an account and take a look. And they spent billions of dollars responding to CFIUS, the Committee on Foreign Investment in the United States, and working with Oracle, which has an American origin story, to separate Americans' data and information onto the Oracle infrastructure. I would like to see, did we do an unannounced inspection of TikTok?
And what were the findings of that unannounced inspection, to say, did they actually satisfy CFIUS, to the spirit of the law? Yeah, if you talk to Mike Gallagher, who headed up that special task force on China, he said they absolutely have already hacked into some journalists, to be able to take their material, and they apologized. And one person that sits on the board is a member of the Communist Party in their government, sits right by President Xi. The back door is wide open; it just so happens their algorithm is extremely attractive to Americans right now. And a lot of the stuff they learn... you have a lot of anti-Israeli stuff on there, almost no pro-Israeli things, pro-Iran.
I mean, you can't even say they don't have their hands on the newsfeed. Yeah, you have to wonder, from an engineering conscious-and-unconscious-bias standpoint, who's tweaking the algorithms there and how the algorithms are working. And so the question remains, will there be an American investor who steps up and says, well, if it doesn't pass the CFIUS sniff test, I'll buy it. And so the question remains, will somebody be able to get the funds together, and would the parent company actually sell it? Yeah, it seems like the president's moving off that now, because TikTok's so popular and he's so down on Silicon Valley.
Because they totally turned on him in 2020; they banned him. And he's saying, well, at least I can get on and I can get my followers. But I hope cooler heads prevail, and he starts understanding that China will get access to a lot of American data, be able to manipulate Americans. Can we talk about AI?
Sure. Where do you think that is heading? I mean, when it has the smartest people in the world like Altman and Musk intimidated about its potential, what should we know?
Well, I mean, a couple of things. One is we really have to understand what are the ethical guardrails around generative AI, predictive analytics using AI and AI algorithms. Who, for example, is looking at the curation of the actual data that's in these large language models?
Who's protecting it? And just because we can do something with AI, does it mean that we ethically should? Should we be making decisions in a black box on certain things, for example?
And again, who's going to be the arbiter of truth and governance of the ethics of AI? We really don't have the right standards in place. And I'm very concerned, because we got it so absolutely wrong on social media. You know, the impact on our children, for example; the impact of, we were just talking about, algorithms suppressing some things and promoting other things. And we got it so wrong on social media. We have the chance to get it right, right now. But I'm seeing the technology far outpacing our lawmakers' ability to create those governance standards.
Right. I mean, we see a whole generation that has no communication skills, has been stuck on their phones, cyberbullied. We're seeing the effects now in countries around the world.
I think the UK is the latest to say no social media until you're 16. We see Australia doing the same thing. And we see it because a lot of these companies have more control, maybe, than even the parents.
On AI, I thought it was interesting we had Joe Lonsdale on from Palantir. And he said he worries about the guardrails being too tight from government that we need to be creative and we need to be able to keep up with China. And right now that they feel that the guardrails are so tight in Congress that might hinder growth. Your thoughts?
I think that's interesting. I'd love to sit down and have coffee with him and unpack that a little, to see where he feels the guardrails are already in place and hindering innovation. Or on their way to being too tight. Or on their way, right. And he could be right about that, based on legislation that might be in committee.
There might be an overreaction, because it's hard to explain how the black box works, and how you're encountering engineering confirmation bias and unconscious bias. So he could be seeing pieces of legislation that are still in committee and being concerned about those. Right, well, it's certainly a changing landscape. Theresa Payton is going to stick around for a little while longer. She's the author of Manipulated: Inside the Cyberwar to Hijack Elections and Distort the Truth. You're listening to the Brian Kilmeade Show, don't move. Theresa Payton is back with us, cyber expert, who served as the first female chief information officer at the White House during the Bush days.
She's still focused on that; she does it for a living. When you talk about cybersecurity, there are very few things more valuable for this incoming administration than guarding our secrets. And of course, we know China, North Korea, Iran, and Russia are just four of the main actors going against us. Theresa, something else happened: Russia, upset with NATO, I guess not giving up, has cut the undersea internet cables to Sweden and Finland. They claim they have nothing to do with it, but we saw the ships in that area. How susceptible, how vulnerable are these undersea cables?
They're very vulnerable. We've seen this in the past, and it's usually been deemed accidental, I'll say: usually deemed accidental when a fishing vessel or a cargo vessel drags an anchor and cuts those undersea cables. And this week, two cables have been damaged.
The word being used by some of the leaders in Europe is sabotage. An investigation is ongoing. They're looking at the shipping channels and who's been going through them, and Russia has been through there, China has been through there.
So they're really trying to piece that together. But it could take as long as 15 days to repair some of the damage to these undersea cables, assuming the weather holds. I thought we were in a wireless world, so I did not know that undersea cables play such a valuable role for sophisticated countries like Sweden and Finland.
Yeah, they do. It's basically part of overall resiliency. You typically have satellite: the low-earth-orbit satellites, plus the ones further out in orbit. Then you have your broadband, the cable you're running underground or above ground. But a lot of connectivity still happens via these undersea cables. So I guess that's an issue. Also, obviously, people are looking at advances, consolidation of power, the shrinking of the workforce because of AI.
When you go out on your private contracts, and you mentioned you just came back from Europe, how much of it is on cybersecurity, and how much is your focus on AI and the next step in the advancement? Yeah, so there's a huge focus right now in businesses on how to best leverage AI, and a lot of conversations about how to govern the ethical use of AI.
So if we're going to use it to enable the humans in our operations, how do we do that in a way that's trustworthy, that protects privacy and confidentiality, so we don't accidentally put corporate secrets out into the public domain? Which is what actually happened to Samsung: they were letting their engineers do coding using an AI code assistant, and the engineers accidentally flipped a switch.
And instead of staying internal to Samsung, it actually went external. So there are a lot of conversations there. And what I will tell you, Brian, what's interesting in my conversations behind closed doors, is that companies aren't sure they fully see a positive financial impact to the bottom line from AI yet. There's still a lot of piloting, and running old processes in parallel with AI. And there are a lot of conversations right now about not replacing humans but enhancing what humans can get done, maybe handing off the lower-level tasks. But what I'm finding is that the smartest companies are focused on always keeping the human in the loop.
It's when you let the loop run without the human that things get off the guardrails pretty quickly. Do you feel as though we're still leading the AI race? We are leading the AI race, but it's neck and neck. And the same thing with quantum computing, because quantum computing is going to be the next generation of true compute power, and the country that wins the quantum computing race is going to be the country that wins the next decades of innovation. How would you define quantum computing? So here's how quantum computing is different from today's computing. Today's computing, no matter how elegant, seamless, and sexy it looks, no matter how cool the interface, is still ones and zeros, still a state of on or off. A quantum bit isn't locked into a single state; it can be in a superposition, neither purely a one nor a zero. So we're going to be able to see mathematical computations done at a speed and scale we've never seen before.
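[Editor's note: For readers who want a concrete picture of the distinction Payton draws between classical bits and quantum bits, here is a minimal sketch. It simulates a single qubit as a pair of real amplitudes and applies a Hadamard gate to put a definite "0" into an equal superposition; the gate and the measurement rule are standard quantum-computing conventions, not anything from the interview.]

```python
import math

# A classical bit is either 0 or 1. A qubit is described by two
# amplitudes (a, b) with a^2 + b^2 = 1; measuring it yields 0 with
# probability a^2 and 1 with probability b^2. (Real amplitudes are
# assumed here to keep the sketch short; in general they are complex.)

def hadamard(state):
    """Apply a Hadamard gate, which maps a definite basis state
    into an equal superposition of 0 and 1."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measure_probabilities(state):
    """Return the probabilities of observing 0 and 1."""
    a, b = state
    return (a * a, b * b)

qubit = (1.0, 0.0)            # starts as a definite "0"
qubit = hadamard(qubit)       # now neither 0 nor 1: an equal superposition
p0, p1 = measure_probabilities(qubit)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

Until it is measured, the qubit genuinely carries both possibilities at once, which is the property that lets quantum machines explore many computational paths in parallel.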
Wow, we'll be hearing a lot more about that. Are you convinced that Trump's got the team, a cyber team? Have you seen anyone on it? I've never heard anything announced about it.
Have you? I haven't seen as much discussed yet on that. I've seen more around sort of the Department of Government Efficiency. I've heard a little bit around technology and innovation. I'm sure there's a lot of discussions already happening on cyber security. Would you be willing to help?
Absolutely be willing to serve my country and help. Absolutely. As you've already done. Theresa Payton, thanks so much.
It's the Brian Kilmeade Show. Keep it here. Subscribe wherever you get your podcasts.