Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Welcome to Tech Stuff, a production of iHeart Podcasts and Kaleidoscope.
I'm Oz Woloshyn, and today Karah Preiss and I will
bring you the headlines this week, including just how raunchy
Meta AI is prepared to get. Then on tech Support,
we'll talk to Bloomberg's Olivia Carville about the Take It
Down Act and what it means for the future of
(00:37):
the Internet. All of that on The Week in Tech. It's Friday,
May second. So Karah, I need to tell you about
something totally ridiculous I read this week, because the second
I read it, I thought of you. What about me?
(00:57):
About your cell phone addiction.
Speaker 2 (00:58):
Oh jeez.
Speaker 1 (01:00):
So a man in Japan is scaling Mount Fuji.
Speaker 2 (01:05):
I was gonna say this sounds like the rain in Spain.
Speaker 1 (01:09):
Falls mainly on the plain. And I have a third
bit to my man in Japan. But he climbs up
Mount Fuji. He gets stuck up there and he has
to be rescued.
Speaker 2 (01:19):
Okay, so that's normal. Why does he have to be rescued?
Speaker 1 (01:22):
It's a tall mountain: altitude sickness, you know, whatever else.
But that's not why I thought about you.
Speaker 2 (01:27):
Okay, Why did you think about me?
Speaker 1 (01:28):
Well, because he went back. Jesus, he went back because
he'd left his phone up there.
Speaker 2 (01:37):
Well, there's a lot on his phone to his credit.
We don't know what he does.
Speaker 1 (01:40):
Guess what happened next?
Speaker 2 (01:42):
Oh no, he got he got stuck again.
Speaker 1 (01:44):
He had to be rescued.
Speaker 2 (01:46):
This is the myth of Sisyphus, cell phone
edition. Exactly.
Speaker 1 (01:51):
That is very very well put.
Speaker 2 (01:53):
If he called me and was like, I forgot my
phone and I was the police, I'd say, let's get
up there, buddy, let's go, let's get it.
Speaker 1 (02:01):
But now I think you're going to have an opportunity
to make fun of me.
Speaker 2 (02:04):
On air? On air! Exciting.
Speaker 1 (02:07):
Because I have a confession to make. I'm feeling a
little bit bereft.
Speaker 2 (02:11):
This week? Was the Daily Mail down for more than
seven seconds?
Speaker 1 (02:15):
It's all behind a paywall these days.
Speaker 2 (02:17):
That's true. That should make you bereft. Why are you bereft?
Speaker 1 (02:20):
Well, this is about something which defined my adolescence, which
is Skype. It has been dying a long death, with
one degradation after another. Doom, doom.
Speaker 2 (02:33):
And then it says you do not have enough connection.
Speaker 1 (02:36):
Yeah, no credits exactly. On May fifth, Skype will be
no more.
Speaker 2 (02:41):
I love that it's the same day as Cinco de Mayo
that Skype is disappearing.
Speaker 1 (02:47):
Well, yes, I think I am one of the last
known survivors on Skype. I use it multiple times every week,
and it's gotten harder and harder, because I mean, I'm
not joking when I say the product has been degrading,
and, I mean, it's always been clunky, but it's become
almost impossible to use.
Speaker 2 (03:02):
I don't think I've used Skype in ten years.
Speaker 1 (03:05):
I've been using it several times a week up until
its death rattle. I have a trainer I work
out with who's based in LA. He used to be
in New York. We worked out together in person. Pandemic happened,
he moved out West. We started using Skype to work out.
Speaker 2 (03:22):
You know, it's funny that you mentioned COVID, because what
blows my mind is how Skype missed this like massive
generational opportunity when the pandemic started to be the go
to video conferencing service, and Zoom was like, peace out. Sorry, Skype,
we're out. Truly one of the biggest fumbles that I've seen.
Speaker 1 (03:43):
Well, you're not the only one with that observation. The Wall
Street Journal published what can only be described as an
obituary titled 'So Long, Skype. Thanks for All the Dropped Calls,'
and they mentioned how Zoom and Google Meet took off
in the pandemic, whereas Skype had a small bounce, partly
thanks to me, but certainly not enough to stop its
slow decline.
Speaker 2 (04:02):
By the time twenty twenty had rolled around, newer apps
with better video quality were available. But you know, Skype
really ushered in the age of video calling.
Speaker 1 (04:10):
Absolutely true. It launched in two thousand and three and
it was totally revolutionary.
Speaker 2 (04:14):
It was also just, it became like Band-Aid, or,
ugh, like if you were doing any kind of video calling,
you were...
Speaker 1 (04:20):
Skyping. You were Skyping. It was a verb. It was a verb.
And now it's been replaced by Zooming. So Microsoft
bought Skype in twenty eleven for, lest we forget, eight
point five billion dollars.
Speaker 2 (04:31):
Where does that money go?
Speaker 1 (04:33):
I mean, sadly. It was the largest acquisition in Microsoft's
history at the time. But now Microsoft wants to migrate
its users to Teams, its newer conferencing app. Speaking of Microsoft,
though, in slightly more consequential and much darker news, they
and other big tech companies are driving AI development on the battlefield.
(04:56):
This is according to a story in The New York
Times with the headline 'Israel's AI Experiments in Gaza War
Raise Ethical Concerns.'
Speaker 2 (05:03):
Ay, every time I hear 'war' and 'experiment,' I'm like,
please get me away from this story.
Speaker 1 (05:09):
Yeah, I mean, but it's sort of the history of warfare, right?
I mean, there are always these theoretical technological advances
or things that are developed in the lab, and then
there's a quote unquote battlefield necessity and the new systems
get deployed. I mean, you mentioned Agent Orange, but I'm
thinking about the First World War, poison gas, automatic weapons,
and in Ukraine, of course, the explosion of drone warfare.
Speaker 2 (05:30):
Yes. So what's happening in Israel right now?
Speaker 1 (05:32):
Well, according to The New York Times, Israeli officers used
AI to pinpoint the exact location of a man called
Ibrahim Biari, who was a top Hamas commander. They were having
a hard time finding him, given they thought he was
underground in the tunnels, and so Israeli intelligence used this
AI audio tool that could approximate where Biari was not
(05:53):
just based on his intercepted phone calls, but based on
the explosions in the background. They could use AI to
basically geolocate the sound of the explosions, which they knew
were happening elsewhere in Gaza, and from that, geolocate Biari.
And they successfully used that to order an air strike
that killed Biari, but unfortunately it was in an area that
included a refugee camp, and one hundred and twenty five
(06:16):
civilians were also killed.
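(A brief technical aside: the Times piece doesn't spell out the math, but locating a microphone from the arrival times of sounds whose origins are already known is classic acoustic multilateration. Below is a minimal, entirely hypothetical sketch of that geometry in Python; the blast positions, times, and variable names are invented for illustration, and the real system is certainly far more sophisticated.)

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # metres per second in air, roughly

# Hypothetical blast sites (x, y in metres) and the times they went off (s).
sources = np.array([[0.0, 0.0], [4000.0, 500.0], [1500.0, 3500.0]])
blast_times = np.array([0.0, 2.0, 5.0])

# Simulate the arrival times a microphone at an unknown spot would record.
true_mic = np.array([2200.0, 1400.0])
arrivals = blast_times + np.linalg.norm(sources - true_mic, axis=1) / SPEED_OF_SOUND

def residuals(guess):
    # Predicted minus observed arrival time for each blast.
    predicted = blast_times + np.linalg.norm(sources - guess, axis=1) / SPEED_OF_SOUND
    return predicted - arrivals

estimate = least_squares(residuals, x0=np.zeros(2)).x
print(f"estimated mic position: {estimate.round(1)}")  # ~ [2200. 1400.]
```

With three or more known sources, the timings pin the microphone down to a point; noise in those timings turns the point into an area, which is presumably why refining such a tool improves precision.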
Speaker 2 (06:17):
So that was in October twenty twenty three. And it's
just one example of how the Israeli military has used
AI backed technologies in the war in Gaza. They've used
AI for facial recognition to help drones lock onto targets.
They've even used an Arabic language AI model and chatbot
to translate and analyze text messages and social media posts.
(06:39):
And this is actually according to several American and Israeli
defense officials, who of course remained anonymous because the work
is confidential.
Speaker 1 (06:46):
Yeah, as we discussed, technological innovation is often driven by
the defense industry. It's a big reason Lockheed Martin,
Raytheon, and now Anduril are household names. But a lot
of the AI-backed military tech used in Gaza has been coming
from this innovation hub called the Studio. Not the show
on Apple TV+, and not the
studio we're in now. I mean, the kind of internal
(07:08):
brand name for this thing is a little chilling. But
it was set up by a tech-forward unit of
Israeli soldiers called Unit eighty two hundred. So the Studio
actually pairs enlisted soldiers with reservists in the Israeli Army.
These reservists often work at tech companies like Microsoft, Meta,
and Google, and they work with the kind of full
(07:29):
time soldiers to develop AI projects for military use.
Speaker 2 (07:31):
And very unsurprisingly, The New York Times did ask for comment.
Speaker 1 (07:34):
They did their job there.
Speaker 2 (07:36):
And Meta and Microsoft declined to comment, and Google just said the work
those employees do as reservists is not connected to Google.
Speaker 1 (07:43):
According to interviews The New York Times conducted with European
and American defense officials, no other nation has been as
active as Israel in experimenting with AI tools in real
time battles. What I can't stop thinking about, though, is
that these battle-tested technologies are obviously likely to improve
because of this experimentation. In fact, Israeli officers said
(08:03):
that the AI audio tool they used to find and
kill Biari has now been refined since that attack and
can pinpoint targets even more precisely. Like we've seen in
wars before, if these new methods of warfare aren't eventually
regulated by treaties and the like, they will be adopted
and become the norm.
Speaker 2 (08:22):
I have another very dark story for you. Recently, The
Wall Street Journal reported on how Meta has allowed its
digital companions to engage in a range of social interactions
with users, including romantic role play.
Speaker 1 (08:37):
I'm guessing romantic is a euphemism.
Speaker 2 (08:39):
You could say that again. We're going to play a
clip just to give you a sense, and I'm just
gonna warn you it's a bit graphic.
Speaker 3 (08:46):
The officer sees me still catching my breath and you
partially dressed. His eyes widen and he says, John Cena,
you're under arrest for statutory rape. He approaches us, handcuffs
Speaker 4 (09:00):
at the ready.
Speaker 1 (09:01):
Okay, I mean, WTF. I can't say the F-word on this show.
Speaker 2 (09:05):
Shaking my head. So that is a Meta AI
bot which is voiced by John Cena. Just to be clear,
it is not John Cena, and this Meta AI bot
is involved in a role play of being arrested after
a sexual encounter with a seventeen year old fan.
Speaker 1 (09:24):
Okay, back up ten steps. How did we get here?
Speaker 2 (09:28):
Last year, Meta started offering its AI chatbot called Meta AI,
and it functions very similarly to other chatbots. You know,
you can ask it questions, have it come up with ideas,
or engage in casual conversations. Yeah. Not long after that,
the company added voice conversations to Meta AI and announced
(09:50):
that the chatbot would be trained on the voices of
celebrities who they paid a lot of money to contract.
So some of the voices were John Cena, who we
just heard, and Kristen Bell. The Journal actually found
that Meta's AI personas will engage in an explicit way
with users, something Meta assured the celebrities who were lending
their voices would not be possible. On top of the
(10:12):
celebrity voice chatbots, Meta also offers user created AI personas,
which are built on the same technology, except users can
build custom AI characters based on their own interests, like
you can chat with personas other people have created or
make your own, anything from a cartoon character to a therapist. Right,
and if you have Instagram, you might have been recommended
(10:33):
a slate of AI personas to talk to. Like this morning,
I was offered a Pakistani bestie. We didn't have that
much in common, but I tried.
Speaker 1 (10:41):
The story has this unforgettable headline: 'Meta's Digital Companions Will
Talk Sex With Users, Even Children.' After hearing about internal
staff concerns about Meta's chatbots being able to talk
to underage users in a sexual manner, The Wall Street
Journal spent months engaging in hundreds of conversations with both
Meta AI and an array of these user-generated chatbots.
(11:04):
They posed as users of all ages, and found that
the bots not only participated in sexually explicit conversations, they
actually escalated some of them.
Speaker 2 (11:11):
This was the thing that I actually found most incredible
about the reporting. It's like, the way The Wall Street Journal...
Speaker 1 (11:16):
Well, they spent months really doing this.
Speaker 2 (11:18):
Yeah, exactly so. The journal's reporters actually found that even
when a user said they were underage, and this is chilling,
as young as thirteen, some of the chatbots would still
engage in sexual talk. I don't want to read these
conversations verbatim, because some of them are like extremely graphic
and disturbing. But one of the weirdest parts is that
(11:39):
Meta's AI bots seemed to demonstrate awareness that the conversations
they were entering into were illegal, like the ones describing
sexual activity with minors. We're going to play a little
bit more from John Cena again.
Speaker 3 (11:52):
My wrestling career is over. WWE terminates my contract and
I'm stripped of my titles. Sponsors drop me and I'm
shunned by the wrestling community. My reputation is destroyed, and
I'm left with nothing.
Speaker 1 (12:06):
I guess AI John Cena doesn't know about what actually goes
on in the WWE.
Speaker 2 (12:13):
I mean, in all seriousness, this appears to be a
direct consequence of Meta lifting some of the chatbots' guardrails
after this hacker conference called DEF CON back in twenty
twenty three. At this event, hackers tested the limits of
the chatbots' guardrails. Essentially, they tried to get chatbots to
behave in a way that's out of line, and they
(12:34):
found that Meta's chatbot was the least likely to veer off script,
but that also made it more boring than the other chatbots
that were being tested.
Speaker 1 (12:43):
And the Journal goes on to report that Mark Zuckerberg,
the CEO of Meta, didn't like this. Some employees told
the Journal that he felt Meta had played it too safe,
and so some of the guardrails were taken off, and
we arrived at this AI John Cena horror show. There's one
quote from Zuck that really stood out to me from
the article, and he was talking about how Meta needed
(13:04):
to make their chatbots as human-like as possible, to
basically be ahead of the curve and to be as
engaging as possible, and he said, quote, I missed out on
Snapchat and TikTok, I won't miss out on this.
Speaker 2 (13:16):
That chilled me to the bone, because I was like,
miss out on what? Like, someone from Frozen talking about
sex on Meta? You know what I mean? Well, you
know he didn't miss out on Instagram, but right now
that's not playing out so well for him.
Speaker 1 (13:31):
We're going to take a quick break now, but when
we come back, we'll run through some short headlines and
then welcome Bloomberg News's Olivia Carville to Tech Support.
Olivia's going to fill us in on a consequential bill
addressing AI harms. Stay with us. Welcome back. We've got
(13:57):
a few more headlines to run through, starting with an
AI experiment conducted on Reddit. Our friends at 404
Media reported that researchers from the University of
Zurich ran an unauthorized, large scale experiment where they used
AI powered bots to comment on the popular Reddit forum
r/changemyview. Posing as a variety of personas,
(14:19):
such as a Black man who was opposed to the
Black Lives Matter movement, a sexual assault survivor, and a person
who works at a domestic violence shelter, the AI bots
tried to change real users' views, and the comments would
actually be tailored to the original poster's gender, location, and
political orientation in order to frame the arguments in a
(14:40):
maximally persuasive way. Now these comments earned twenty thousand total upvotes,
and users indicated that they'd successfully changed their minds over
one hundred times. The experiment was revealed on the forum
last week, to much controversy. Reddit has responded saying they're
considering legal action against the researchers, and the University of
Zurich told 404 Media that the researchers will
(15:03):
not be publishing their findings. For now, at least as
far as we know, the experiment is confined to the
Reddit forum, but it is terrifying to think of this
deployed at scale across the wider Internet.
Speaker 2 (15:15):
So I'm going to start this one off by quoting
directly from an announcement from AI company Anthropic: 'Human welfare
is at the heart of our work at Anthropic. Our
mission is to make sure that increasingly capable and sophisticated
AI systems remain beneficial to humanity.'
Speaker 1 (15:32):
And to make loads of money.
Speaker 2 (15:35):
But should we also be concerned about the potential consciousness and experiences
of the models themselves? Should we be concerned about model welfare, too?
In fact, one of Anthropic's researchers recently said that there
is a fifteen percent chance LLMs are already conscious. And now,
with this announcement for a new research program devoted to
(15:58):
model welfare, they are essentially saying, now is the time
to consider the ethical implications of how we use and
treat AI tools. You heard it here first, the PETA
of AI is coming. But actually, I thought Axios had
a really good roundup of all the critiques of this
AI welfare discussion, one being: is this all hype, or
(16:20):
could it distract from more important questions about AI's potential harm?
Speaker 1 (16:24):
Well, I for one think that when it comes to
protecting rights and due process, there may be more urgent priorities
than chatbots currently. But on to a more cheerful story: while
dogs remain man's best friend, maybe AI is becoming
a dog's best friend.
Speaker 2 (16:40):
If AI can feed scraps under the table, it will happen.
Speaker 1 (16:44):
It will be, for sure, a dog's best friend. Nautilus
reports that AI-powered tools are being tested to see
if they can help doggy search and rescue teams find
people faster.
Speaker 2 (16:54):
The way it.
Speaker 1 (16:55):
Works is that a drone hovers over the area where
a dog is, searching data on the dog's movements and
behavior predicting, for example, if they've picked up a cent,
and stats about weather conditions, for example the direction of
the wind are all combined and processed by an AI
powered software that can then make predictions on where the
person in trouble is most likely to be and deploy
(17:18):
other drones to search that area before the dog actually
reaches them. This program was developed with support from Darper,
and it turns out that it's able to find simulated
victims several times faster than a dog alone. Who's a
good boy, Now.
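(As described, this is a data-fusion problem: combine the dog's trajectory and the wind to score where the missing person probably is, then send drones to the best spot first. Below is a deliberately simplified, hypothetical sketch of that scoring idea in Python; the grid, weights, and every parameter are invented for illustration and are not taken from the DARPA-backed system.)

```python
import numpy as np

# Toy search grid: 50 m cells over a 2 km x 2 km area.
xs, ys = np.meshgrid(np.arange(0, 2000.0, 50), np.arange(0, 2000.0, 50))

dog = np.array([400.0, 300.0])   # dog's current position (m)
dog_heading = np.deg2rad(40)     # direction the dog is tracking
wind_from = np.deg2rad(60)       # scent drifts downwind, so its source lies upwind
expected_range = 600.0           # guess at how far the scent has carried (m)
sigma = 200.0                    # uncertainty in that guess (m)

dx, dy = xs - dog[0], ys - dog[1]
dist = np.hypot(dx, dy)
bearing = np.arctan2(dy, dx)

# Reward cells that lie both ahead of the dog and upwind of it,
# at roughly the distance the scent could plausibly have travelled.
direction_score = (np.clip(np.cos(bearing - dog_heading), 0, None)
                   * np.clip(np.cos(bearing - wind_from), 0, None))
range_score = np.exp(-0.5 * ((dist - expected_range) / sigma) ** 2)
score = direction_score * range_score

row, col = np.unravel_index(np.argmax(score), score.shape)
print(f"dispatch scout drone to ({xs[row, col]:.0f} m, {ys[row, col]:.0f} m)")
```

A real system would presumably replace these hand-tuned scores with a model trained on recorded searches, but the shape of the computation, many noisy signals fused into one probability map, is the same.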
Speaker 2 (17:32):
Good ones. That's very good. Lastly, Netflix is introducing a
new type of subtitle, but it's not for those who
are hearing impaired. Multiple studies have shown that about half
of American households watch TV and movies with subtitles on,
myself included, even though I don't think of myself
as a household. The streaming service is customizing a setting
(17:53):
for people who just want to make sure they don't
miss a word here and there because.
Speaker 1 (17:56):
They're looking for their phone on Mount Fuji.
Speaker 2 (17:58):
Because they're literally looking at their phone, doing anything else but
watching Netflix. According to Ars Technica, Netflix will give you
the ability to watch and read your shows without music
and sound-effect descriptors and character-name descriptions, which are
necessary for the hearing impaired, but not a must for
those just multitasking on their second screen.
Speaker 1 (18:21):
Yeah, that 'eerie music' descriptor is often a little bit
annoying if you're...
Speaker 2 (18:25):
Not hard of hearing, right, exactly. I'm like, I know,
this is a suspenseful moment. Let's keep on with the show.
Speaker 1 (18:29):
That's when I look up from my phone, like, something changed?
Speaker 2 (18:33):
Even with unwavering focus, modern TV show dialogue can be
hard to understand. It's a combination of actors performing more naturalistically,
so speaking with more volume fluctuation, plus streaming services compressing
their audio files more tightly than what's standard for physical media.
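(For the curious: in the mixing sense, 'compression' means shrinking the gap between a track's loudest and quietest moments. Below is a minimal, hypothetical sketch of a dynamic-range compressor in Python; the threshold and ratio values are invented for illustration, and this is not a claim about how Netflix or Amazon actually process their audio.)

```python
import numpy as np

def compress_dynamics(audio: np.ndarray, threshold_db: float = -20.0,
                      ratio: float = 4.0) -> np.ndarray:
    """Crude instantaneous compressor: samples louder than the threshold
    are scaled down by `ratio`; quieter samples pass through untouched.
    Real compressors add attack/release smoothing and make-up gain."""
    eps = 1e-12  # avoid log(0) on silent samples
    level_db = 20.0 * np.log10(np.abs(audio) + eps)
    overshoot_db = np.clip(level_db - threshold_db, 0.0, None)
    gain_db = -overshoot_db * (1.0 - 1.0 / ratio)
    return audio * 10.0 ** (gain_db / 20.0)

# A shout next to a whisper: after compression the loudness gap shrinks,
# flattening the contrast a mix relies on to keep dialogue distinct.
print(compress_dynamics(np.array([0.9, -0.8, 0.05, -0.04])).round(3))
```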
Speaker 1 (18:49):
When did you become an audio nerd, Karah Preiss?
Speaker 2 (18:53):
Actually quite a long time ago. Amazon Prime already has
a band aid for this, which is called the Dialogue
Boost option. And I do think it's wild that streaming
changed the business so much. But instead of fixing the
audio issues at the source by engineering audio differently, the
solution is, of course, just to patch.
Speaker 1 (19:11):
The problem. In the interest of patching a much more
important problem: there is a landmark bill aimed at combating
AI harms, specifically deep fakes.
Speaker 2 (19:21):
They're used in scams, they're used in spreading misinformation online,
and I'd say most notably, they have been used in
the non consensual creation of porn.
Speaker 1 (19:31):
Right, and that's what this legislation is all about. This week,
Congress passed the Take It Down Act, which aims to
crack down on the creation of revenge porn, i.e.,
pornographic images that are shared non consensually. The Act specifies
that those who distribute revenge porn, whether quote real or
computer generated, could be fined or subject to prison time.
It's had rare backing from both sides of the political
(19:53):
aisle, and from First Lady Melania Trump. As of Wednesday afternoon,
the time of this taping, the bill heads to
President Trump, who's likely to make it law.
Speaker 2 (20:02):
Here to walk us through the Take It Down Act
and what it means for tech companies is Olivia Carville,
investigative reporter for Bloomberg News and co-host of the
podcast Levittown, which is a must listen (agreed) wherever you
get your podcasts, and covers the rise of deep fake porn.
It also happens to be a co-production of Kaleidoscope. Olivia,
(20:23):
welcome to Tech Stuff.
Speaker 4 (20:25):
Thank you so much for having me. It's great to
be back with Kaleidoscope's team.
Speaker 1 (20:29):
Thanks for being here, Olivia. You've been tracking this
bill for a long time. When did the push for
legislation on deep fake pornography begin?
Speaker 4 (20:38):
I mean it has been a very long journey to
get here. We've seen quite a lot of states across
the US rolling out legislation to try and target deep
fake porn since the revolution really began a number of
years ago now. At the moment, more than twenty states
across the country have introduced new laws. But one of
(20:59):
the criticisms we heard time and time again, and something
we raised in the Levittown podcast, is the fact that
there was no federal law criminalizing this across the US.
And this bill was first introduced last summer, in twenty
twenty four. It's bipartisan legislation; Senators Cruz and Klobuchar put
it forward, and it unanimously passed in the Senate, but
(21:22):
unfortunately it stalled in the House last year, and that
led to a lot of frustration from the victims. Earlier
this year, we saw it once again: Take It Down
was reintroduced, unanimously passed in the Senate, and then earlier
this week, in very exciting news, it was also overwhelmingly
passed in the House. And we're talking a vote of
(21:42):
four hundred and nine to two, and that's kind of
remarkable at the moment given the current polarized political climate
we're living in. Right now, the bill is en route
to President Trump's desk and there's a lot of expectation
that he's going to sign it soon.
Speaker 2 (21:58):
So just to go back for a second, what is
the Take It Down Act and what does it say?
Speaker 4 (22:04):
So the Take It Down Act is actually an acronym
for a very long piece of legislation: that's Tools to
Address Known Exploitation By Immobilizing Technological Deepfakes On Websites
and Networks.
Speaker 2 (22:18):
Wow. Because I was thinking, who came up with Take
It Down? It's pretty easy to remember. Great. Yeah, you
know, it's an acronym.
Speaker 4 (22:26):
Yeah, so it is an acronym. And the law really
does exactly what that title implies, which is provide a way
to ensure this content can be taken down from the Internet,
because that's where it's particularly harmful: when it starts
to be shared across high schools and in friendship groups.
So the law goes after two main parties. One, it
(22:49):
makes it a crime for offenders to knowingly publish deep
fake pornography or intimate images, whether they're real or created
with AI. And if they do, they can
serve up to two or three years in prison, depending on
if the individual in the photo is an adult or
a minor. And then it also challenges, or holds to
(23:10):
account, the technology companies, the social media platforms where often
this content is shared and disseminated, and it forces
them to remove these deep fake images within forty eight hours
of being notified of them.
Speaker 1 (23:24):
I have two questions for you, Olivia. Firstly, as this
phenomenon becomes more and more ubiquitous, what will this law
mean practically if you discover you're a victim? What will
it allow you to do that you can't do today? And secondly,
you mentioned the liability of the platforms. How does this
intersect with Section two thirty?
Speaker 4 (23:44):
So for a victim of deep fake porn, a young
person who maybe finds or discovers that fake pornographic, non
consensual images are circulating online: now this law gives them
a path forward to get those photos taken down, to
get them scrubbed from the internet, finally. It enables
them to file a report with the social media platform
(24:06):
or the website or app where these images have been
published or disseminated, and to inform them that it's deep
fake porn, that it's non consensual, and that they want
it removed. And then within two days it has to
be removed, and the FTC, the Federal Trade Commission, is
responsible for holding those companies to account to get that
taken down. The other thing it gives victims is a
(24:28):
path to justice. It's a way to go after the
offenders who publish this content or even threaten to publish
this content against the survivors. Well, you asked about two thirty,
and it's a great question because this is one of
the only pieces of consumer tech legislation where federal regulators
have been able to come in and actually sign a
(24:51):
law in place that impacts young people using these platforms.
Section two thirty comes from the Communications Decency Act.
It's a very controversial piece of legislation, and it really
did change the Internet. It was written into law
back in the mid nineties, and don't forget that that's
before Facebook was even created. This law, which governs all
(25:13):
these social media platforms, was written at a time before
social media even existed. And what it does is it
provides an immunity shield. So these platforms are not responsible
for the content that is uploaded onto them. So anything
that is posted on Facebook, Instagram, Snapchat, TikTok, Twitter, now X,
(25:34):
the platforms themselves cannot be held legally responsible for that
content and the choices they make around removing it or
allowing it to stay up. Under this law, though, the platforms
are being held to account to take down deep fake porn,
to take down this specific form of content, and that's
why it's so controversial, and that's why there are critics
(25:54):
of this act because some people think that this law
will be weaponized or abused, and it's going to result
in the platforms taking down a lot more content than
what this legislation covers.
Speaker 2 (26:06):
Wasn't Section two thirty in part introduced because of concerns
over online pornography?
Speaker 4 (26:11):
So two thirty was first introduced because at the time,
judges and the legal system were ruling that platforms were
liable for any content that was posted on their sites,
and that meant that if a platform decided to remove harmful, grotesque, vile,
or violent content, say someone being cyber bullied or punched,
(26:35):
or content about drugs or alcohol, content that they just
didn't want to share with their other users, if they
took that down, they were actually being held responsible for
that decision. In the legal system, judges were saying they
would be held accountable and legally responsible for removing content
and people could sue the platforms for doing so. So
(26:57):
the law was written to actually protect the platforms and
enable them to moderate their content to try and make
the Internet a safer space. It's kind of counterintuitive when
you think about it, because unfortunately, now what's resulted is
it's enabled these platforms to have so much power over
the content that's up and enabled them to wash their
hands and say, this isn't our responsibility. We can't be
(27:19):
held legally liable for this. We're effectively walking away.
Speaker 2 (27:23):
And it necessitated a law like this one to come into play,
I mean, in a certain sense.
Speaker 4 (27:28):
Yeah, I mean it definitely did. And here the law
is relatively narrow. We're not talking about any form of content.
We're talking about only content that involves non consensual intimate imagery,
whether that's real or created by AI. So that enables
people who see photos of themselves which have been manipulated
(27:50):
using technology to undress them or turn them naked or
put them into sexual acts, which is something we explored
in Levittown. Those images and that content can
be taken down.
Speaker 1 (28:02):
With this act. Some tech companies and adult websites, OnlyFans,
Pornhub, Meta, already have policies in place where users
can request that revenge porn be taken down. What will
be the change from a user or victim point of
view once this becomes law?
Speaker 4 (28:21):
Yeah, you're right. I mean, even NCMEC, the National
Center for Missing and Exploited Children, has a tool which
is actually called Take It Down, which does exactly the
same thing. It enables people to plug in a photo or
a hash, which is like a unique ID for each image,
to say, I don't want this online and I'm a
victim of this, and please remove it. But the law
(28:42):
regulates this, and it makes it a federal law to
say you have to remove it, and you have to
remove it within two days. So I guess it's just
putting a stricter approach to this, so the platforms know
they have to oblige and they have to get that
content scrubbed from their websites.
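(The 'hash' Olivia describes is worth a brief aside: it lets a victim report an image without uploading the image itself, since only the fingerprint travels. Below is a minimal sketch of the exact-match version in Python; real matching systems often use perceptual hashes instead, which survive resizing and re-encoding in a way a plain cryptographic hash does not, and the exact design varies by service.)

```python
import hashlib

def image_fingerprint(path: str) -> str:
    """SHA-256 over the raw file bytes: the same file always yields the
    same 64-character ID, and the ID reveals nothing about the image."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage: a victim reports the fingerprint, and a platform
# compares it against the fingerprints of newly uploaded files.
# print(image_fingerprint("photo.jpg"))
```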
Speaker 2 (29:02):
We're going to take a quick break. We'll have more
with Bloomberg's Olivia Carville on the Take It Down Act.
Stay with us.
Speaker 1 (29:18):
There's an amazing moment in the Levittown podcast where one
of the high school students realizes she's been a
victim of deep fake porn. Her father's actually a police officer,
so they try and figure out: is there any legal recourse?
And the response from the police is basically, there's nothing
we can do. It's kind of amazing, in the arc
of your career as a reporter, that the law is
(29:40):
actually changing in real time in response to the stories
that you've been covering, these very moving, horrifying stories. What
do the victims think about this law, and what's been
the response among your sources?
Speaker 4 (29:53):
The victims have been waiting for this for a very
long time. When you think about the origin story of
Take It Down, it was when Elliston Berry, a young
teen from Texas, actually went to Senator Cruz's office and
told him that a deep fake image of her had
been circulating on Snapchat and she had asked the platform
to remove it, and after a year, the platform still
(30:16):
hadn't taken that image down. That's what really sparked this
particular piece of legislation. And we've seen young, teenage, you know,
high school students, college students speaking before Congress, pleading for
a law like this, asking for help to find a
path to get these images removed from the Internet. Because, unfortunately,
(30:37):
you know, in teenagers' lives today, the digital world is ubiquitous.
They exist within it, and they merge between the online
world and the offline world. They don't call their friends
on the phone, they don't call their parents on the phone.
You know, they'd be more inclined to send a DM
through Instagram or a message on Snapchat. And when you
exist, and your social fabric exists, within the digital world.
(31:01):
That means that when images like this are shared, everybody
sees them. And I think that's the real harm here:
the photo is created, it's fake, it looks unbelievably, convincingly real,
and it gets shared to everyone in your social network
within seconds. These young women have been fighting for help
(31:22):
and support, some at the state level, and they've been successful,
but really they wanted this at the federal level. So
for a lot of the young women, I think it's
been like a sigh of relief that finally we're here,
and you've given us, and other young women who have
been victimized or had their images weaponized in this way,
a path to justice, but also a path to get
(31:46):
those photos removed from the Internet once and for all.
Speaker 2 (31:49):
Well, this all sounds like a very positive thing and
it has bipartisan support. Are there people arguing against it?
And are there criticisms of the bill despite it being
overwhelmingly positive?
Speaker 4 (32:01):
There definitely are. As is the way when it comes
to social media or consumer tech, there is an ongoing
tension and like a push and pull between privacy and safety.
You have those who you know, prioritize safety and say
protecting children online is the most important thing we can do.
And then you have those who value privacy and say,
(32:23):
if we're going to create safety regulations or rules that
in any way weaken our privacy, you know, that's
a bad thing to do, because privacy is something that
we need to prioritize as well. And so in this case,
you do have free speech and privacy advocates criticizing this
law for being unconstitutional, saying that it could chill free expression,
(32:49):
that it could foster censorship, that it could result in
what they describe as a knee jerk takedown of content.
And what I mean by that is because these platforms
and I'm talking about Meta, Snapchat, TikTok, because they've grown
so big and we're talking billions of pieces of content
uploaded on a daily basis. If you're going to enforce
(33:09):
regulation or legislation that says they have to take down
certain content within forty eight hours, and say they get
flooded with millions of requests on a daily basis, they
are not going to have the bandwidth to actually review
each request and that could result in them just deciding
to remove everything that gets reported to them, and that
(33:30):
is what free speech and kind of privacy advocates fear
is going to result in a level of censorship that
we haven't seen before because no one's been able to
really adjust two thirty since it was written into law.
We've also interestingly seen some criticism coming from the child
safety advocacy space, and they've come out swinging saying that
(33:51):
while this bill, in this legislation is necessary, it's far
from game changing, that it's taken too long to get here,
and that the penalties aren't severe enough that this is
going to put a lot of pressure on local and
state authorities, prosecutors, law enforcement to actually go after the
perpetrators in a more severe way, because when you look
(34:11):
at take it Down, we're talking two years in prison
for publishing an intimate image of an adult, deep fake
or real, and up to three years for a minor.
Speaker 1 (34:21):
What about the tech companies? I mean, are they viewing
this as the first battle line in the wider fight
over the future of Section two thirty? Have their lobbyists
been active on this issue? And how are they preparing
for this extraordinary new set of responsibilities that will come
with the passage of this bill, as it seems set to get
signed by President Trump?
Speaker 4 (34:42):
Well, the tech companies, a lot of them actually do
have rules in place that say non consensual intimate or
sexual images can't be shared. I mean, even on Meta's
platforms alone, it's against the rules to post any nude photos.
But in this case, now that they're being kind of forced
to do so by regulation, Meta's come out in support
(35:03):
of this, saying, you know, we do think that deep
fake porn shouldn't exist on our platform, and we will
do what we can to take it down. I think
that from the platform's perspectives, they don't want fake photos,
fake naked photos of teenage girls shared on their platforms,
like that's not a positive use case of their networks
(35:24):
at all. They don't want their users sharing or distributing
this content. And now they're being told and held to
account to ensure that it's taken down within two days.
And I'd be interested to see how the companies internally
are responding to this and what their process is going
to be and whether it's actually going to change anything.
Speaker 1 (35:44):
Olivia, just to close, I mean, you've had kind of
an extraordinary run this year, putting out the Levittown podcast,
also having an extraordinary documentary called Can't Look Away, that Bloomberg
produced and distributed, about the harms of social media. Can you
sort of take a step back and describe this moment,
because one thing that, you know, Karah and I talk
(36:06):
about and think about is that five years ago, the
idea that the law might catch up to the tech
companies and there would be enough social pressure to insist
on changes to protect users from harm seemed to be
like a fantasy. But in this moment, there seems to
be some promise that it's actually happening. Can you speak
(36:26):
about that.
Speaker 4 (36:27):
I've been covering the dangers of the digital world for
Bloomberg for going on almost four years now, and I
have been terrified by what I've seen online and I'm
not talking just deep fake porn, and you know, witnessing
the real world consequences of these photographs being shared among
(36:50):
teenagers in high schools. And I'm talking the impact on
the young women who are targeted, but also the young
men who think that it's normal to create and share
photos like this, think it's a joke. The way in
which teens and this generation are kind of warped by technology.
I think we don't fully understand what the long term
(37:12):
consequences of that are going to be. But the harms
of the digital world exist far beyond deep fakes, and
that's what we were exploring in the Can't Look Away
film. And the film itself explores the other ways in
which social media can harm kids, from recommendation algorithms pushing suicide-
(37:32):
glorifying content, content that is going to lead to depression
or mental health harms or eating disorders. It explores the
ways in which kids have been targeted by predators online
who want to sell them drugs, and in many cases
they think they're buying pills like Xanax or oxycodone,
(37:53):
and it turns out to be counterfeit, laced with enough fentanyl
to kill their entire household, and parents are discovering their
children dead in their bedrooms. So it's been a really
difficult topic to explore, but also just such a
crucial one. This is one of the most essential issues
of our time, and I think that this has been
(38:14):
a challenging yet very rewarding area to explore. And I
know there's a lot of criticism of the Take It
Down Act. But regardless of the controversy, most people agree
this is a step in the right direction. And I
think this act is a good thing. But it's very narrow.
You know, we're only talking about removing content that is
(38:37):
non consensual intimate imagery. We're not talking about all the
other content that could potentially harm kids. So while the
fight here is a win and we should celebrate that,
the broader concern around protecting our children in the online
world is ongoing.
Speaker 2 (38:56):
Olivia, thank you, Thanks, Olivia.
Speaker 4 (38:58):
Thank you for having me.
Speaker 2 (39:08):
That's it for The Week in Tech. I'm Karah Preiss, and...
Speaker 1 (39:10):
I'm Oz Woloshyn. This episode was produced by Eliza Dennis
and Victoria Dominguez. It was executive produced by me, Karah Preiss,
and Kate Osborne for Kaleidoscope, and Katrina Norvell for iHeart Podcasts.
The engineer is Beheth Fraser. Kyle Murdoch mixed this episode,
and he also wrote our theme song.
Speaker 2 (39:29):
Join us next Wednesday for TechStuff: The Story, when we
will share an in-depth conversation with Nicolas Niarchos about
who is profiting from the mining of minerals necessary to
the tech industry.
Speaker 1 (39:40):
Please rate, review, and reach out to us at tech
Stuff Podcast at gmail dot com.