How are you feeling about your reliance on technology, with all the news about AI, cyber-breaches and online scams?
In this episode of From Idea To Intellectual Property, we plunge into the depths of deepfake technology. We’ll get the official definition, find out just how deepfakes can be used for nefarious means, and what you can do about it. Spoiler alert: it’s tough!
Host Lisa Leong is joined by ROBIC Senior Associate and Data and Privacy Officer Tara D’Aigle-Curley to discuss the technological advancements, legal ramifications, and ethical concerns surrounding this transformative yet controversial technology.
Subscribe to From Idea to Intellectual Property to find out more about the exciting ideas turned commercial realities impacting the world we live in.
To be notified when future episodes drop, follow us on Apple Podcasts, Spotify, or your preferred podcast platform.
Listen to the full episode here:
Transcript
Lisa Leong: How are you feeling about your reliance on technology with all the news about AI, cyber breaches and online scams?
It’s become the recurring nightmare, the end of humanity, ushered in not by pandemic, nuclear war or climate change, but by the machines, or to be more precise, generative AI, of which large language models like ChatGPT are early examples.
It’s hard not to feel slightly stressed or anxious.
So in this episode, we’re plunging into the depths of deepfake technology.
We’ll get the official definition, find out just how deepfakes can be used for nefarious means, and what you can do about it.
Spoiler alert, it’s tough.
Hello, I’m Lisa Leong, and this is season three of From Idea to Intellectual Property, a podcast about today’s big ideas and the IP considerations behind them.
Tara D’Aigle-Curley is a senior lawyer at ROBIC, a member of the IPH network.
She specialises in privacy and information technology issues.
She’s a member of the International Association of Privacy Professionals, and she’s based in Quebec.
So, Tara, are you a deepfake?
Tara D’Aigle-Curley:
No.
But that’s a great question.
Lisa Leong:
How would I tell if you’re a deepfake or not?
Tara D’Aigle-Curley:
Well, actually, there are ways to train yourself and make sure you can tell whether you’re in front of a deepfake or not.
There are websites created by labs, and there are some apps you can use.
So you can practise recognising whether, let’s say, a movement is human, or whether a mouth moving and talking is human. There are different signs you can look for, but sometimes the difficulty lies in how long you get to look at the image or the video.
Lisa Leong:
For those who don’t know, what exactly is a deepfake, as you would describe it?
Tara D’Aigle-Curley:
So a deepfake is synthetic media.
You take a person’s face, their voice or another characteristic, and you swap it onto another person.
So basically, you can stitch anyone quite seamlessly into a photo or video they never participated in.
Lisa Leong:
And why are people using deepfakes?
Tara D’Aigle-Curley:
People are using deepfakes for a number of reasons.
There are good reasons for using deepfakes, and there are not-so-good reasons for using deepfakes.
Let’s start with the good things.
There’s a company called CereProc.
CereProc uses deepfake technology to create voices for people who have a condition that means they cannot use their own voice, which is great.
Or say you’re a retailer and you’d like your customers to be sure they fit in the clothing you’re presenting to them.
You could use deepfakes to put their face on a virtual body, so those people could see what they look like in the clothing without trying it on.
You can use deepfakes for a lot of purposes, and nowadays deepfakes can be made quite easily and quickly.
There are also those parodies of Tom Cruise or Nicolas Cage doing crazy things, which is kind of funny (maybe not for Tom Cruise and Nicolas Cage), but you can also use deepfakes for entertainment purposes.
Lisa Leong:
This term started emerging, I think around 2017. Who coined it and what were they doing?
Tara D’Aigle-Curley:
Well, the term deepfake is actually a blend of the terms deep learning and fake.
So you have deep learning, an AI technology, behind the creation of deepfakes.
And the word appeared when a Reddit user who went by the name deepfakes started using this technology for pornographic purposes.
They had their own subreddit and were putting celebrity faces on other bodies to disseminate pornographic videos.
So this is where the term started to be used.
You know, since it’s associated so much with pornography, people are trying to get away from this term and use something else, a longer expression that would still cover deepfake technology used for pornographic purposes but could also cover the good uses of this technology.
Lisa Leong:
Deepfake technology is developing so fast, it’s hard to keep track.
Only a few years ago, it was pretty easy to spot a deepfake, because, well, everyone had 12 fingers, but we’ve come a long way since then.
It was a bit of a laugh for a while, but when you work in news, the dark side of deepfakes is everything you hear about.
Now, a disturbing trend on social media struck one of the world’s biggest stars this week, when explicit AI-generated images of Taylor Swift began to circulate on X, formerly known as Twitter.
Nowadays, most people can’t even spot the difference, which has caused a lot of people in the tech industry to start sounding the alarm and calling for wider regulation.
There have been more warnings about the impact that artificial intelligence and so-called deepfake images and videos could have on crucial elections in both the US and the UK.
A top US law enforcement official has said that the technology could incite violence, even chaos, and that tougher sentences would need to be introduced for criminal use of AI.
So, Tara, can you tell me a bit about how we’re seeing this technology used in negative ways? What’s getting governments and big tech so worried?
Tara D’Aigle-Curley:
So, the first way deepfake technology is being used for not-so-good purposes, I’d say, is definitely pornography. You would have targets, either celebrities or, let’s say, victims of international kidnapping, who would also become victims of deepfake pornography, so their faces would be used.
There’s also revenge pornography, where some people use deepfake technology to take revenge on people they don’t like anymore.
But there are also different targets, such as elections, so of course you see more and more disinformation around elections or high-profile political figures.
For example, you’ve had a deepfake with Barack Obama, you’ve had one with Joe Biden, you’ve had one with President Zelensky also.
So more and more, those people are using deepfake technology to disinform people or to try to attack the reputation of a high-profile political figure.
Then you also have the use of deepfakes against high-profile executives.
More and more you see fraud involving high-profile executives.
Those people have high net worth, so they end up being prime targets for financial fraud.
For example, in March 2019, Forbes reported that fraudsters used a deepfake voice to defraud a UK-based energy company.
Its CEO thought he was on the phone with the German chief executive of his parent company.
He took instructions to transfer $243,000 to a Hungarian bank account.
So this happens quite frequently now.
You know, for example, I’ve seen a very interesting statistic saying that deepfake fraud attempts increased 31-fold in 2023, which represents a 3,000% year-on-year increase.
Lisa Leong:
Wow.
Tara D’Aigle-Curley:
Yeah.
And as time goes on, the targets are becoming less high-profile and more like you and me.
So let’s say the CEO of any company, or any board member, should be aware that these frauds are going on, because anybody could impersonate anybody.
Lisa Leong:
Wow. And we’re talking, then, about the accessibility of this technology.
So you were saying that anyone really could create a deepfake quite easily?
Tara D’Aigle-Curley:
At first, it required a lot of knowledge.
And the deepfakes we’re seeing more and more on social media involving celebrities are probably made by people who really know their stuff and who have powerful computers.
But the things we see on TikTok or on Facebook, the ones we easily recognise as deepfakes because we see somebody doing something they never did, those are created within minutes. Sometimes it’s 30 minutes, sometimes it’s less.
Lisa Leong:
I’m just awestruck and also really scared.
Now you work in the area of privacy law. And I remember I was a computer law expert back in the day.
And when the internet first came, we were trying to bend and stretch the legal frameworks to cover the technology of the time.
How is the law going with regulating deepfakes?
Tara D’Aigle-Curley:
There are different approaches, because there are also different visions of how this technology should be regulated.
For example, you have the EU, which has taken a proactive approach.
There are a number of laws they have in place which could end up targeting deepfakes in some way.
For example, the e-commerce directive, the copyright regime, the GDPR, the AI regulatory framework.
So you have that.
And in June 2022 they also updated their Code of Practice on Disinformation, which addresses deepfakes through fines of up to 6% of global revenue for violators.
So they decided to sanction these deepfakes in some situations.
You also have the US, where there’s no federal regulation on deepfakes, but some states have passed laws governing their use, primarily focused on deepfake pornography or elections.
Then there’s China, which has taken a different approach.
And that’s probably because of the sharp increase in deepfake cases in Asia.
According to the Global Initiative Against Transnational Organized Crime, the APAC region, which is the Asia-Pacific region, experienced a 1,530% increase in deepfake cases between 2022 and 2023.
So in 2019, the Chinese government introduced laws mandating that individuals and organisations disclose when they’ve used deepfake technology in videos and other media.
And since January 2023, there’s also regulation for deepfake providers.
They have to establish procedures throughout the life cycle of the technology, for example obtaining consent, validating identities, and reporting illegal deepfakes.
So in the end, it’s a regulation of deep synthesis technology, but it includes deepfakes, which is great.
Lisa Leong:
And what’s interesting about this is no matter how proactive the frameworks are, I’m wondering, practically speaking, how do you find and track down the perpetrator and actually call them to account in any meaningful way, Tara?
Tara D’Aigle-Curley:
Well, that’s the difficulty because sometimes you cannot even know who the perpetrator is.
There are always different actors involved in creating a deepfake.
So you have, of course, the two victims, because there are two victims: the person whose face is used and the person whose body appears in the original material.
And then you have the creator of the deepfake.
Usually these people use hashing technologies to make sure they can’t be linked to any of the images they’re using.
Sometimes they’re also using blockchain to avoid being caught.
But blockchain is also being used for the detection of deepfakes, which is a great use of blockchain technology in the deepfake world.
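To make that last point concrete, here is a minimal sketch of hash-based media provenance, the basic building block behind the blockchain detection approaches Tara mentions. It is an illustration under stated assumptions, not any particular product: the in-memory dictionary stands in for an append-only on-chain registry, and `register_original` and `looks_authentic` are hypothetical names chosen for this example.

```python
import hashlib

# Stand-in for an on-chain provenance registry; in practice this would be
# an append-only ledger written to when the original media is published.
PROVENANCE_REGISTRY: dict[str, str] = {}

def fingerprint(path: str) -> str:
    """Compute the SHA-256 digest of a media file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def register_original(path: str, publisher: str) -> None:
    """Record the fingerprint of an authentic file against its publisher."""
    PROVENANCE_REGISTRY[fingerprint(path)] = publisher

def looks_authentic(path: str) -> bool:
    """True only if the file is byte-identical to a registered original.

    Any edit, including a deepfake face swap, changes the hash, so a
    missing entry means the file is not the registered original.
    """
    return fingerprint(path) in PROVENANCE_REGISTRY
```

A cryptographic hash only proves byte-level identity, so real provenance systems typically pair it with perceptual hashing or signed capture metadata to survive benign re-encoding.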
Lisa Leong:
I am fascinated by how privacy can play a part in this area of deepfakes.
And I know that that is your area.
So tell me about the protection that privacy law might afford individuals.
I mean, we could start with celebrities and then go to the people on the street like you and me.
Tara D’Aigle-Curley:
Well, in Canada, the rules are the same whether you’re a celebrity or not.
So the great thing about our regime in Canada, I think, is the notion of consent.
So it’s impossible in Canada to collect personal information without consent.
There is also the principle of necessity or of minimisation of data.
All of these principles go together, so if you ever knew who the perpetrator of the deepfake was, you would probably be able to complain to an authority about the violation of your privacy, based on the fact that you didn’t give proper consent, or because the information used wasn’t necessary for the stated purposes.
So this is something that’s quite interesting in Canada.
Consent is the basis for everything.
In other privacy laws, it’s different.
Sometimes you have other means or ways to collect or use personal information.
So in those cases, you would have to make sure you had a proper legal basis for collecting and using that personal information to create a deepfake video or voice recording.
Lisa Leong:
Where would you like to see the law go? What do you think is the most effective here, Tara?
Tara D’Aigle-Curley:
I kind of like China’s approach, I have to say.
There are a lot of articles I’ve read about deepfakes and, you know, everybody has their own opinion about what regulation should look like.
I don’t really think a ban is a great idea, though there are some people who advocate a ban on deepfakes.
There are ways deepfakes are useful, and banning something is often when you see it spiral out of control.
So I would not do that.
I also think the EU’s approach is a bit difficult to operationalise, you could say, because 6% of global revenue is certainly quite a heavy penalty, but it’s rarely a company that creates a deepfake that causes harm to someone.
It’s mostly individuals sitting behind their computers doing these things.
So I like China’s approach, where you tell people that you’ve created something with AI and that the information is not real.
Then people can decide with their own minds whether the information is valuable to them or not.
That’s where I see probably the best way to deal with deepfakes.
As long as you know they’re deepfakes, you can decide for yourself whether you believe the information or not.
Lisa Leong:
So how can people become better educated about deep fakes?
Tara D’Aigle-Curley:
Well, that’s the key.
If you want to know whether you’re in front of a deepfake, you need to see more and more deepfakes.
If I were a CEO or a board member at a company, let’s say a financial institution or an insurance company, which are really easy targets for deepfakes, I would definitely have policies in place, and I would train my employees by having them listen to these fakes and see these fakes.
I know it sounds crazy, but it’s probably the best way to train your employees.
Lisa Leong:
Is there a good place to go to find these technologies so that we’re not scammed in the process of finding them?
Tara D’Aigle-Curley:
There are over 100,000 AI models that can create these fakes, but fewer than 3% of AI models can detect them.
And we have a problem with deep learning because it’s AI learning from itself: if you have a tool that detects deepfakes, the deepfakes get smarter and try to avoid being detected.
So these technologies have to be constantly updated.
It’s quite an exercise for the people creating these tools, but at the moment, having these tools in place is probably the best way to protect yourself from deepfake fraud.
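As a rough sketch of what one of those detection tools looks like in practice, here is how you might run a suspect image through a binary real-versus-fake classifier with the Hugging Face transformers pipeline. The checkpoint name is a placeholder assumption, not a specific product; any image-classification model trained on real and AI-generated faces would slot in, and, as Tara notes, it would need constant retraining to keep up.

```python
from transformers import pipeline

# Placeholder checkpoint name: substitute any image-classification model
# trained to separate real faces from AI-generated ones.
MODEL_ID = "some-org/deepfake-image-detector"

# Build a standard image-classification pipeline around the detector.
detector = pipeline("image-classification", model=MODEL_ID)

def score_image(path: str) -> dict[str, float]:
    """Return the detector's label-to-confidence scores for one image."""
    results = detector(path)  # e.g. [{"label": "fake", "score": 0.97}, ...]
    return {r["label"]: round(r["score"], 3) for r in results}

if __name__ == "__main__":
    print(score_image("suspect_frame.jpg"))
```

Scores like these are probabilistic, so in a corporate setting they would feed a human review workflow rather than deliver an automatic verdict.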
Lisa Leong:
It’s getting to the stage where we need barcodes or something to identify ourselves, isn’t it?
This is how it happens, Tara.
Tara D’Aigle-Curley:
Yeah.
I hope not.
Or chips inserted inside us.
Lisa Leong:
And being in this area, Tara, do you just feel scared for humanity?
Tara D’Aigle-Curley:
Well, being in privacy brings a lot of anxiety in this world.
But, you know, it also brings great challenges: trying to protect your clients in situations like this.
I think it kind of brings out your creativity.
So, you know, when you have big challenges, that’s the time where you need to be creative and try to find solutions that are a bit out of the box.
And privacy professionals do that every day.
So we’re kind of used to it.
I’m not saying that 40 years from now I won’t be living in the woods without a cell phone. It’s possible.
But for the moment, it’s just a great challenge.
Lisa Leong:
Well, if you disappear, we’ll know that you’ve run away, Tara.
Tara D’Aigle-Curley:
My name may not be Tara by then.
Lisa Leong:
Well, thank you so much for your time, Tara.
And this is Lisa Leong, the real Lisa Leong signing off.
Tara D’Aigle-Curley:
Thank you.
That was fun.
Lisa Leong:
Thank you.
That was Tara D’Aigle-Curley, a senior lawyer at ROBIC, a member of the IPH network.
Thanks to our producer, Cara Jensen-McKinnon. This podcast was brought to you by IPH, helping you turn your big ideas into big business.
I’m your host, Lisa Leong.
Bye for now.