Do you ever feel like your phone knows what you’re thinking? What if your computer could tell how you’re feeling? Or could a robot be your therapist?
Emotion AI is an evolving technology that uses algorithms to analyse how people feel through their words, audio and video. From healthcare to chatbots to transportation, this episode unpacks the many benefits Emotion AI might offer people and society.
In this episode, ABC broadcaster Lisa Leong is in conversation with Alice Tseng, a Principal at Smart & Biggar. They unpack how AI can analyse human emotion, and how it might respond to people’s needs, such as providing 24/7 therapy, detecting false insurance claims or even determining whether someone is safe to drive.
For more insights on the importance of IP in turning ideas into commercial realities, be sure to follow From Idea to Intellectual Property.
To be notified when future episodes drop, follow us on Apple, Spotify or your preferred podcast platform.
Listen to the full episode here:
Transcript
Hey, Siri.
Uh-huh.
It’s Lisa Leong.
How am I feeling today?
You’re okay, and I’m okay, and this is the best of all possible worlds.
Thanks, Siri.
You’re welcome.
Do you ever get the feeling that your phone might actually know you better than anyone else?
It knows what you’re Googling, it knows where you live, and with new developments in Emotion AI, it might soon be able to accurately label how you feel.
Hello, I’m Lisa Leong, and welcome to season two of From Idea to Intellectual Property.
It’s a podcast about today’s big ideas and the IP considerations behind them.
Alice Tseng is a powerhouse combination of regulatory lawyer, patent agent, pharmacist, and life sciences star.
She’s a principal at Smart & Biggar in Canada.
Hello, Alice.
Hello, Lisa.
I’ve heard about AI in healthcare, mainly in terms of patient intake and management.
But you’ve been advising in a really interesting area, Emotion AI.
In a nutshell, what is Emotion AI?
Emotion AI is basically using technology, whether it’s text, audio or video, to detect and interpret someone’s emotions, and then using that to help them.
And how can that be of benefit in healthcare?
What are some of the areas?
Not everyone actually is able to understand their emotions.
And even for people who can, not everyone can actually communicate their emotions, whether because they can’t or they don’t want to.
So this could come up in situations like depression, it could be stress, it could be anxiety, it could be dementia, or some people are actually just not as verbal.
Let’s drill down into one area where Emotion AI is currently being used and researched, and that’s in the area of depression.
So how would AI be used, as you say, to pick up on emotion?
How does that work?
You know, whether it’s text, audio, or video, as an example.
So text would be more substantive content in terms of actual words.
Certain words are associated more with positive sentiment versus negative sentiment.
In terms of audio, audio would be things like, you know, what’s the energy in your voice, your intonation, pausing.
Video might be facial expression.
And so as an example, it could be a chatbot or it could be a robot.
You know, you’re speaking to a robot therapist and they’ll ask you certain questions.
Depending on how you respond, both in terms of the content and how you say it, they can either continue on that specific topic or go to a different topic.
As an example, you know, when you’re responding and you have a downward gaze, then they know this might be, you know, a more problematic area for you.
The robot might ask you further about that.
Whereas if you’re responding quickly, for instance, they might just move on because this is not really an issue for you.
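To make that concrete, here is a minimal sketch of the kind of turn-taking logic Alice describes. The word lists, the gaze flag and the delay threshold are all invented for illustration; a real system would use trained models rather than hand-written rules.

```python
# Toy sketch of emotion-aware turn-taking in a therapy chatbot.
# Word lists, thresholds and cues are illustrative only.

NEGATIVE_WORDS = {"failure", "hopeless", "tired", "alone", "worthless"}
POSITIVE_WORDS = {"happy", "hopeful", "rested", "proud", "calm"}

def sentiment_score(text: str) -> int:
    """Crude lexicon score: each positive word adds 1, each negative word subtracts 1."""
    words = text.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

def next_action(text: str, gaze_down: bool, response_delay_s: float) -> str:
    """Decide whether to probe deeper on this topic or move on to the next one."""
    # A downward gaze, a long pause or negative wording may flag a problematic area.
    if gaze_down or response_delay_s > 5.0 or sentiment_score(text) < 0:
        return "probe_deeper"
    return "move_on"

print(next_action("I feel tired and alone lately", gaze_down=True, response_delay_s=6.2))
# -> probe_deeper
```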
And then how would this information then be used?
So it can be different things, whether it’s a robot or a chatbot.
They could respond in different ways, sometimes repeating back or rephrasing what you’ve said in a more positive way. As an example, perhaps you’ve said something like, I’m a failure.
And maybe they’ll say things like, this was a failure, or, that event was a failure, not you are a failure.
You know, if you’re hooked up with sensors, they might be able to assess your heart rate in terms of how you’re feeling and adjust the conversation that way.
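At its simplest, that reframing step could be a pattern rule. This sketch assumes a single hand-written pattern and a hypothetical heart-rate threshold from a wearable sensor.

```python
import re

def reframe(statement: str) -> str:
    """Rewrite an 'I am a failure' self-judgement as an event-judgement (toy rule)."""
    return re.sub(r"\bI(?:'m| am) a failure\b",
                  "that event was a failure, not you",
                  statement, flags=re.IGNORECASE)

def adjust_pace(heart_rate_bpm: int) -> str:
    """Slow the conversation if a (hypothetical) sensor reports an elevated heart rate."""
    return "slow_down_and_soothe" if heart_rate_bpm > 100 else "continue_normally"

print(reframe("I'm a failure"))  # -> that event was a failure, not you
print(adjust_pace(112))          # -> slow_down_and_soothe
```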
And in fact, why don’t we look at using Emotion AI beyond just detecting how someone is feeling, in the area of dementia?
With dementia, a lot of people can be quite anxious and agitated.
And having the right music can be huge in terms of alleviating that aspect.
So music can help in so many different areas.
Based on your biometric data, as well as some of your self-assessment data, these systems can actually curate music, which may help alleviate some of your anxiety.
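As a very simplified illustration of that curation logic, with made-up track metadata, an arbitrary agitation threshold and a 1-to-10 self-assessment scale:

```python
# Toy music curation: prefer slow, low-energy tracks when signals suggest agitation.
# Track metadata, thresholds and the anxiety scale are invented for illustration.

TRACKS = [
    {"title": "Morning Light", "tempo_bpm": 60,  "energy": 0.2},
    {"title": "City Run",      "tempo_bpm": 150, "energy": 0.9},
    {"title": "Quiet Garden",  "tempo_bpm": 70,  "energy": 0.3},
]

def curate(heart_rate_bpm: int, self_reported_anxiety: int) -> list[str]:
    """Return calming tracks when biometric or self-assessment data suggest agitation."""
    agitated = heart_rate_bpm > 95 or self_reported_anxiety >= 7
    if agitated:
        return [t["title"] for t in TRACKS if t["tempo_bpm"] < 90 and t["energy"] < 0.5]
    return [t["title"] for t in TRACKS]

print(curate(heart_rate_bpm=104, self_reported_anxiety=8))
# -> ['Morning Light', 'Quiet Garden']
```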
With dementia, in a lot of cases, people aren’t able to articulate how they’re feeling.
And it’s very exhausting for caregivers.
The reality is, I guess, if you have dementia, you may not always have the same caregiver, it’s not always family.
So they don’t actually know you that well.
And even for the people who do know you, they’re busy, right?
And they don’t need this extra thing to have to figure out all the time, how are you feeling?
What can I do to make things better?
Of course, they will do that, but it’s always nice to have technology to help them.
I think one of the issues with dementia is, as you mentioned, caregiver compassion fatigue.
And so having assistance for those downtimes, so that the person with dementia could always be looked after by something or someone at all stages, that sounds incredibly helpful.
Yeah, I mean, one benefit or another benefit of Emotion AI is it’s always accessible.
Let’s say it was more depression, okay?
If you wanted to communicate something to someone at 3 o’clock in the morning, you may not have a caregiver at that time.
But if you had this robot therapist, whenever you want, 24 hours a day, that would be accessible.
You don’t have to plan it in advance, because not everyone’s going to know in advance that they’ll need a therapist at a certain time.
And then, I mean, certainly from a cost, a time, logistics perspective, there are definitely benefits in terms of technology.
I’m in no way saying this is a replacement for a human, but it’s supplementary.
It can be a start.
I can imagine extrapolating from that, that the data set that you could collect at all times would be quite useful for the physician, for the experts to then analyse over time.
That’s quite true.
Sometimes Emotion AI can be used in conjunction with human therapists.
So for instance, as I was saying earlier, if you had that conversation with a robot therapist, then maybe there’d be a transcript or a summary of the conversation, and it would be sent to the human therapist before they actually saw the person.
And so they could already get up to speed more quickly, maybe focus on certain specific issues that had been gleaned by the robot therapist.
That’s how AI and machine learning work.
You build a data set, right?
So your information is only as good as the data set.
In time, when you have more data, that’s how you get more accurate information for society as a whole.
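In code, that “more data, better model” loop is the standard supervised-learning pattern. Here is a hedged sketch using scikit-learn; the four inline examples are invented, and a real system would need a far larger and far more representative labelled set.

```python
# Minimal supervised text-sentiment pipeline; the labelled examples are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I feel hopeful today",
    "everything is pointless",
    "I slept well and feel calm",
    "I can't stop worrying",
]
labels = ["positive", "negative", "positive", "negative"]

# Accuracy generally improves as the labelled data set grows and diversifies.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["I feel calm and hopeful"]))  # -> ['positive']
```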
Let’s go down to the limitations.
So as part of my ABC show, This Working Life, I actually participated in an early AI biometric mirror experiment.
And it was used at the time to determine if you would be a good fit for a workplace.
So I submitted three photos of myself and it analysed me in terms of psychometric information.
So age, gender, it assessed how emotionally stable or unstable I was, how weird I was, how attractive I was.
And guess what it revealed?
You’re brilliant.
It labeled me as African American.
And that I was weird, yes.
And then I found out that it was because the data set was crowdsourced.
And it was crowdsourced from American Caucasian men who basically volunteered.
And so many of them weren’t working and they were about 18 to 25 years of age.
Oh, interesting.
I mean, that just shows you the bias in the algorithms that power this AI, because it’s all about the data set.
So can you go deeper on this for us in terms of bias and what those limitations might be?
Sure, absolutely.
And I think what you said about self-selection is really interesting; in your case, it was interesting that many of the volunteers weren’t working.
But in other cases, it can often be more educated people, or people who can easily communicate.
And it depends on the technology, too. Can you imagine if it was self-selection based on everyone who has a smartwatch?
In that respect, it’s already biased in terms of socioeconomic status.
So in terms of healthcare, when you’re dealing with things like text and voice, for instance, right?
Different cultures, right?
Some are more animated, some are more reserved.
If you’re trying to assess whether someone has dementia based on a long pause before they respond, well, maybe it’s because English is not their first language, as opposed to them actually having dementia.
And that has been a real issue.
So bias is huge, and it’s something you need to be aware of.
You have to address it and do the best you can to have a training data set that’s reflective of the population you’re trying to capture.
And one thing they say helps is for the people creating the technology to be more reflective of the population too, so not just the training data set itself, but also the employees, the developers, the researchers. All of that is important.
The people most likely to pick up on “oh, there’s a miss there, this isn’t quite right” are, I think, sometimes people in that category themselves.
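One concrete guard against the kind of skew Lisa ran into is to compare the training set’s demographics against the target population before training. This sketch assumes hypothetical group labels and an arbitrary 10-percentage-point tolerance.

```python
from collections import Counter

# Hypothetical demographic labels attached to training samples.
training_samples = ["18-25_male"] * 80 + ["40-60_female"] * 15 + ["65+_female"] * 5

def representation_report(samples: list[str], target_shares: dict[str, float]) -> None:
    """Flag groups that are badly under- or over-represented versus the target population."""
    counts = Counter(samples)
    total = len(samples)
    for group, target in target_shares.items():
        actual = counts.get(group, 0) / total
        status = "OK" if abs(actual - target) < 0.10 else "SKEWED"
        print(f"{group}: actual {actual:.0%} vs target {target:.0%} -> {status}")

representation_report(
    training_samples,
    {"18-25_male": 0.15, "40-60_female": 0.25, "65+_female": 0.20},
)
# 18-25_male: actual 80% vs target 15% -> SKEWED, and so on.
```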
Yeah, good point.
What are some of the other limitations then?
Well, one of the other issues with AI generally is transparency.
In this case, if you’ve got a robot therapist, hopefully it’s pretty clear that it’s not a human.
But you know what?
It’s actually not always abundantly clear.
Like I would hope in health care, they’re crystal clear about this.
But you can imagine that you might have an adolescent using therapy.
And perhaps they’re more gullible.
So I do advertising work as well.
So for influencers, you have digital influencers, virtual influencers.
And you know what?
It’s not always clear that they’re not humans.
So transparency is important in terms of making sure people are aware that this is actually technology, not a human.
And the second part of that is not just that people know AI and machine learning are being used; it would also sometimes be helpful to know how the technology made that decision.
How did it make the determination that this person has dementia or not?
What data was used?
What algorithm was used?
The more information you have, I guess ultimately the more accuracy you will have.
There’ve been some very public data breaches recently in the news.
What about that exposure in terms of this very sensitive information that is being collected with Emotion AI?
Yes, that is a risk for sure.
I think it goes back to what we said earlier in terms of some people like using Emotion AI instead of a human because they think it’s better from a privacy perspective.
And for other people, you don’t necessarily know how your information will be used, both deliberately and in the case of breaches, right?
So if your personal information is going to be used for research, it’s not always clear to you that it is being used.
There are definitely privacy concerns depending on what type of personal information is used.
And certainly it’s different in terms of your text versus your image, right?
And even your voice.
Alice, you’re a legal advisor in this space.
How do you help?
And what is the sort of considerations and legal framework in which something like Emotion AI is operating?
Well, in Canada, we have the proposed Artificial Intelligence and Data Act.
So that is not yet legislation.
And when that gets passed, there will be more requirements in terms of transparency, ensuring that any use of AI systems is not discriminatory.
Another thing that is very important in terms of the use of AI in healthcare: currently, whenever you have a drug or a medical device, subject to limited exceptions, you need regulatory approval.
So the whole regulatory framework in terms of AI needs to be modernized.
Let’s say you had some software or medical device.
When you make certain types of significant changes, you need an amendment to your medical device.
But when you’re dealing with AI and machine learning, the software is always evolving, so it’s not possible to be constantly updating your approval from a regulatory perspective.
So the regulatory framework itself with Health Canada needs to be modernized.
You already mentioned Emotion AI and potentially using it in terms of adolescents and helping them identify their moods and emotions.
What are some other uses for Emotion AI?
A ton, actually.
So fraud.
Fraud.
You know, one use is in the insurance industry.
When people are submitting claims, based on how their voice sounds, you might be able to predict whether they’re more likely to be lying.
What?
You’re lying.
Really?
So that is one use.
Because apparently a fair number of people, not the majority by far, but a material number of people, will lie on an insurance claim.
And this will help in terms of detecting that.
What do you detect?
The voice.
I can’t tell you, otherwise that would reveal the insurer’s secrets.
And then you would try to circumvent that when you’re calling.
Tell me, and then you have to kill me.
Another one which I think is interesting is in education.
So if you have Emotion AI software which can detect that a child is becoming frustrated, for instance, right?
And maybe it’s not even just a child.
Maybe it could be anyone.
You might make whatever the activity is easier.
So they’re less frustrated.
Or if you could tell that they’re getting bored, maybe you make it more challenging.
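Mechanically, that is a small feedback loop. Here is a sketch with invented emotion labels and a 1-to-10 difficulty scale; in practice the labels would come from an Emotion AI classifier.

```python
# Toy adaptive-difficulty loop: ease off when frustrated, ramp up when bored.
# The emotion labels would come from an Emotion AI classifier; here they are given.

def adjust_difficulty(current_level: int, emotion: str) -> int:
    """Nudge difficulty down on frustration, up on boredom, within a 1-10 range."""
    if emotion == "frustrated":
        return max(1, current_level - 1)
    if emotion == "bored":
        return min(10, current_level + 1)
    return current_level

level = 5
for observed in ["frustrated", "frustrated", "engaged", "bored"]:
    level = adjust_difficulty(level, observed)
print(level)  # -> 4
```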
Give me another one.
I mean, other examples would be just in terms of retail.
If you were to go to a store, for instance, and the system can see the consumer’s mood or their reaction, then maybe on a screen you show them different, more expensive merchandise.
Depending on their mood, you show them something that’s brightly colored or festive, or more regular clothing, things like that.
I guess another one that I find pretty interesting is call centers.
When someone calls into a call center, based on their voice and other aspects, if the system can tell that you’re really upset, maybe it routes you right away to a certain person who can better handle that type of emotion.
If the conversation is not going well, maybe they know to escalate faster to the manager or to someone else.
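The routing rule itself can be very plain once an emotion score exists. This sketch assumes a hypothetical upstream model that returns an “upset” probability between 0 and 1.

```python
# Toy call-center routing on top of a hypothetical voice-emotion score.

def route_call(upset_probability: float, minutes_on_call: float) -> str:
    """Send distressed callers to specialists and escalate stalled calls to a manager."""
    if upset_probability > 0.8:
        return "senior_agent"         # trained to handle strong emotion
    if upset_probability > 0.5 and minutes_on_call > 10:
        return "escalate_to_manager"  # the conversation is not going well
    return "standard_queue"

print(route_call(upset_probability=0.9, minutes_on_call=2))  # -> senior_agent
```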
I can see this absolutely exploding.
How busy is it in this area of Emotion AI?
It is a growing area because it’s got so many uses.
The more accurate the information you can get about any individual, their reaction, response, medical condition, anything like that, the more useful it is to companies.
What are you personally most excited about?
How might you use Emotion AI, Alice?
I have a pharmacy background, so I am from healthcare.
And I actually think it’s fantastic.
I think it’s fantastic from a healthcare perspective because I do see the need.
And given that a lot of people have accessibility issues with medical therapy, anything that can help improve access, and improve accuracy in diagnosing or treating a medical condition, I think that’s actually really exciting.
Clients can use it with their lawyers and realize lawyers are human too.
This is very true.
This is very true, just like the call center.
I’ll use it with someone.
And if it’s not their day today, then maybe I don’t want to deal with that.
Oh, yes it is.
Oh, no, it’s not.
It’s not.
Buh-bye.
Actually, it could be quite useful, couldn’t it?
It could sort of give you a little alert when it was a good time to call someone because they’re in a better mood, a more receptive mood.
How about that?
Or even your boss, right?
Oh, yes.
Boss Emotion AI.
I love it.
I can absolutely see where this is going.
Thank you so much, Alice.
Thank you, Lisa.
It was a pleasure.
And that was Alice Tseng, Principal at Smart & Biggar in Canada.
Thanks to our producer, Kara Jensen-McKinnon.
This podcast is brought to you by IPH, helping you turn your big ideas into big business.
I’m your host, Lisa Leong.
Bye for now.
Bye now.