AI is a trending topic with no sign of slowing down, particularly when considering how it might influence the way we work both now and in the future. For many businesses, AI is not new, and is a tool that continues to be implemented to produce more efficient and streamlined work, for the benefit of the business and its customer base.
In this episode of From Idea to Intellectual Property, host Lisa Leong is joined by Spruson & Ferguson Lawyers Principal Mark Vincent, who outlines the ways in which businesses use AI for the benefit of their customers, and the IP considerations practitioners must think about when businesses implement AI systems: who owns the data, what can they do with it, and who owns its output?
Mark draws on his deep knowledge of AI systems to provide cutting-edge insights on this rapidly growing and evolving technology.
For more insights on the importance of IP in turning ideas into commercial realities, be sure to follow From Idea to Intellectual Property.
To be notified when future episodes drop, follow us on Apple, Spotify or your preferred podcast platform.
Listen to the full episode here:
Transcript
Have you ever spent hours on the phone listening to hold music, waiting for customer service?
Well, it turns out, the rise of AI in business might spell the end for that pesky hold music and mark the beginning of a brave new world of better customer service.
We’re sorry, you have reached a number that has been disconnected or is no longer in service.
If you feel you have reached this recording in error, please check the number and try your call again.
Oh, what?
I’ve been holding on for ages.
When are they going to pick up?
Hello, I’m Lisa Leong, and welcome to season two of From Idea to Intellectual Property, a podcast about today’s big ideas and the IP considerations behind them.
In the news and all around us is talk about ChatGPT.
But the question that’s really emerging is, how might AI help me with my business, especially in relation to customer service?
Mark Vincent is principal at Spruson & Ferguson Lawyers.
Mark, tell it to us straight, is AI in the workplace a game changer?
I think we’re all using AI every day.
We’re just not appreciating that AI tools are in use and have been for some time amongst a lot of businesses.
What it’s being used to do is significantly increase the productivity of businesses.
So you’re doing more with less, but also there’s a real potential to change the offering for consumers, to make the consumer experience better.
And some big players like Qantas are already using this tech.
And we’ve all been there.
We’re waiting for our Qantas flight at the airport.
Got the coffee.
Just seen that the flight is delayed.
I make a call to customer service and I’m met with a jingle and told to hold the line.
Mark, how is that experience going to change with this new tech?
So an example is Qantas’ ambitious plans to leverage AI throughout all parts of their business over the next decade.
And I think in part, they think they need to keep up with every other airline, but clearly listening to them and looking at their documents, they want to lead the world in enhancing the customer experience and in improving their airline through use of AI.
There are some examples of the use of AI for flight optimization and ways of making travel more efficient for airlines, including their use of fuel.
They can determine arrival times much better and predict delays much more accurately, but there are also customer-focused areas that we will all benefit from coming into contact with.
And so when you are dealing with Qantas’ helpline, if there are no people involved, you don’t need to wait for people.
You can immediately talk to Qantas through something like an AI chat.
It will know a lot about you.
You won’t need to tell it all of the background information about yourself.
Your identification will be more immediate.
Your needs will be more understood.
And you can have much more of a tailored service.
For example, it might anticipate your question based on your past history, your future plans, and the travel that you have booked.
It can get back to you proactively to tell you if there are disruptions to your flight, if you can delay your trip to the airport, or if you might need to get hold of them for some sort of rescheduling.
When you get to the airport, there’ll be increased use of biometric and other identification systems so that an image could be taken of your bag.
You drop it off.
Qantas knows exactly where it needs to go.
It knows exactly who you are.
You stroll through the airport.
You get on your flight.
Much less use of documentation and interaction with screens.
So I think all of those improvements that I talked about depend on this massively increased computing power that we can leverage and AI technologies that make these things possible in a way they weren’t possible previously.
This all sounds incredibly useful as a customer.
What are some of the red flags that we should be looking for at this moment in time?
Given also that there are some calls for a moratorium, and that people are starting to get worried, especially about biometric security, what could the implications be here?
Yeah, so some of the implications which are worrying people are macro concerns that aren’t going to arise from, say, Qantas’ use of AI.
It’s more macro concerns about computers developing the ability to influence people and control our lives in unanticipated ways.
You know, there’s a fear that we need to pause and think about things before computers take over the world and many doomsday scenarios coming out in that sense.
But there are very real concerns that will apply to every corporation when it implements AI.
There’s a bit of a sense, perhaps from public media or other sources, that it’s the Wild West when you implement AI in your organization.
But that’s not really true.
There’s an existing regulatory framework that does apply to everything you do around AI.
And some of the risks of AI are connected with privacy.
And I know we mentioned biometric identification: incredibly powerful, but with incredible risks to privacy.
There are also the ChatGPT-style learning models of AI.
And they underpin some of the most effective online customer service and chat services.
Because in order to be realistic, in order to be intuitive, in order to give you an experience that you’ll enjoy, they have to be trained on masses of data.
Now, it is, for all practical purposes, not possible to train these models on the volumes of data necessary without sweeping up data which a company might not own from an IP perspective, and which certainly might contain personal information, which gives rise to privacy risks again.
So privacy is a big one, but there are other risks if your AI project’s not monitored properly.
So as with any new technology, companies need to take their time to make sure they understand the technology, understand the risks, train their people properly, and have the right governance structures inside the organization to make sure that they’re going about their AI projects in the right way.
As you’ve worked with businesses that are starting to look at how to use AI, what have you found to be the most useful way to break things down, so that you can look at how AI might work with business processes, Mark?
Yeah, so I think there are a lot of different models around the world now for use of AI within businesses.
And some of those are guidelines and industry agreed codes of conduct, and increasingly there are regulations around the world.
And with international businesses, they do have to have regard to regulations around the world.
And then there are also technical standards, increasingly developed around AI practices, that some clients deploy and take the time to comply with.
And that’s very useful.
But for a lot of organizations that are new to AI projects, the conversation could be a bit more basic than that.
It could be, what are the risks?
And that involves looking at the actual project they’re using, examining the actual data that’s involved.
How do they access the data?
Do they have rights to access the data?
How are they going to deploy it?
And lining it up against things like our Australian Privacy Principles to make sure that there’s enough thought going into the project before they deploy it.
So a risk assessment is a good one.
Is there a surprising risk exposure, particularly in relation to the Privacy Principles that you’re seeing happening in businesses at the moment when they start using AI?
Yeah, I think there’s quite a widespread use of ChatGPT in businesses.
And when you look through the process ChatGPT uses to generate content, it’s well documented that you can get personal information included in the results that you get from ChatGPT because of the way it’s trained.
But also the terms and conditions actually state that your information and your queries that you put into ChatGPT are taken on board by the engine and used to train it for future queries for everyone.
So you can unwittingly share your own organization’s personal information.
Oh, interesting, particularly maybe if it’s a strategic question, Mark.
Exactly.
There are ways in which you can reveal confidential information of an organization through use of ChatGPT.
Now, there are other generative AI models that you can lock down in a more specific way to your organization.
And ChatGPT has some corporate plans and some frameworks that they put around offerings that you have to dig a bit deeper to find.
But if you use the off-the-shelf product and read the off-the-shelf terms, there’s nothing confidential about the information you put into it, nothing private about the information you put into it.
In fact, you warrant that that is the case, that you’re not disclosing confidential or private information because ChatGPT will take it, soak it up and use it.
And any other big exposures that you’re seeing with businesses using AI, Mark?
I think there’s a reputational risk for businesses: if they use AI to help make decisions, and those decisions seem to treat different customers differently, their reputation can suffer.
Also, if customers are surprised by the use of AI and the information that’s been collected, then there’s a risk that that could cause adverse public comment about the business.
So, there is a series of ethical principles promulgated around the world.
There are about 90 different sets of ethical principles that I’ve counted recently, which companies should use for their AI projects.
A lot of that is designed around setting the expectations of stakeholders so that there are none of these surprises.
Where is IP around data at the moment?
I remember in the 90s, it was all about whether or not there was an IP in a database.
And it was complex then.
So where do we sit now with all of the data that’s being accumulated, particularly by businesses in the world?
A lot of data is big data, unstructured data, not amenable to protection by IP.
It’s not a work created by individual authors.
It doesn’t look like the type of work that would be protected by the Copyright Act and it’s not.
So the IP which comes into play to protect your data will be things like confidential information.
So it’s internal information of the company which is secured and treated as a trade secret.
But beyond IP, you’ll have contracts with other parties, like API agreements, which are a common way of sharing data between organizations, including real-time data, which is important to these projects.
So contracts come to the fore, confidential information comes to the fore, and then privacy comes to the fore in assessing these assets.
So yes, it is an intangible asset, like other forms of IP, but it has legal constraints which come from multiple areas that allow you to understand it and commercialize it.
And that’s the thing.
I mean, the data in itself, unless you analyze it or do something with it, is not usually that helpful, and so you might even use third-party AI to help you analyze the data and then identify some trends in it.
Is there any complication with who owns the analysis of the data?
Well, there is, much like the example of ChatGPT I gave earlier.
ChatGPT owns the interactions with their system under their standard terms.
So when you have analysis within an organization, you need to be really clear on who owns the output, what they can do with it.
And you also might use third-party data to come in and help create those insights.
So do you still own the output?
What are the terms of your license to use the third-party data?
Is it sufficient to get the value out of it that you seek?
Data-sharing agreements like these are now entered into every day by large corporations.
And you need to assess whether those agreements meet the commercial needs of the project in question.
As you mentioned, it’s super complicated, Mark.
Do you feel like IP and our IP regulation and frameworks are keeping up?
IP is always dealing with the latest innovations, the latest developments in technology.
In our line of work dealing with technological change, it’s always a question of working out which of our existing laws stretch to fit new scenarios.
The AI situation is a good example of where the existing frameworks do apply to everything a corporation might do with AI.
But they could hold companies back in a way which makes Australian businesses less competitive, because they don’t have the clarity and the freedom that other jurisdictions offer to press ahead with some of these projects.
And as a country, we really have to be careful to identify impediments to the use of beneficial technology and impediments to the growth of our own industries, and make sure we facilitate that growth.
And I think there is a need for more guidance to corporations and changes to some of our regulations to deal with the new technology in the case of AI.
In the case of data, there is a case for a database-specific protection, which they have in Europe.
We don’t have that in Australia.
There’s no database right.
It’s a question of how many of these things get enough profile to meet the regulatory agenda of Parliament.
Singapore is one of the countries really leading regulation around AI.
Tell us a bit about that and what they’re doing.
Yeah, so the Singaporean government has laid out a much more comprehensive path for rollout of AI projects than we’ve seen yet from Australian regulators.
And a number of their regulators have got together and put together a model artificial intelligence governance framework.
They’re already into the second edition of that.
It involves the Privacy Commission equivalent in Singapore and also the Media Development Authority.
And the Singaporean government was keen to make sure that their companies based in Singapore can take advantage of all of the benefits of AI and manage risk within guardrails that they set up.
And so the model framework does guide companies on their internal policies, governance, risk assessment, processes that they should take.
It has a set of ethical principles that can apply.
And importantly, it has a database of case studies of companies adopting AI and how they dealt with and managed the risks.
So that’s really useful.
Where I find it’s particularly useful for companies managing risk is that the Privacy Commissioner says, look, if you follow these guidelines, then that’s relevant to any assessment I might make under the Privacy Act.
It’s likely that the Privacy Commissioner in Singapore will focus somewhere else if they’re looking for privacy breaches to make a public example of rather than someone that has followed their guidelines in this area.
Thinking then about the Singaporean framework, do you think something like that would be a benefit to adopt in Australia?
Absolutely.
I think it would be valuable for our regulators to come together and offer something like the Singaporean model, telling companies: if you follow this, you are effectively living up to industry best practice in this space, and your risk of regulatory problems is much lower.
It also avoids Australian companies having to pick which of the 90 sets of ethical standards to apply, which regulator’s approach from around the world to follow, and which model of governance to use.
I think having a well thought through leadership on the use of AI from regulators is essential.
Mark, absolutely fascinating area and I can see why you love it so much because it would keep you on your toes.
So thank you, Mark.
And that was Mark Vincent, Principal with Spruson & Ferguson Lawyers.
Thanks to our producer, Kara Jensen McKinnon.
This podcast is brought to you by IPH, helping you turn your big ideas into big business.
I’m your host, Lisa Leong.