Are you using data and AI in the right way?

As you know by now, I speak a lot about ethics in data. It is an area that has grown over the last few years, though perhaps not as much as I would have expected given the take-up of data and AI in business. Using data and AI in the right way – respectfully, and for the benefit of humans – is clearly vital.

However, over the last month or so, the landscape around usage of data has shifted, and I expect ethics to take greater prominence in businesses using data from now on, and for this to be a key part in your data maturity model.

So, what’s changed? Firstly, at the end of March, an open letter from AI business leaders called for AI development to be paused (at least for a time, on “giant experiments”) for the sake of humanity. The fact that this letter came from people who have already developed AI is slightly problematic; it can read as an attempt to put up barriers to new entrants by those who have already made fortunes, careers and commercial advantage by doing exactly what they now say others shouldn’t do. It is also, to a degree, an attempt to put the genie back in the bottle – and this genie is very much out.

However, that is not to say they don’t have a point, and the letter is not signed only by heads of big business but also by professors, philosophers and academic researchers, so it is not merely an exercise in self-interest. As the letter itself notes, “advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources”.

The letter is significant because it specifically asks all AI labs working on giant AI experiments to pause development for at least six months, in order to develop “a set of shared safety protocols” that can be “rigorously audited…by outside experts”. In other words, to sign up to a shared set of ethics on how AI should be developed and used, and to be held to account for it independently. The letter seeks to put ethics at the very heart of AI development.

This clearly will not affect most businesses directly, and indeed may only apply to a handful of AI labs, government programmes and large companies. But over time, governments will legislate to ensure AI is used responsibly, and the idea of ethical auditing will become more mainstream. The EU is leading the way with its AI Act, so expect this to be the direction of travel.

The second thing that’s changed is a very public, and very unethical, use of Generative AI. This was a conscious attempt by humans to use Generative AI to do something unethical: an “interview” with Michael Schumacher, the former F1 world champion, who has not been seen in public since a terrible head injury a decade ago. It was published as a “real” exclusive, but had been generated by AI. The backlash was swift – the publisher apologised and the editor-in-chief was sacked. This will not be the last example of these exciting new Generative AI technologies being used unethically.

So now, if you use AI in the wrong way, not only could it replace you – it could also get you sacked. Ensuring you use AI ethically and humanely has never been more important; you don’t have to be running giant experiments to care about data ethics.

If you want to speak to us about this, including running a data assessment to consider how you are using data and what governance and safeguards you have in place, please get in touch.
