The Challenge of our Generation – Artificial Wisdom
Douglas Adams, in his Dirk Gently’s Holistic Detective Agency, imagined a world where people could buy an Electric Monk – a labour-saving device that does your thinking for you, in the same way that a washing machine saves you from having to do the washing, or a microwave from having to cook. 35 years after the book was published, OpenAI has, in effect, just launched Adams’ Electric Monk in the form of ChatGPT.
But ChatGPT has also polarized opinion on the progress of AI. On one side are the believers, who embrace the new technology almost unconditionally; on the other are the concerned, who recognize that there are potential issues with this leap forward. But the concerned voices are not dissenters – Luddites – who want to smash the new technology or turn back time; the issues and criticisms they raise are legitimate in this new world. Take, for example, this fascinating article by the author Lionel Shriver on the impact on creative writing – “AI has mastered the art of terrible writing”, in The Times. For this next phase of AI development to be successful, it’s important to address these concerns.
In essence, I think the challenge can be summarised as the need for Wisdom in the Age of AI – how to identify what really matters, and what is genuinely real or authentic, in the twenty-first century.
To consider this, it’s useful to start with a definition. Wisdom is:
“having the knowledge of what is true or right coupled with just judgement as to action”.
So, let’s take an example – perhaps the most immediate one, given what everyone on LinkedIn is talking about. Now that ChatGPT can answer just about any question (provided the answer was available before 2021!) in a discursive, essay-style format, how can you tell who genuinely wrote anything? One school has already changed its marking system so that English essays are no longer accepted as homework, because a ChatGPT response earned an A*, so there is no longer any guarantee that anyone actually did the work. And because it is generative AI, you can tweak the response so it fits your own tone and style of writing. The technology has learned so much that it can go beyond plagiarism, adapting its output to an individual request. It has, in other words, achieved mastery of its subject. And its specialist subject? Everything.
Furthermore, the challenge is perhaps bigger in the field of business. If every blog or LinkedIn post can now be programmed, how do you express your brand’s individuality? As a customer, how do you spot what is really different?
To quote the great Groove Armada,
“if everybody looked the same, we’d get tired of looking at each other”.
I don’t think Groove Armada set out to be prescient, but welcome to Sleep Central, brought to you by ChatGPT (other Generative AI tools are available).
The argument goes that, in a world overloaded with information, wisdom is hard. For an entire generation, we have been schooled in the dopamine-driven information economy: a micro-risk, micro-reward knowledge-sharing environment. The distraction techniques of advertising and the instant dopamine hits of gaming have been turned into some of the most powerful business models ever known. The result is a population of instant-information addicts, in which the human mind has become the product, for companies to influence in ways that people, as individuals, are not ready for. So, the argument goes, what chance have we, and wisdom, got?
I would say, quite a lot!
Let’s start with a response to ChatGPT. I love the fact that a solution to the challenge of ChatGPT being used by school pupils and students is already emerging: GPTZero, a program written by a Princeton student to help spot AI-generated answers.
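Detectors like GPTZero reportedly lean on perplexity: AI-generated text tends to be more statistically predictable than human prose. The sketch below is a deliberately toy illustration of that idea using a character-bigram model built with nothing but the Python standard library – it is not GPTZero's actual method, and the reference corpus and sample strings are invented for demonstration only.

```python
import math
from collections import Counter

def bigram_model(text):
    """Build character-bigram and unigram counts from a reference corpus."""
    pairs = Counter(zip(text, text[1:]))
    unigrams = Counter(text)
    return pairs, unigrams

def perplexity(text, model, vocab_size=128):
    """Average per-character perplexity under the bigram model.
    Add-one smoothing keeps unseen bigrams from zeroing the probability."""
    pairs, unigrams = model
    log_prob = 0.0
    n = 0
    for a, b in zip(text, text[1:]):
        p = (pairs[(a, b)] + 1) / (unigrams[a] + vocab_size)
        log_prob += math.log(p)
        n += 1
    return math.exp(-log_prob / max(n, 1))

# Hypothetical reference corpus standing in for "typical" text.
corpus = "the quick brown fox jumps over the lazy dog " * 50
model = bigram_model(corpus)

predictable = "the quick brown fox jumps over the lazy dog"
surprising = "zxqv jklm wpfg hqzx vbnm qwerty zzzz"

# Familiar patterns score lower perplexity than unseen ones.
print(perplexity(predictable, model) < perplexity(surprising, model))
```

The point is the asymmetry, not the numbers: text that matches the model's expectations is "unsurprising", and a real detector applies the same logic with a large language model instead of a character bigram table.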
This is a great response for two reasons. Firstly, it addresses the Luddite argument: this is not turning the clock back, but embracing the new technology and using it to enforce its own limitations. Secondly, it points to the importance of ethics and wisdom. And I think we, as a data industry, are in the ideal position to inject these into the ecosystem for the future. Indeed, I think it’s our responsibility to do so.
So what do we, as data practitioners – and thus stakeholders and initial custodians of this data-driven environment – do about it? Clearly, this is not an overnight fix; it will evolve over the months and years to come. But I do have some starting points.
Well, firstly, we have to be clear – the believers – Bill Gates, Elon Musk, and indeed the late Douglas Adams – are right. AI will deliver massive benefits to society that we haven’t even considered yet, particularly in operational efficiency, but also in areas like healthcare and medicine. But it needs to be done in the right way.
Secondly, the point about “just judgement as to action” highlights the importance of decision-makers – those who will choose the actions – understanding the data and data systems that they are using.
But I think the fundamental “wisdom” approach will remain the same, and the right one. I think this could be summarised as a holistic approach to AI and data.
It’s important to consider the entire environment or ecosystem that a data-driven project affects, not just the business benefit in terms of revenue uplift. Is the business benefit large enough to justify the entire impact? That’s not a cost-benefit analysis, but a holistic one. What is the energy cost, and carbon impact, of creating more algorithms? What will the impact be on customers? On workers? Does this save them time, or does it mean we need to start retraining people for different, ideally higher-value, tasks?
So, business cases for AI systems need to become more holistic. Perhaps with the exception of some B Corps, business cases for data projects remain focused on a specific business impact, normally revenue uplift. Right now, no-one can be criticised for that. But I think adopting a holistic approach in business is the start of wisdom in the age of AI.
And this will become inevitable. As financial institutions increasingly insist on ethical and environmentally sustainable businesses and investments, this approach will become the norm. Right now, those who follow it – B Corps being the obvious example – will have first-mover advantage.
The UK is leading the push for data ethics to be put into action, as part of the Government’s AI strategy. Through partnerships and connections with some of the UK’s leading data ethics organisations – the Alan Turing Institute, the CDEI and others – we at Station10 can help link your data projects to the principles of the ethical use of AI.