Navigating the Changing Landscape of Values-Based Revenue in Generative AI: The Fair Use Dilemma
In the ever-evolving interplay of business ethics, values and revenue generation, a remarkable story took centre stage. When Ed Newton-Rex made the bold choice to part ways with a generative AI company over its stance on copyright, the world took notice and applauded his principled stand.
But what’s truly astonishing is how limited the discourse surrounding the concept of ‘fair use’ has been. The notion that ‘fair use’ fosters creative growth isn’t without merit. Picture a coder or aspiring entrepreneur crafting a ground-breaking model, training it on diverse content sources, without the luxury of obtaining permissions or the upfront financial means for licensing. If every innovator were required to pay for content access in advance, it could smother the very creativity and innovation that the fair use exemption exists to protect.
But of course, that changes when you start making money, and critically, the basis of fair use also changes in an AI world. Fair use rests on a flexible balancing test, which weighs the purpose of the use, the amount of the original used, and the impact on the market for the original. It’s the first and third of these that fundamentally change with generative AI, because the purpose is (almost certainly) commercial in nature – to make money – by literally generating new content. And the model does this by effectively substituting for the original source material on which it was trained, thereby impacting the market for the original.
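To make that balancing concrete, here is a deliberately crude sketch in Python. It is a toy model of the three-factor framing above, not a statement of law: the factor names, the 50% threshold and the majority rule are all hypothetical simplifications, since real fair use analysis is a qualitative judicial judgement, not a computation.

```python
# Toy illustration (not legal advice) of how generative AI flips two of the
# three factors discussed above. All names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class FairUseFactors:
    commercial_purpose: bool    # factor 1: is the purpose of the use commercial?
    amount_used: float          # factor 2: proportion of the original used (0.0 to 1.0)
    substitutes_original: bool  # factor 3: does the output replace the original in its market?

def leans_toward_fair_use(f: FairUseFactors) -> bool:
    """Crude balancing: each factor favouring the rights-holder counts
    against a finding of fair use; a majority must favour the user."""
    against = 0
    if f.commercial_purpose:
        against += 1
    if f.amount_used > 0.5:
        against += 1
    if f.substitutes_original:
        against += 1
    return against <= 1

# A hobbyist experiment vs. a commercial model trained on, and competing
# with, the original works:
hobbyist = FairUseFactors(commercial_purpose=False, amount_used=0.3,
                          substitutes_original=False)
commercial_genai = FairUseFactors(commercial_purpose=True, amount_used=1.0,
                                  substitutes_original=True)

print(leans_toward_fair_use(hobbyist))          # True
print(leans_toward_fair_use(commercial_genai))  # False
```

On the author’s framing, the commercial, market-substituting model fails on exactly the first and third factors, which is the crux of the argument that follows.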
In the first US judgement on fair use, Folsom v. Marsh (1841), Justice Joseph Story famously wrote: “if he thus cites the most important parts of the work, with a view, not to criticize, but to supersede the use of the original work, and substitute the review for it, such a use will be deemed in law a piracy”. It strikes me that this is practically what many generative AI models are doing – superseding the original for commercial benefit. So Ed Newton-Rex is surely correct when he says “ethically, morally, globally, I hope we’ll all adopt this approach of saying, ‘you need to get permission to do this from the people who wrote it, otherwise, that’s not okay’”. It seems to me that Justice Story would agree with him.
Not every generative AI tool is built without consent and without paying original contributors – Adobe Firefly is an example of one that does pay. But what happens if you don’t already have access to your own licensed asset library (which is most people)? The creative-development argument for fair use is still valid.
That’s perhaps where legislators may need to create a mechanism that enables entrepreneurs to reimburse contributors – a contributors’ fund that one subscribes to, perhaps? There is a related question of jurisdiction: “fair use” is a predominantly Anglo-American/Western legal concept, and may be harder to protect or enforce in some countries. However, with events like the AI Safety Summit bringing international communities of lawmakers together, and the recent US Executive Order on AI safety, such a co-ordinated facility or approach feels far more achievable than before.