Since ChatGPT and its ability to rapidly generate text-based content rose to prominence in early 2023, AI has become a fixture in tech news and beyond.
From the WGA strike that brought Hollywood to a standstill, to accidents involving self-driving cars, public discourse concerning artificial intelligence often carries as much caution as optimism.
One area in which AI’s impact has yet to be fully understood is deepfakes. With the technology readily available to imitate faces, mannerisms, and voices of famous figures, even the keenest eye can be deceived. This means that faked content can be circulated with the potential to ruin reputations and even influence election results.
Worse still, it could be detrimental to celebrities’ earning power.
As is often the case, the music industry was the canary in the coal mine for piracy enabled by technological advances. In April 2023, the AI-generated track ‘Heart On My Sleeve’ appeared to be a new collaboration between Drake and The Weeknd. It generated global headlines, and rumours of a Grammy nomination circulated before the track was pulled from streaming platforms.
More recently, the alarm has been raised in other sectors of the entertainment industry. The estate of George Carlin filed a lawsuit to prevent what was billed as an AI-generated comedy set of the late comedian. As his estate’s legal team put it, the show was “a piece of computer-generated click-bait which detracts from the value of Carlin’s comedic works and harms his reputation. It is a casual theft of a great American artist’s work.”
The opportunity created by the new technology is clear: record labels and movie studios can significantly reduce the cost of content production and create new material based on the look and sound of beloved deceased artists.
Unsurprisingly, deepfake tech also raises novel legal questions as to what can be ‘owned’ and what is protectable under law.
The precise legal status of image rights is a matter of debate in most jurisdictions, with celebrities’ legal teams relying on concepts such as copyright, passing off, privacy, and defamation to preserve the integrity and value of their clients’ personal brands.
HOW MIGHT AI AFFECT BRANDS AND AGENCIES IN THE UAE?
While the uncertainties posed by deepfakes to the music and film industries may represent billion-dollar questions in LA and London, these are relatively nascent sectors in the UAE.
However, with all-year sunshine and a famously photogenic skyline (not to mention a favourable tax environment), Dubai is a major hub for social media influencers.
Globally, this sector has grown into an industry worth an estimated US$250bn annually, according to Business Insider.
Leading names can charge in excess of AED 100,000 per endorsement post.
In this context, innovative tech solutions may create opportunities for the UAE’s advertising industry, as brands and agencies might use AI to lower the cost of content production and eliminate the need to pay endorsement fees to influencers.
At the same time, influencers will be keen to safeguard their revenue streams and curtail their replacement by digital simulations. UAE law has long protected individual citizens’ rights to preserve the integrity of their public image and reputation.
While image rights may not enjoy the same protection under law as, for example, copyrighted works, there are other legal concepts available.
Defamation is not only a crime under the UAE’s Penal Code but is also expressly covered by anti-cybercrime legislation (both enacted in 2021, when the UAE federal government issued a raft of best-in-class laws).
Defamation is defined widely to include ‘insults’ and exposing the subject to ‘punishment or contempt’, giving authorities significant latitude in interpretation. This might well cover cases where a person is purported to have endorsed a product or service with which they have no actual connection; if a faked endorsement is seriously off-brand, the effects could be catastrophic.
In addition, anti-cybercrime laws criminalise taking photographs and making footage of citizens without permission, consistent with cultural norms that emphasise the importance of an individual’s right to privacy.
Therefore, it is difficult to conceive that such a legislative environment would permit generators of deepfake content to act with impunity in relation to others’ images.
Agencies in particular should exercise caution when using digitally created influencers. Arguably, even if there is no deliberate attempt to confuse the audience, any similarity between the avatar and a UAE resident may lead to an investigation for defamation.
While legislative bodies across the globe rush to keep pace with technological advances, it is perhaps reassuring that principles established in the era of print hold true in the face of AI.