The moral dilemma for digital and data

We all know the power of digital advertising, which is why it's eating its way through most other media. Globally, digital spending is about to overtake TV to become the most invested-in medium, a trend that shows no sign of abating.

Digital's ability to deliver a relevant message in the moment is unparalleled. As a new father, I'm learning from personal experience, from friends and family, and from my good friend the internet. The number of questions you face from day one is endless, and the realisation that there is a whole world of questions on a topic I knew little about is inspiring. At 3am, when friends and family aren't available, the internet is your best friend.

But in living a huge part of our lives online, we leave a trail of personal and private information. This can be used for the collective benefit of society, or it can expose individuals to advertisers who may abuse it. My question is whether advertisers' morals have matured.

Digital companies now hold a level of information about each user that some consider untenable, with advertisers clambering to outbid one another to reach the right user at the right moment. Chances are that, unbeknownst to them, users are continually sharing these moments with the world of advertising.

So who sets the rules as to what information advertisers can use, how personal the data can get, and how young is too young to target a potential customer? From an age perspective, Facebook sets this at 18 but has recently launched Lifestage, an app designed specifically for teenagers. To be clear, it carries no advertising, but what does it mean for users' data in the years to come? Google prohibits advertising to children under the age of 13, although there are ads on YouTube Kids, provided they are 'family friendly'. Again, who sets the rules around 'family friendly' content and the types of products that can be advertised? Each company sets its policy as it sees fit. And what else are they supposed to do?

What about vulnerable audiences, such as children? Although illegal in some countries, it is simple enough to target them, irrespective of how good or bad the product is for the individual. Less scrupulous technology or content providers will bend the rules on how they use data, and stretch best practice on whether they allow the targeting of vulnerable audiences who don't necessarily understand the how or the why. There is a huge amount of high-quality, beneficial, educational and useful content and technology online for children. But who is setting the rules on how young is too young, and at what point should users be protected by law?

What about personal or private information? Again, where do you draw the line between useful, relevant messaging and the abuse of personal data? The medical industry holds many of the conundrums: it has potential solutions to private personal circumstances, but is it morally right that advertisers can see who is likely to be suffering from a private medical condition, especially if the individual isn't aware of how their data is being used?

What about the use of digital devices themselves? With apps and websites designed for a sticky user experience, it is no surprise that children are becoming addicted to the internet, or developing back and neck pain from staring at their phones all day (unlocking them an average of 80 times). What else do you do that is so important you do it 80 times a day?

Whose responsibility is this: global bodies, governments, the tech companies collecting the data and building the software to target users, or the brands keen to reach their target market? Each group holds some of the responsibility. Or should the individual be held accountable? If individuals are not aware of how and why their data is being used, or of how they can prevent it being used against their will, can you really blame them?

For a communications industry, we are not very good at communicating some of these difficult issues.

Collectively, the industry should communicate with all users (also known as real humans, with hearts and minds) so they understand how their data is used and why they're being targeted, and then leave it to their own good judgement whether to set advertising controls (tracking-tag blockers, ad blockers and filters) to prevent their data being misused.

Without this, ignorance is bliss and data is misused again and again. Perhaps technology created to protect the user will become standard practice. In the future it is possible that consumers may not be willing to spend their hard-earned money on a brand that goes against their moral code.

The EU's Cookie Law was legislative action on an EU-wide scale: websites that serve EU users must inform them about the data being collected and offer them the right to refuse cookies. There will always be ways around it, but it is the kind of legally backed step that needs to happen if digital content, development and advertising are to flourish in a positive environment that benefits all parties.
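In practice, that "inform and offer refusal" requirement usually means gating every tracking call behind an explicit opt-in. A minimal sketch of the idea (illustrative only: `createConsentGate` and its API are invented for this example, and real sites would use a consent-management platform rather than hand-rolled code):

```javascript
// Illustrative consent gate: trackers registered before the user decides
// are queued, and only run if and when consent is granted.
function createConsentGate() {
  let consented = false;
  const queued = [];
  return {
    grant() {
      consented = true;
      // Flush everything that was deferred while awaiting a decision.
      queued.splice(0).forEach((fn) => fn());
    },
    refuse() {
      consented = false;
      queued.length = 0; // drop deferred trackers entirely
    },
    track(fn) {
      if (consented) fn(); // fires immediately once opted in
      else queued.push(fn); // otherwise wait for the user's choice
    },
  };
}
```

The point of the pattern is simply that no cookie is set and no pixel fires until the user has been informed and has actively agreed, and that refusal discards the queued calls rather than merely delaying them.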

This type of regulation needs to continue. Otherwise there will eventually be a much bigger and much more negative backlash, which will be bad for business for everyone involved.

What's that I read? WhatsApp, the end-to-end encrypted messaging app, is now sharing data with Facebook for advertising purposes? Quick, let's get it on the plan! But should we, so blindly?


Alistair Burton, Regional Digital Director at Initiative