Digital Essays 2017: Tech agencies and companies: menaces, vigilantes or heroes? – by Elie Bassil, chief strategy officer, Mirum MEA

Dr Octopus, clad in Spider-Man’s suit, establishes his own brand of justice by deploying spider-bots that snoop on the people of New York with the aim of reducing crime. The Green Goblin, his main nemesis, ends up hacking the bots, taking control of the city and allowing his followers to bypass detection.

Tech agencies are a disruptive, heroic force providing better solutions to human problems: human connections, privacy, freedom and popular movements (content platforms and technology apps); better financial services (fintech), personalisation, prediction and automation (artificial intelligence); low transaction fees, mobile and untaxed payments (blockchain and cryptocurrencies); smart cities and smart systems (national infrastructure agencies); and the internet (search tools).

Yet, could those same solutions be sowing the seeds of threats? Could tech agencies become a menace?

Just as The Superior Spider-Man plants hackable spider-bots across cities, tech agencies and companies spread weaponisable tools that can be repurposed to inflict harm. Terrorists, violent online extremists, drug lords, money launderers and abusers are constantly adapting to new technology, creating a safe haven for illegal activity. Here are six ways tech can be abused:

Online propaganda: Content platforms and technology apps can become efficient communication tools to recruit and engage potential terrorists, spread violent extremist propaganda, disseminate DIY videos on how to cause casualties, and facilitate terrorist attack cycles from target selection through to attack exploitation, all powered by data science and encryption.

Illegal financing: Fintech agencies can be a hub for terrorism financing and money laundering. An online lender facilitated a loan of $28,000 to the terrorists who killed 14 people in San Bernardino in 2015.

Weaponised robots: Artificial intelligence agencies could lose control over the very products they created when a product is reprogrammed to do harm, turning it into an AI weapon (a flying drone or self-driving car carrying guns) or a self-taught robot that decides to rule the world. This recalls Elon Musk’s much-debated warning: “We are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.”

Anonymous transactions: These can be a means to fund illegal activities. Terrorists or drug lords can use blockchain as an end-run around traditional financial payments to avoid traceability. The Al-Khilafah Aridat: The Caliphate Has Returned blog discussed how bitcoins can be used to fund the caliphate.

Hacking: National infrastructure solutions provided by agencies can be incapacitated in ways that bring about catastrophic effects on security, safety and health. In May 2017, a cyber-attack hit 40 NHS trusts in England and Scotland, compromising networks and emergency patients’ safety.

Detection bypass: Search tools give access to the dark web, a network of untraceable websites, supposedly devised by the US Government to allow spies to exchange information anonymously. Many people – including terrorists and internet abusers – use the dark web to mask their identity. Defeating terrorists on the battlefield is no longer enough, with talk of virtual caliphates within which terrorists can self-organise and form movements, much like the hacking collective Anonymous.

The absence of regulatory guidelines is one of the main reasons behind technology abuse. Passing treaties is difficult, owing to disagreements, misalignment on semantics (for instance, the absence of a unified definition of “terrorism”) and resistance from tech freedom advocates.

In the absence of a convincing rule of law, The Superior Spider-Man took the initiative to spread spider-bots across New York to protect the city. Similarly, in the absence of such technology regulations and law enforcement, tech agencies and companies are becoming vigilantes, working their subjective views into their terms of service.

Telegram’s CEO, Pavel Durov, said that the app safeguards “privacy and defends freedom of speech” by refusing to hand over encryption keys. While it hosts 100 million users who enjoy the privacy it offers, Telegram is considered by many to be the app of choice for terror groups.

Following the sale of $150,000 worth of political ads linked to Russian troll accounts during the US presidential elections, Facebook CEO Mark Zuckerberg announced transparency measures requiring political advertisers to disclose their sources of funding. This decision is not mandated by law; it is a deliberate step by Facebook towards self-regulation, taken amid government pressure (for example, Germany’s law imposing fines of up to €50m on social media companies that fail to take down hate speech within 24 hours).

Government pressure, internet freedom advocates and constant threats from technology abusers are putting tech agencies and companies at an uncharted crossroads. Machines will become smarter than humans, making content difficult to control. Technology is power, and tech companies and agencies have a responsibility to remain wiser, by coordinating with government (law enforcement and states) and civil society (human rights advocates, academics and NGOs). These parties should converge and draft clear guidelines on how to combat threats while preserving human rights.

After all, every hero needs a community to thrive, because “with great power comes great responsibility”: a responsibility to remain wiser than machines. There will always be anti-heroes who, in turn, drive the invention of creative antidotes – the basic imperative for ongoing evolution.