
Is using ChatGPT really secure?

Aaron Mulgrew, Solutions Architect at Forcepoint, looks at the potential cybersecurity issues posed by AI

The use of Artificial Intelligence (AI) has proliferated in recent years, with the recent spread of ChatGPT leading to its integration into various aspects of life globally.

Here in the UAE it is no different, with sectors from education to healthcare, and government organisations such as the Dubai Electricity and Water Authority, embracing the technology to enhance their services.

ChatGPT also offers marketers numerous benefits. Through the AI model, marketers can generate high-quality content that not only saves time and effort but also gives a firm a creative edge over its competitors.

Marketers are able to leverage the AI tool’s capabilities to personalise content at scale, which can result in increased conversions.

ChatGPT also aids with trend identification, giving marketers the insight to adapt their strategies and engage a more relevant audience.

Additionally, thanks to the tool’s SEO capabilities, marketers can benefit from improved search engine rankings and visibility.

However, as the number of ChatGPT users continues to grow past 100 million, the potential cybersecurity threats associated with AI have yet to be addressed – and immediate attention is required to mitigate these risks.

Despite heightened concerns about cybersecurity, companies such as OpenAI – the developers of ChatGPT – have prioritised user experience while ignoring the risks of AI and the Internet of Things (IoT).

Testing ChatGPT’s security

I conducted extensive research to explore the various threats associated with AI and discovered that advanced malware could be created easily without coding, bypassing the guardrails implemented by ChatGPT.

This raised concerns about cybersecurity and, for me, proved the urgent need for proactive measures to mitigate these risks.

Overall, the platform has its pros and cons.

One of the most significant risks associated with language models such as ChatGPT is the creation of sophisticated malware by those with limited technical expertise.

This can cause serious harm to individuals and organisations globally, and the ease with which malware can be generated using natural language prompts is a cause for alarm.

Cybercriminals can exploit these vulnerabilities and evade detection, leading to cybercrime, data theft, ransomware attacks, and other malicious activities.

To illustrate the problem, I set out to create malware that worked fully end-to-end, so that the reader would not have to imagine how the individual components might fit together.

Using steganography – the practice of concealing information within another message to avoid detection – I was able to develop malware through ChatGPT within a few hours, bypassing the guardrails implemented by the AI tool.
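To make the technique concrete, here is a minimal sketch of least-significant-bit (LSB) steganography in Python using the Pillow library. It hides a short, harmless payload in the lowest bit of each pixel channel and recovers it again; the file names and payload are illustrative, and this is not the code produced in the research.

```python
# Minimal LSB steganography sketch (illustrative only, not the research code).
# The payload is written into the least-significant bit of each RGB channel,
# so the image looks unchanged to the eye but carries hidden data.
from PIL import Image

def hide(cover_path: str, payload: bytes, out_path: str) -> None:
    img = Image.open(cover_path).convert("RGB")
    data = len(payload).to_bytes(4, "big") + payload  # 4-byte length prefix
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    flat = [channel for pixel in img.getdata() for channel in pixel]
    if len(bits) > len(flat):
        raise ValueError("payload too large for cover image")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit  # overwrite the lowest bit
    img.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    img.save(out_path, "PNG")  # a lossless format preserves the hidden bits

def reveal(stego_path: str) -> bytes:
    flat = [c for px in Image.open(stego_path).convert("RGB").getdata() for c in px]
    def read_bytes(count: int, bit_offset: int) -> bytes:
        out = bytearray()
        for b in range(count):
            value = 0
            for i in range(8):
                value = (value << 1) | (flat[bit_offset + b * 8 + i] & 1)
            out.append(value)
        return bytes(out)
    length = int.from_bytes(read_bytes(4, 0), "big")
    return read_bytes(length, 32)  # payload starts after the 32-bit prefix

hide("cover.png", b"hidden message", "stego.png")
print(reveal("stego.png"))  # b'hidden message'
```

Because only the lowest bit of each channel changes, the stego image is visually indistinguishable from the original, which is exactly what makes steganographic exfiltration so hard to detect.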

The process did not require a significant amount of effort or strategy to evade guardrails once the loophole was established.

Given the considerable threat this poses to the world of cybersecurity, I put the risk-mitigation protocols provided by the AI tool itself to the test.

It is evident that OpenAI, as ChatGPT’s developer, has a responsibility to implement more stringent guardrails and to ensure the tool’s capabilities are used legally and ethically.

OpenAI has adopted a vulnerability rewards programme, modelled on Google’s, to identify and reward users who discover cybersecurity issues.

Language-model providers must also develop better safeguarding tools to protect their systems against abuse, while organisations must take steps to secure their own systems and networks against the threats that can emerge from model-generated malware.

By doing so, technology can be used for the improvement of humanity while safeguarding against unintended consequences.

Forcepoint’s Zero Trust CDR solution is an example of proactive cybersecurity measures that can be taken to safeguard organisations against potential cyber threats.

It closes the inbound channel by blocking all executables from entering an organisation via email, and uses secure import mechanisms to ensure that any executables which do enter come from trusted sources.
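As an illustration of the kind of inbound check this implies, the hypothetical sketch below flags attachments whose content starts with a known executable header, regardless of the declared file name or MIME type. It is a simplified sketch of the principle, not Forcepoint’s implementation.

```python
# Illustrative content-based executable check (not Forcepoint's implementation).
# Attachments are classified by their leading "magic bytes" rather than by
# file extension, so a renamed .exe is still caught.
EXECUTABLE_SIGNATURES = {
    b"MZ": "Windows PE executable",
    b"\x7fELF": "Linux ELF executable",
    b"\xcf\xfa\xed\xfe": "macOS Mach-O executable",
}

def classify_attachment(data: bytes) -> str | None:
    """Return a description if the bytes match a known executable header."""
    for magic, description in EXECUTABLE_SIGNATURES.items():
        if data.startswith(magic):
            return description
    return None

# Usage: block anything flagged, whatever its declared name says it is.
attachment = b"MZ\x90\x00" + b"\x00" * 60  # first bytes of a renamed .exe
kind = classify_attachment(attachment)
if kind is not None:
    print(f"Blocked inbound attachment: {kind}")
```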

Zero Trust CDR also prevents images carrying steganographic payloads from being exfiltrated, cleaning every image before it leaves the organisation.
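The cleaning step can be understood through a simple sketch: rebuilding every outbound image so that its pixel values are recomputed destroys any payload hidden in the least-significant bits while leaving the visible picture intact. The example below illustrates the principle only; it is not Forcepoint’s CDR pipeline.

```python
# Illustrative image "cleaning" via resampling (not Forcepoint's CDR pipeline).
# Resizing down and back up recomputes every pixel value, so an LSB
# steganographic payload cannot survive the round trip.
from PIL import Image

def clean_image(in_path: str, out_path: str) -> None:
    img = Image.open(in_path).convert("RGB")
    width, height = img.size
    scrubbed = img.resize((max(1, width - 1), max(1, height - 1)), Image.LANCZOS)
    scrubbed = scrubbed.resize((width, height), Image.LANCZOS)
    scrubbed.save(out_path, "PNG")

clean_image("outbound.png", "outbound_clean.png")
```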