
The real ChatGPT crisis: less about the ‘doing’, more about the ‘thinking’

Amel Osman explains why instead of wasting time on AI restrictions, companies must train their teams to think critically about content strategy, audience engagement, and cultural and contextual alignment.

By Amel Osman, Communications and Reputation Advisor

Let’s talk about the AI paranoia running rampant in boardrooms right now. The latest corporate trend? Restricting ChatGPT. Locking it down. Handing out stern warnings about its use. Apparently, the best way to deal with technological advancement is to clamp down on it entirely. If that sounds like an overreaction, that’s because it is.

ChatGPT is not the villain here. The real problem? Leaders and teams who don’t know how to think about AI, only how to fear it.

In an industry where teams are constantly drowning in deadlines, the real crisis isn’t AI; it’s the lack of time to think. ChatGPT isn’t replacing creativity; it’s giving teams back the space to focus on strategy, context, and insight: the very things that make content truly effective.

The obsession with how teams are using AI has completely overshadowed the much bigger issue: how teams are thinking about content, audiences, and the strategic, cultural, and contextual relevance of what they create.

Because let’s be real: bad content was being churned out long before ChatGPT entered the scene. The difference now is that companies have an opportunity to make their teams smarter, faster, and sharper, if only they’d stop shutting down the very tool that could help them get there.

The real AI playbook: context first, content second

A well-trained team doesn’t need to lock AI out; they need to master it. And that means structuring its use around three core principles:

  1. Stakeholder relevance: Before generating any content, teams need to ask: Who are we trying to reach? What are their biggest challenges? What’s keeping them up at night? Content only works if it directly addresses a stakeholder’s pain points and positions the brand as a problem-solver. If your AI-driven content isn’t doing this, it’s not an AI issue; it’s a thinking issue.
  2. Market context, cultural awareness and national alignment: Content can’t exist in a vacuum. Teams need to factor in market conditions, economic trends, and cultural sensitivities. Especially in regions where reputation is closely tied to national priorities, messaging that ignores cultural and contextual relevance risks missing the mark entirely. Smart AI use means incorporating these elements into ChatGPT briefings, ensuring that output reflects not just the business landscape, but also the cultural dynamics that shape it.
  3. AI as a collaborator, not a shortcut: The worst AI mistake teams make? Treating ChatGPT like a vending machine: plugging in a prompt and expecting a masterpiece in return. AI should be briefed the way you’d brief a human writer, with strategic, cultural, and contextual guidance, guardrails, and a clear sense of direction.
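
To make the idea concrete, here is a minimal sketch of what a structured AI briefing might look like in practice. All names and example values are hypothetical, not a prescribed template: the point is that the prompt carries audience, context, and cultural framing before it ever states the task.

```python
# Hypothetical sketch: assembling a ChatGPT briefing the way you'd brief
# a human writer -- audience first, context second, the task itself last.

def build_brief(stakeholder, pain_points, market_context, cultural_notes, task):
    """Return a structured briefing string for an AI writing assistant."""
    return "\n".join([
        f"Audience: {stakeholder}",
        f"Their pain points: {'; '.join(pain_points)}",
        f"Market context: {market_context}",
        f"Cultural and contextual notes: {cultural_notes}",
        f"Task: {task}",
    ])

brief = build_brief(
    stakeholder="regional banking CFOs",
    pain_points=["regulatory change", "margin pressure"],
    market_context="tightening credit conditions",
    cultural_notes="align messaging with national economic priorities",
    task="Draft a 600-word piece positioning the brand as a problem-solver.",
)
print(brief)
```

A one-line prompt leaves all of this context implicit; a brief like the above forces the team to do the stakeholder and context thinking before the tool does any writing.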

What ChatGPT should NOT be doing

Since PR and communications leaders love a blacklist, here’s a more useful one: the three things you should be telling ChatGPT not to do.

  1. Don’t over-simplify or generalise: AI-generated content often defaults to generic, globally neutral messaging. In markets where cultural nuance is everything, teams need to instruct AI to maintain local relevance, avoiding broad strokes that dilute the message.
  2. Don’t prioritise trends over strategy: ChatGPT loves throwing in trending buzzwords, but what’s popular isn’t always what’s relevant. Teams should specify not to chase trends blindly but instead focus on positioning their brand within industry and policy shifts that matter.
  3. Don’t ignore cultural and regulatory boundaries: AI isn’t instinctively aware of local sensitivities. Teams must set clear parameters to ensure content respects cultural norms and adheres to industry regulations, especially in highly regulated sectors like finance, healthcare, and government affairs.
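
The three “don’t”s above can be baked into every briefing rather than remembered ad hoc. The sketch below is purely illustrative (the guardrail wording and function name are assumptions, not a standard): it appends the negative instructions as explicit constraints so no prompt goes out without them.

```python
# Hypothetical sketch: the three "don't"s expressed as standing guardrails
# appended to every AI briefing.

GUARDRAILS = [
    "Do not over-simplify or default to globally neutral messaging; keep local relevance.",
    "Do not chase trending buzzwords; anchor the piece in industry and policy shifts that matter.",
    "Do not ignore cultural norms or sector regulations (finance, healthcare, government affairs).",
]

def add_guardrails(brief):
    """Append the standing constraints to a briefing string."""
    return brief + "\nConstraints:\n" + "\n".join(f"- {g}" for g in GUARDRAILS)

print(add_guardrails("Task: draft an op-ed on regional banking."))
```

Centralising the constraints this way means the “blacklist” travels with the brief, instead of depending on each team member to restate it.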

The bottom line: restricting AI won’t help – teaching teams to use it wisely will

Leaders curbing ChatGPT’s use are missing the point entirely. The problem isn’t the tool; it’s the thinking behind its use. Instead of wasting time on AI restrictions, companies should be investing in frameworks that train their teams to think critically about content strategy, audience engagement, and cultural and contextual alignment.

The real ChatGPT risk isn’t what it’s writing. It’s the fact that all these restrictions are distracting from the much bigger issue: we are too focused on how teams are doing the work, instead of how they are thinking about it.

And that’s exactly why companies should be focusing on equipping their teams with the right skills. They need to work smarter, save time, and finally shift their focus from relentless execution to meaningful strategy, creative thinking, and the industry’s often-overlooked value: building relationships and seizing new opportunities.
