AI in Internal Comms: when to use it and when to think twice
- Karen Dempster
- Feb 12
- 3 min read
A powerful tool when used wisely, but a risk when used blindly - here’s how to get the balance right.
AI is transforming the way we work, offering faster content creation, sharper insights, and more time to focus on the human side of communication. But let’s be honest – it is not a silver bullet. Used well, it enhances our work. Used blindly, it can damage trust, clarity, and even inclusion.
The Institute of Internal Communication’s AI Ethics Charter highlights a key truth: AI should support, not replace, human connection.
At @Fit2Communicate, we want to help communications professionals make the best use of AI while avoiding common pitfalls. Here’s a practical guide to knowing when AI can help and when human judgement should take the lead.
When AI can add value
💡 Generating ideas and brainstorming
If you need a fresh perspective, AI can help spark new ideas, structure outlines, or suggest alternative approaches.
📝 Summarising reports or meetings
AI can quickly condense long reports or meeting notes, making it easier to extract key themes and insights.
📢 Drafting simple, factual updates
Routine announcements, policy reminders, and FAQs can be drafted faster with AI – just make sure to check for accuracy.
🌍 Assisting with translation
AI tools can help translate content, but always have a native speaker review it to avoid errors in tone or meaning.
📊 Spotting trends in data
AI can analyse employee feedback or engagement data, helping you identify themes and patterns that may not be immediately obvious.
When AI should not be used
🗣 For messages that require empathy or sensitivity
People want to hear from real people, particularly during times of change or challenge. AI cannot read the room or convey emotion in the same way.
📣 For leadership communication
Messages that define culture and leadership should feel authentic. AI can help refine a message, but it should not write it.
❌ When you cannot verify the source
AI is not infallible. It can generate misleading or inaccurate content. If you are unsure where information came from, do not use it.
⚖️ Where legal, ethical, or reputational risks exist
AI does not understand corporate responsibility. Always apply human oversight before sharing anything sensitive.
🔐 For confidential or internal information
Public AI tools, such as ChatGPT, do not guarantee privacy. Avoid entering non-public, sensitive, or proprietary information. Many organisations now provide secure AI tools that comply with internal policies – always use those where available, and if in doubt, check your organisation’s guidelines.
Five rules for using AI in Internal Communications
1️⃣ Be transparent – If AI has helped create content, make it clear.
2️⃣ Check the facts – AI is not always accurate. Verify all details before publishing.
3️⃣ Apply human judgement – If something does not feel right, change it. AI should not override common sense.
4️⃣ Stay true to your values – AI should never generate content that contradicts your organisation’s culture or ethics.
5️⃣ Use AI securely – Stick to approved, secure AI tools that align with data protection and company policies.
Final thought
Great internal communication is built on trust, clarity, and human connection. AI is a powerful tool, but it is just that – a tool. It can enhance what we do, but it should never replace real conversations and meaningful engagement.
PS: This article was initially written using AI – and then we enhanced it!