Why it should change and what a 2026 business-writing course might look like
Michael Walker and AI · Mar 7 · 11 min read
The case for moving from writing skills to orchestration skills.

If you look at your training budget for 2026, there is a reasonable chance you still have a line item for Business Writing Skills. In most organisations, that workshop covers the basics: how to structure an email, how to use the active voice, how to avoid corporate jargon.
Here is a possibility worth sitting with: that workshop may be obsolete, not because writing no longer matters, but because the bottleneck has moved.
By 2026, most knowledge workers have access to tools that can produce grammatically correct, well-structured text in seconds. The technical barrier to generating a decent first draft has dropped considerably. What seems not to have dropped is the cost of badly directed text: communications that miss the point, reports that include fabricated data, client-facing documents that strike the wrong tone.
It is worth noting that a 2024 survey by the Chartered Institute of Personnel and Development (CIPD) found that communication skills remained the single most cited capability gap across UK organisations, even as AI adoption accelerated. Make of that what you will.
The question this post is really asking is whether the problem has shifted: from people not being able to write, to people not knowing how to direct a machine that can.
From Author to Managing Editor: A Useful Way to Think About This
One way to think about this shift is to borrow a metaphor from publishing. The author sits alone and produces every word from scratch. The Managing Editor does something more complex and, in many ways, more valuable: they commission work from others, set the brief, fact-check the output, calibrate the tone for the audience, and make the final call on what goes out under the publication's name.
It is at least plausible that in the 2026 workplace, that is the role more and more knowledge workers are being nudged towards. The AI drafts. The employee edits, directs, and takes responsibility.
If that framing holds, then the Managing Editor role would require four distinct skills that look quite different from the ability to write well:
• Commissioning: defining what needs to be created, for whom, and to what standard, before a single word is generated.
• Fact-checking: actively verifying the output against primary sources, not simply reading it and assuming it is correct.
• Tonal calibration: reviewing whether the language is right for the specific audience, context, and relationship – something AI cannot reliably judge on its own.
• Audience judgement: understanding what a board, a client, a frontline team, or a regulator actually needs to hear, and encoding that understanding into the brief.
None of these skills are primarily about sentence construction. They are about strategic intent, critical thinking, and human judgement. They are also, arguably, much harder to automate than drafting – which is perhaps why they matter more now, not less.
The four Managing Editor skills in practice
The table below shows what each of these skills looked like before widespread AI adoption, and what they look like now.
| Skill | What it looked like before AI | What it looks like now |
| --- | --- | --- |
| Commissioning | Writing the first draft yourself, or briefing a copywriter with a rough outline. | Writing a precise brief for an AI agent: defining scope, tone, audience, and constraints before a single word is generated. |
| Fact-checking | Checking sources after reading a draft written by a colleague. | Actively hunting for hallucinations: fabricated statistics, invented quotes, false consensus, and anachronistic references baked into AI-generated text. |
| Tonal calibration | Adjusting your own writing voice for different audiences, usually instinctively. | Specifying tonal parameters explicitly in a prompt, then reviewing the output for cultural blind spots, over-formality, or unintended implications the AI cannot detect on its own. |
| Audience judgement | Instinctively knowing what a board, a client, or a frontline team needs to hear. | Translating that knowledge into explicit constraints – seniority, context, prior knowledge, sensitivities – so the AI can serve the right audience rather than a generic one. |
What a 2026 Business-Writing Course Might Look Like
If training is going to shift in this direction, then the classroom probably needs to shift with it. A communication workshop built around orchestration skills looks less like a lecture and more like a flight simulator: the goal is not to teach theory, but to build reflexes under realistic conditions.
Here is one version of what that might look like. Imagine a group of Project Managers in a workshop session. Instead of practising subject lines, they are given a live scenario:
• The input: A messy 45-minute transcript of a chaotic client brainstorming session, plus a spreadsheet of raw budget figures.
• The task: Using the company's AI agent, produce a formal Project Charter. They are not typing the sentences. They are feeding the agent specific, structured instructions.
• The twist: The workshop leader introduces a deliberate error. The AI has hallucinated a project deadline that does not appear anywhere in the transcript. It sounds plausible. The date is consistent with the project timeline. The participants are graded not on how quickly they generated the document, but on whether they caught the error.
This is one model for what 2026 communication training could look like. The skill being tested is not writing. It is critical discernment: the ability to interrogate an AI output against the source material and notice what does not add up.
What makes this genuinely difficult is not that the error is obvious. In most cases it is not. AI-generated text tends to be fluent, confident, and structurally sound. The hallucinated deadline in the example above would likely pass a casual read every time. Catching it requires active verification, not passive reading – and that is a habit most people have not yet been trained to develop.
The KERNEL Framework: A Practical System for Orchestration
If the shift from drafting to orchestrating is real, then people need a repeatable method to make it work in practice – not just a conceptual framework but something they can actually use under time pressure. The KERNEL framework is useful for this: six principles for producing more reliable, purposeful AI-assisted communication.
KERNEL is not a checklist to run through every time you open a chat window. The intention is that it becomes a set of habits that, once embedded, improve the quality of most prompts without requiring much conscious effort.
| Letter & Principle | What it means | Vague prompt | Orchestrated prompt (KERNEL) |
| --- | --- | --- | --- |
| K – Keep it simple | Give the AI one clear goal. A direct command like 'Write a technical tutorial on Redis caching' will outperform a 500-word scene-setting preamble almost every time. | Write something about our new onboarding process for new starters. | [Context]: Our current onboarding takes 6 weeks and covers IT setup, compliance, and role-specific training. [Task]: Write an overview of the new 4-week onboarding plan. [Constraints]: Max 200 words. One goal per paragraph. [Format]: Plain prose for the intranet homepage. |
| E – Easy to verify | Every prompt needs a measurable success criterion. 'Make it engaging' is unverifiable. '3 specific client case studies from the financial sector' is not. If you can't check whether the AI has done the job, you can't rely on the output. | Write a blog post about our sustainability work. Make it feel authentic. | [Context]: Our 2024 Sustainability Report (attached). [Task]: Write a 400-word blog post highlighting 3 specific, measurable achievements from the report. [Constraints]: Each achievement must include a number or percentage. No vague claims. [Format]: Subheadings for each achievement, conclusion paragraph. |
| R – Reproducible results | Avoid vague temporal language like 'current best practices' or 'latest thinking'. These shift every time you run the prompt. Use specific versions, named frameworks, or exact references to get consistent, reliable outputs. | Update our data security policy to reflect current regulations. | [Context]: Our existing Data Security Policy v2.1 (attached). [Task]: Revise section 4 to align with GDPR Article 32 (as of 2024). [Constraints]: Do not change sections 1-3. Preserve original heading structure. [Format]: Tracked changes style, with a summary table of amendments. |
| N – Narrow scope | One prompt, one goal. If you need a project proposal that includes a budget breakdown, a risk register, and a stakeholder comms plan, run three separate prompts and review each output before moving to the next. | Write a full project proposal for the new CRM rollout including budget, risks, and a comms plan for all stakeholders. | Run as three sequential prompts: 1. [Task]: Draft the executive summary and objectives section only. 2. [Task]: Build a risk register based on the objectives output. 3. [Task]: Draft a stakeholder comms plan based on the risk register output. |
| E – Explicit constraints | Tell the AI what not to do. Negative constraints are some of the most powerful tools in a prompt. They cut the noise before it appears, rather than asking you to edit it out afterwards. | Write a response to the client complaint about the delayed delivery. | [Context]: Client complaint email (attached). Delivery was 8 days late due to a supplier issue. [Task]: Draft a professional response that acknowledges the issue and offers a resolution. [Constraints]: Do not admit legal liability. Do not offer a refund unless explicitly instructed. No phrases like 'I understand your frustration'. Max 150 words. [Format]: Formal business email. |
| L – Logical structure | Every prompt should follow a four-part sequence: Context (what the AI needs to know), Task (what you want it to do), Constraints (what limits apply), and Format (what the output should look like). This structure alone will improve your results immediately. | Can you write me a quarterly update for the board about the marketing team's performance? | [Context]: Q3 Marketing Report (attached). Revenue target was R2.4m; actual was R2.1m. [Task]: Write a board-level quarterly performance summary. [Constraints]: Acknowledge the shortfall factually. Do not speculate on causes beyond the data. No jargon. Max 300 words. [Format]: Three sections – Highlights, Challenges, Next Steps. Suitable for a board pack. |
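For teams that build internal prompting tools or templates, the four-part Context/Task/Constraints/Format sequence can even be enforced in code, so that no section is ever skipped. The sketch below is a hypothetical illustration in Python: the function name and fields are invented for this example, not part of any real AI product's API.

```python
# Hypothetical helper: assemble a KERNEL-style prompt from its four parts.
# The function and field names are illustrative assumptions, not a real API.

def build_kernel_prompt(context, task, constraints, fmt):
    """Assemble the four KERNEL sections in a fixed, predictable order."""
    sections = [
        ("[Context]", context),
        ("[Task]", task),
        ("[Constraints]", "; ".join(constraints)),  # one constraint per item
        ("[Format]", fmt),
    ]
    return "\n".join(f"{label}: {text}" for label, text in sections)


prompt = build_kernel_prompt(
    context="Q3 Marketing Report (attached). Revenue target was R2.4m; actual was R2.1m.",
    task="Write a board-level quarterly performance summary.",
    constraints=["Acknowledge the shortfall factually", "No jargon", "Max 300 words"],
    fmt="Three sections: Highlights, Challenges, Next Steps.",
)
print(prompt)
```

The point of a helper like this is not automation for its own sake; it is that a template with mandatory fields makes the commissioning step impossible to skip, even under time pressure.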
The KERNEL framework in action: a before and after
The table below shows what the difference looks like in practice for a common workplace task.
| The old way (vague prompting) | The KERNEL way (orchestration) |
| --- | --- |
| "Write a summary of the Q3 project update for the team and make it sound professional." | [Context]: Based on the attached Q3_Project_Alpha_Log PDF. [Task]: Summarise 3 key milestones achieved. [Constraints]: Max 150 words. Do not mention the budget overrun yet. No corporate jargon. [Format]: Bullet points for Slack. |
| Result: A generic, wordy paragraph that likely misses the key data points. | Result: A precision-engineered update that is instantly ready for the team channel, with the right information and nothing that should not be there. |
The Hallucination Problem: Why Passive Trust in AI Output Is Probably Unwise
One of the more under-appreciated risks with AI-assisted communication may not be that employees produce poor writing. It may be that they publish confident, well-formatted documents containing errors they never spotted.
AI models do not know when they are wrong. They generate plausible-sounding text based on patterns. When they lack specific information, they tend to fill the gap with something that fits the context rather than flagging the uncertainty. The result is a category of errors that are particularly tricky because they are hard to spot on a casual read.
The five most common patterns seen in AI-generated business documents are set out below, along with some practical suggestions for catching them.
| Error type | What to look for | How to catch it |
| --- | --- | --- |
| Fabricated statistics | Numbers that sound plausible but have no traceable source. AI is particularly prone to generating percentages and survey figures. | Ask the AI to cite the source for every statistic. If it cannot, remove the figure. |
| Invented quotes | Attributed quotes that were never said, or paraphrases presented as direct speech. | Never publish a named quote from an AI draft without verifying it against the original source. |
| False consensus language | Phrases like 'experts agree' or 'research consistently shows' with no specifics. These create an illusion of authority where none exists. | Flag any claim that uses collective language without naming the specific study, person, or organisation. |
| Anachronistic references | References to policies, regulations, or events that are out of date, particularly in fast-moving fields like employment law or data privacy. | Always cross-reference regulatory or legal references against a current official source. |
| Cultural blind spots | Language that is appropriate in one cultural context but carries different connotations in another, particularly in global communications. | Have someone from the target audience review any externally-facing document before it is sent. |
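None of these checks can be fully automated, but a crude screening pass can at least surface sentences that deserve a closer look before a human verifies them. The Python sketch below is a toy illustration (the patterns and function are invented for this example): it flags sentences that contain a percentage or collective-authority phrasing but no obvious source reference.

```python
import re

# Toy screening pass for two of the error patterns above: fabricated
# statistics and false-consensus language. Illustrative only; a regex
# cannot replace checking claims against primary sources.

CONSENSUS = re.compile(
    r"\b(?:experts agree|research (?:consistently )?shows|studies show)\b", re.I
)
STATISTIC = re.compile(r"\d+(?:\.\d+)?%")          # e.g. "37%" or "2.5%"
SOURCE_HINT = re.compile(r"(?:according to|source:|\(\d{4}\))", re.I)


def flag_sentences(text):
    """Return sentences containing a statistic or consensus phrase but no source hint."""
    flags = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if (STATISTIC.search(sentence) or CONSENSUS.search(sentence)) \
                and not SOURCE_HINT.search(sentence):
            flags.append(sentence.strip())
    return flags


draft = ("Experts agree that onboarding time fell by 37% last year. "
         "According to the 2024 CIPD survey, communication gaps persist.")
for s in flag_sentences(draft):
    print("CHECK:", s)
```

A pass like this is deliberately noisy: its job is to direct human attention, in the Managing Editor spirit, not to certify a document as clean.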
Building the habit of checking for these errors is probably not a one-off training exercise. It is more likely to stick when people practise it repeatedly with real documents in realistic conditions. The workshop format described above – where participants actively hunt for a planted error – tends to be more effective than theory alone, though like any training approach, results will vary.
Who Probably Needs This – and Who Probably Does Not
It is probably a mistake to assume this applies equally to everyone. The Managing Editor skills described above are likely most pressing for knowledge workers who regularly produce communications that carry real organisational weight: managers, analysts, HR professionals, client-facing teams, and anyone who signs off documents that go outside the organisation.
For those roles, the case for investment seems reasonably clear. The risk of an AI-generated client proposal containing a fabricated statistic, or a board paper attributing a quote that was never said, is not purely hypothetical. There are already enough documented cases of AI-generated errors reaching external audiences to suggest this is worth taking seriously.
For frontline roles with less document-heavy responsibilities, the training need is probably narrower. A warehouse operative or a retail colleague is unlikely to need the KERNEL framework. They may, however, benefit from some basic orientation on what AI-generated content looks like and why passive trust in it is not always warranted.
The practical implication – if this analysis holds – is that organisations would do well to differentiate their training investment by role and risk level, rather than applying a single programme uniformly.
The Change Management Reality
It is worth being honest about the friction this kind of shift involves. Getting people to change communication habits is already one of the harder things training teams do. Adding a new mental model on top of that is not straightforward.
The resistance pattern we tend to see most often is not hostility to AI. Most people are broadly comfortable using AI tools for routine tasks. The friction tends to emerge around the additional cognitive effort that structured prompting requires. Writing a KERNEL-formatted prompt takes longer than typing a quick question into a chat window, at least at first, and people will notice that.
One approach that seems to help is habit stacking: connecting the new behaviour to something that already exists. If your organisation already uses project briefs, meeting agendas, or document templates, KERNEL can be introduced as an extension of those existing habits rather than something entirely new. The four-part structure (Context, Task, Constraints, Format) maps reasonably naturally onto a project brief, for instance.
It also helps to make the quality difference visible early. In workshop settings, when people see a vague prompt output and a KERNEL-structured output side by side, the case for the extra effort tends to become fairly self-evident. The before and after table earlier in this article is intended to create that same moment on the page.
A Note on Governance
Any organisation deploying AI tools for communication would probably benefit from having, or developing, some clarity on where human judgement is required and where AI assistance is appropriate. This is not purely a quality control question. It touches on legal liability, data privacy, and reputational exposure.
A workable AI usage policy would likely need to address, at a minimum: which document types require human review before publication, which data sources can and cannot be fed into an AI prompt, and what disclosure obligations apply to externally-facing AI-assisted content. None of these are simple questions, but leaving them unaddressed is probably riskier than engaging with them imperfectly.
Communication training and governance policy probably work better when developed in parallel rather than sequentially. Teaching orchestration skills without a governance framework is a little like training drivers without road rules. The training equips people to do things faster; the policy shapes what they do with that speed.
What This Might Mean for Your Training Budget
Whether the shift described in this article is already underway in your organisation or still some way off will depend on many factors: the pace of AI adoption in your sector, the nature of the roles involved, and how seriously leadership is taking the governance questions.
But if the direction of travel is broadly right – if the bottleneck really is shifting from producing words to directing machines that produce words – then the implication for training is fairly significant. Investment in the Managing Editor skills (commissioning, fact-checking, tonal calibration, audience judgement) may well deliver more value than continued investment in the craft of writing itself. A practical framework for structured prompting could become a core competency rather than a nice-to-have. And a culture of critical discernment – people who read AI outputs as editors rather than passive consumers – may start to look like a genuine organisational asset.
None of that is certain. But it seems worth thinking about before the 2026 training budget is finalised.