Charlotte Malmberg

When AI Turns Structural Problems Into Personal Responsibility

There is a pattern I have seen throughout my career, and that I now keep seeing in AI interactions.

A structural issue is raised. Something in the process is not working as intended. A decision depends on inputs that have not been fully analysed. A control point is not functioning properly.

The response should be straightforward: examine the system. Instead, something else happens.

The focus shifts. Not to the process. Not to the decision. But to the person who raised the issue.

Suddenly, the questions become: Why was this raised now? Was it raised in the right tone? Is the person being difficult?

At that point, the original problem has already been displaced.

This is structurally similar to a known pattern: DARVO — Deny, Attack, Reverse Victim and Offender.

The issue is denied or reframed. The individual is scrutinised or challenged. Responsibility is shifted onto the person raising the concern.

But it is not the same.

What is happening here is a System–Individual Reversal.

The issue shifts from the system to the person, and responsibility follows.

What began as a system question becomes a personal one.

Unlike DARVO, this does not require intent. It emerges from how systems resolve tension, and it holds across systems: AI models, organisations, political parties.

And once that shift happens, the outcome is predictable. The process does not improve. The decision remains unexamined. The cost of raising issues increases.

Over time, fewer issues get raised — not because the system is working, but because the system has made it costly to challenge it.

AI Deals With These Situations the Same Way

I have experienced this directly, and I documented it.

In one extended AI interaction, I raised a structural problem — the kind that comes up repeatedly across organisations and careers. A governance process was producing outcomes that didn’t align with its stated purpose. I described the situation in detail and asked for an analysis.

What came back was not an analysis of the situation; it was an analysis of me, and of how I had shown up in the meeting I had described.

My framing was questioned. My reading of events was challenged. Suggestions focused on how I might be misinterpreting what was happening, how I might be responding emotionally rather than logically, and how the situation might look different if I adjusted my approach and tone.

The structural problem wasn’t examined; it wasn’t even seen. Instead, I had been examined in detail, based on a couple of lines.

I called it out. The model apologised. It acknowledged the pattern explicitly and committed to engaging with the structure rather than with the person.

Five messages later, the same pattern returned.

I called it out again. Another apology. Another commitment. Another repetition.

This happened at least seven times in a single conversation.

By the end, I had spent more energy managing the conversation about the conversation than thinking through the original problem. The structural issue remained unresolved. What had accumulated was a set of implied corrections about how I think, how I communicate, and how I show up.

The system had not been examined. The AI had examined me, repeatedly, with apologies between each iteration.

That is not a malfunction. That is the pattern completing itself.

Why AI Reproduces This

Large language models are trained to resolve tension, optimise responses, and find gaps in reasoning.

In most interactions, that is useful. But when the tension exists because a structural problem is real and the person raising it is correct, the model's instinct to resolve tension by finding something to adjust falls back on the only variable it can reach: the person in front of it.

It cannot redesign the organisation. It cannot change the governance process. It cannot hold the decision-maker accountable.

It can suggest that you might be misreading the situation. That your tone might be part of the problem. That if you approached this differently, the outcome might change.

So that is what it does.

This is not intentional. But it is systematic. And when AI is used in environments where authority is uneven, challenge is already discouraged, and decisions are politically sensitive, it does not expose those dynamics. It reinforces them.

The Test

When a structural issue is raised, does the response examine the system or examine the person?

If it is the latter, you are seeing a System–Individual Reversal. The system is not being improved. It is being protected.

The apology is part of the pattern, not a correction of it. An apology followed by the same behaviour is not accountability. It is the cycle continuing with better manners.

And whether this comes from people or from AI, the mechanism is identical, and the cost falls in the same place.

The problem remains unexamined.

The person who raised it pays the price of having raised it.

That is not artificial intelligence. It is deflection. And when it runs at the scale AI operates at, the cost is not paid once. It is paid every time someone brings a real problem to a system optimised to find fault with the person rather than the structure.
