Charlotte Malmberg

Frameworks for simplicity beyond complex systems.

Writing With AI When Your Idea Is Original

I’ve been writing a personal finance book based on my own method: ClearFlow.

The system I created deliberately does not track transactions. That isn’t an omission. It’s the core design choice.

Most personal finance systems are ledger-based — track every transaction, categorize spending, reconcile monthly, and analyze what already happened. ClearFlow works differently. It is built on forward constraints: spending boundaries, daily limits, and save-to-spend buckets. Prospective, not retrospective.

That distinction isn’t a feature. It is the architecture.

When I tried to have AI help draft sections of the book, transaction tracking kept appearing in the text. Not once, not occasionally — repeatedly. Even after I removed it. Even after I clarified the structure in detail.

I assumed the model was misunderstanding me.

It wasn’t.

It was doing exactly what it is built to do. If most personal finance systems in its training data include transaction logs, then “personal finance system” and “track transactions” are strongly associated. Open a drafting space — or even ask for feedback — and the model drifts toward that dominant pattern.

Not wrong. Typical.

That was the moment something clicked.

AI generation pulls toward what is common. If you are building something deliberately different, that pull becomes visible very quickly.

I tried correcting it through prompts.

“Do not include transaction tracking.”
“This system does not rely on logs.”

It would hold for a section or two. Then, as we moved further through the book, the familiar pattern returned.

That’s when I realized I was working at the wrong level.

Prompting is conversation. You are trying to steer behaviour with words. But the system’s underlying objective hasn’t changed. It is still optimized toward what is statistically normal. Each time I removed the drift, I was correcting entropy rather than preventing it.

The problem wasn’t output quality. It was task design.

The shift came when I stopped asking the model to draft freely and created a Claude Skill that encoded the structural rules of ClearFlow.

Not stylistic guidance — structural constraints.

What the system includes.
What it excludes.
How decisions are framed.
What must never appear.
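
To make that concrete, here is a minimal sketch of what such a skill file could contain. The layout follows the general SKILL.md convention (YAML frontmatter plus markdown instructions); every rule shown is illustrative wording of mine, not the author's actual skill.

```markdown
---
name: clearflow-book
description: Enforce ClearFlow's structural rules when drafting or reviewing book sections.
---

# Structural constraints (illustrative)

## The system includes
- Forward constraints: spending boundaries, daily limits, save-to-spend buckets.

## The system excludes
- Transaction tracking, transaction logs, monthly reconciliation, retrospective
  spending categorization. Never suggest adding these.

## How decisions are framed
- Prospectively: "what may be spent", never "what was spent".
```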

Once those boundaries were explicit, the behaviour changed. Suggestions to add transaction tracking stopped appearing.

More importantly, the model became useful in a different way. I could use it to check consistency across chapters, identify terminology drift, test whether examples aligned with stated principles across more than 30,000 words, and surface contradictions I had missed.
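A terminology-drift check like this can also be approximated programmatically. A minimal sketch, my own illustration rather than the author's tooling, with a deliberately short list of excluded terms:

```python
import re

# Terms the system's architecture excludes; this list is illustrative.
FORBIDDEN = ["transaction log", "track transactions", "reconcile", "categorize spending"]

def find_drift(text: str, forbidden=FORBIDDEN) -> list[tuple[str, int]]:
    """Return (term, count) pairs for every excluded term found in a draft."""
    hits = []
    for term in forbidden:
        count = len(re.findall(re.escape(term), text, flags=re.IGNORECASE))
        if count:
            hits.append((term, count))
    return hits

draft = "Each month, reconcile your accounts and categorize spending by type."
print(find_drift(draft))  # → [('reconcile', 1), ('categorize spending', 1)]
```

A simple scan like this catches literal wording; checking whether examples align with stated principles still needs the model reading against explicit boundaries.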

It stopped acting like a co-author and started acting like a validator.

The ideas remained mine. The architecture remained intact. The model enforced consistency against the structure I had defined.

That experience changed how I think about AI.

When something keeps reappearing in the output, the instinct is to improve the prompt. In my case, that wasn’t enough. The issue wasn’t phrasing. It was boundaries.

Once those existed, I stopped fighting the system. The drift reduced. The work became cleaner. The AI could finally do what it is genuinely good at: systematic comparison and structural checking at scale.

Prompting persuades. Boundaries constrain.

When you are building something deliberately different, constraint isn’t restrictive. It is what allows the difference to survive.

That experience also made something else obvious.

If I saw drift this quickly while working on a small, well-defined system, the same dynamic will exist anywhere AI is drafting inside an organization.

Most operating models, risk frameworks, policies, and architecture documents follow established patterns. Those patterns dominate the training data. If AI is used to generate inside those domains without explicit structural constraints, it will tend to reinforce what is already common.

That may not be a problem when you are formalizing standard practice.

It becomes a problem when you are deliberately building something different.

In those cases, the absence of boundaries doesn’t just create noise. It slowly reshapes the system back toward the norm.

I learned that the hard way while writing a book.
