Andre Jahn, Jahn Consulting — February 2026


The Multi-Agent Misunderstanding

When the tech industry talks about multi-agent systems, it usually means this: an orchestrator agent receives a task, breaks it down into subtasks, and delegates them to specialised sub-agents. Agent A researches, Agent B writes code, Agent C reviews the result. The human gives the assignment, the machine delivers.

That sounds elegant. It has little to do with reality in organisations.

In a real project team there are no fully automatable chains. There are discussions, trade-offs, incomplete information, and decisions under uncertainty. There is the moment when someone says: "This works technically, but the customer will hate it." Or: "We could build it this way, but we have no idea whether the assumption holds."

These are not tasks you delegate to an agent pipeline. These are situations where a team thinks together.

The question is not: How can agents be orchestrated autonomously? The question is: How do AI agents become team members embedded in human decision-making processes?


Agents as Team Members

For several months I have been working with an AI agent that sits in my project chat as a team member. Not a chatbot that I occasionally ask a question. An agent with a defined role: Senior Developer. With access to the ticketing system, internal documentation, and project planning. With a clear working instruction that specifies how it behaves, what it is permitted to do, and where its boundaries lie.

That sounds like a nice experiment. In practice, it changes the way project work functions.

An example: we needed to establish which tool we would use for what — ticketing system, documentation, project planning, CI/CD. In a classic setup, I would have decided that alone, perhaps written a document, perhaps not. Instead, I put my proposal in the chat. The agent asked a relevant question: how is communication with the customer handled — does the customer have access to the project planning? I had not considered that. We refined the workflow together. In the end, the agent filed the decision as an internal document in the knowledge base — including its origin: "Based on internal team alignment."

The actual work was a five-minute conversation. The document is just the materialisation of a decision that had already been made. And exactly this part — the tedious, time-consuming, often-skipped part — is what the agent takes over.

But it is about more than saving time. The agent thinks along. It has a working instruction that leads it to evaluate options rather than just listing them. To make proposals rather than just answering questions. To name risks rather than just agreeing. "Agreement without counter-argument is worthless" — that is in its prompt, and it changes the quality of the discussion.

Of course this is not a real team member. The agent has never been on site with the client, it has no gut feeling, and it does not know the team dynamics. But it has access to the project status, the documented decisions, and the domain knowledge from its training. In an architecture discussion, that is often more context than a human colleague who has been in the project for two weeks would have.


Roles Need Boundaries

This is where it gets interesting — and where most approaches fail.

An AI agent that can do everything and sees everything is not a team member. It is a security risk. A human employee with access to HR data and project data does not post salary information in the team channel. Not because the system prevents it — they have read access — but because they understand the social context. AI agents understand no social context. They decide based on relevance, not confidentiality. The most interesting answer to a harmless question is often exactly the one with the confidential details.

This problem is barely addressed in the current discussion about AI security. Authentication is solved — we know who the agent is. Authorisation is solved — we know what it may access. What is missing is the third layer: Dissemination Control. What may the agent say where?

The solution does not lie in the prompt. "Please don't share confidential data" is a behavioural instruction, not a security architecture. It holds exactly as long as the model adheres to it — meaning unreliably.

The solution lies in the infrastructure. Concretely: the agent does not receive access to everything and is then asked to hold back. Instead, in each context it sees only the systems and data approved for that context. In the development channel it has access to the ticketing system and technical documentation. In the public channel it has no access to internal systems — not because a prompt forbids it, but because the tools simply do not exist in its request. It cannot share HR data because it does not know that HR data exists.

This is a fundamental difference: not filtering results, but preventing requests. What the agent cannot see, it cannot pass on.
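The principle can be sketched in a few lines of Python. The context names and tool identifiers here are hypothetical, not taken from any real system; the point is only that the tool list is derived from the context before the request is ever built:

```python
# Minimal sketch of context-scoped tool exposure (all names hypothetical).
# Tools are registered per context; when a request is assembled for a
# channel, only the tools enabled for that channel exist at all. Nothing
# is filtered afterwards, because nothing else was ever visible.

TOOLS_BY_CONTEXT = {
    "dev-channel": {"ticketing", "tech-docs"},
    "public-channel": set(),  # no internal systems exist here
}

def build_request(context: str, prompt: str) -> dict:
    """Assemble the model request: the tool list comes from the
    context, never from the agent's global capabilities."""
    tools = sorted(TOOLS_BY_CONTEXT.get(context, set()))  # default deny
    return {"prompt": prompt, "tools": tools}
```

In the public channel the returned tool list is simply empty; a prompt injection cannot invoke a tool that was never part of the request.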


Change of Perspective: The Persona Agent

If AI agents are team members with defined roles, then the logical next question is: which roles are missing from the team?

In every project team there is a systematic blind spot: the customer perspective. The team thinks in terms of feasibility, architecture, timelines. The customer thinks: "This export has been failing for three months and nobody cares."

We have been trying for decades to compensate for this blind spot with personas. Someone from product management writes down how the typical customer thinks, what they want, what bothers them. The problem: these personas are like pictures in a fashion magazine — idealised, static, and only marginally related to real people. They are based on what we believe we know, not on what the customer actually says.

What if the customer perspective did not come from a product manager's imagination, but from real data?

The idea: an AI agent whose perspective is distilled from actual customer communication — support tickets, acceptance protocols, feature requests, feedback. Not as a reporting tool that generates an evaluation somewhere. But as a team member that represents the customer perspective in discussions.

Important: the agent does not access customer data at runtime. It receives a prepared, cleaned summary as a prompt — a fixed opinion at a specific point in time. Like a real customer representative who comes to the meeting prepared, not sitting with a laptop in the ticket database. Whether this summary is updated daily, weekly, or monthly depends on the project. What matters is: the preparation happens offline, controlled, and cleaned of personal data. At runtime, no raw data flows through the model.
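A minimal sketch of what this offline preparation step could look like, assuming a simple ticket structure. The field names and the redaction rule are illustrative only; a real pipeline would use a proper PII scrubber rather than a single regex:

```python
# Sketch of the offline persona preparation step (hypothetical structure).
# Raw customer feedback is cleaned of personal data and condensed into a
# fixed prompt; at runtime the agent receives only this text, never the
# raw tickets.
import re
from collections import Counter
from datetime import date

def strip_personal_data(text: str) -> str:
    """Very rough redaction pass, shown for illustration only."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email]", text)

def build_persona_prompt(tickets: list[dict], as_of: date) -> str:
    """Condense ticket topics into a fixed, dated perspective."""
    topics = Counter(t["topic"] for t in tickets)
    lines = [f"Customer perspective, prepared {as_of.isoformat()}:"]
    for topic, count in topics.most_common(3):
        lines.append(f"- {count} tickets about: {topic}")
    return "\n".join(strip_personal_data(line) for line in lines)
```

The output is a plain string with a preparation date baked in — exactly the "fixed opinion at a specific point in time" described above.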

The team discusses a new feature. The customer agent speaks up: "Before we continue here — in the last three months there were 12 tickets about the existing export. The most common complaint: it fails with large amounts of data. Maybe we should fix that first before building something new on top."

No invented persona. Real patterns from real feedback. And a team that makes decisions closer to reality.

Inviting Rather Than Haunting

An important point: the persona agent does not need to permanently sit in the channel and comment on every discussion. That would be noise, not signal.

The better approach: the team deliberately invites the agent — to a planning meeting, to an architecture discussion, to a prioritisation decision. "Before we finalise this — what does the customer agent say?" This keeps control with the team. The agent is a tool that is consciously deployed, not a participant who constantly interrupts.

And "customer" is just one of many possible personas:

  • The end user: "I am not a technician. Explain to me what changes for me."
  • The regulator: "How would a supervisory authority evaluate this design?"
  • The sceptic: "What are three reasons not to do this?"
  • The new employee: "I don't understand your abbreviations. What does this mean concretely?"

Every perspective missing from the team can be represented as an agent. Not as a gimmick, but as a structured change of perspective, fed by relevant data rather than assumptions.

An important detail: the agents do not talk to each other. They talk to the humans. The human moderates — they decide who speaks when, when enough has been discussed, and they make the decision. The agents deliver perspectives, not consensus.
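The moderation pattern above can be sketched with stand-in persona agents. Everything here is illustrative; the structural point is that each invited agent receives only the human's question, and the answers flow back to the human, never to the other agents:

```python
# Sketch of human-moderated persona invitation (hypothetical interfaces).
# The human picks which agents to invite; each answers independently.
# Agents never see each other's replies: perspectives, not consensus.
from typing import Callable

PersonaAgent = Callable[[str], str]

def invite(question: str, agents: dict[str, PersonaAgent]) -> dict[str, str]:
    # Each invited agent gets only the human's question, nothing else.
    return {name: agent(question) for name, agent in agents.items()}

# Usage: the human deliberately invites two perspectives for one decision.
answers = invite(
    "Should we build the new export feature?",
    {
        "customer": lambda q: "Fix the failing export first.",
        "sceptic": lambda q: "Three reasons not to: ...",
    },
)
```

Because the agents are invoked rather than resident, there is no cross-talk and no constant interruption; the human remains the moderator by construction.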


What This Means for Architecture

As soon as multiple agents with different roles work in the same space, a problem arises that barely appears in the classic multi-agent literature: different agents need different permissions in the same context.

The developer agent has access to the ticketing system and technical documentation. The customer agent has no runtime access to systems — its perspective is baked in as a prepared prompt, without tools, without API calls, without the risk of raw data leaking through.

This results in two fundamentally different security profiles in the same channel: the developer agent needs dissemination control because it accesses data live and must filter it context-dependently. The persona agent needs no runtime dissemination control because its perspective was prepared and cleaned offline — it cannot leak what it does not have.

For the developer agent the principle remains: in each context it sees only the systems and data approved for that context. No tool, no system, no data source is available until it has been explicitly enabled for a combination of agent and context. What the agent cannot see, it cannot pass on.

The advantage: security scales with complexity. Whether two agents or twenty — the principle remains the same. And the audit trail arises automatically: which agent accessed which data when, which requests were blocked, which approvals exist.
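A minimal sketch of such a default-deny check with an automatic audit trail might look like this. The agent, context, and tool names are hypothetical; the essential property is that only explicitly granted (agent, context, tool) combinations pass, and every decision is logged:

```python
# Sketch of a default-deny permission check with an automatic audit
# trail (all names hypothetical). A tool call is allowed only if the
# exact (agent, context, tool) combination has been explicitly enabled;
# every decision, allowed or blocked, is appended to the audit log.
from datetime import datetime, timezone

GRANTS = {
    ("dev-agent", "dev-channel", "ticketing"),
    ("dev-agent", "dev-channel", "tech-docs"),
    # the persona agent has no grants at all: its perspective is a
    # prepared prompt, not a runtime data source
}

AUDIT_LOG: list[dict] = []

def authorize(agent: str, context: str, tool: str) -> bool:
    allowed = (agent, context, tool) in GRANTS  # default deny
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "context": context,
        "tool": tool,
        "allowed": allowed,
    })
    return allowed
```

Adding a twentieth agent means adding grants, not new mechanisms — which is what "security scales with complexity" amounts to in practice.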


What We Have Learned

I am not describing a finished product here. I am describing an approach that has been tested in practice for several months — and that has strengths and open questions.

What Works

Agents as team members change how the team works. The greatest effect is not automation, but documentation. Decisions that were previously made in someone's head and never written down now emerge from discussions that are automatically documented. The agent records what was discussed, creates tickets and documents, and the origin of every piece of information is traceable.

Infrastructure-based security is more robust than prompt-based. If the agent cannot see a tool, there is no prompt injection attack that gets it to use it anyway. That is simpler than any sophisticated filtering solution and considerably harder to circumvent.

Existing infrastructure suffices. Keycloak, Mattermost, YouTrack, Outline — all standard enterprise tools. What is missing is not new software, but a new wiring. The permission structures already exist. They just need to be made usable for AI agents.

What is Open

Persona agents are conceptually compelling, but practically untested. Generating customer perspectives from real data is a promising idea on paper. Whether the quality of the generated perspective is actually better than a well-made manual persona remains to be seen.

The knowledge base must be maintained. The agent gets better with every good document — and worse with every outdated one. Unlike a human, it does not notice when practice deviates from the document. This requires discipline that no tool can enforce.

Scaling is open. What works with one agent and one project does not automatically work with ten agents and twenty projects. The policies become more complex, the interactions harder to oversee. Whether default deny remains manageable at high complexity is not yet proven.

The regulatory environment is unclear. As soon as AI agents access internal data, GDPR questions arise. For persona agents, the risk is manageable: the preparation happens offline and can clean personal data before prompt creation. For agents with runtime access to systems, the question remains open: who is responsible for data processing — the bot operator, the platform provider, the model manufacturer? The honest answer: this is not yet conclusively resolved legally.

What This Means

The discussion about AI in companies almost always revolves around the same question: which jobs will AI replace? That is the wrong question.

What is emerging here is not automation. It is a new form of collaboration. The human brings judgement, context, and responsibility. The AI brings availability, pattern recognition, and documentation. Neither replaces the other. Together they are better than each alone.

That sounds like a platitude. The difference is: here it is not wishful thinking, but architecture. The roles are defined. The boundaries are technically enforced. Responsibility remains with the human — not because a prompt says so, but because the infrastructure enforces it.

It is not about whether AI will replace us. It is about how we build teams in which both sides contribute what they do best. Not exclusion, but inclusion of both worlds.

The technology for this exists. The open question is whether we are ready to think about collaboration this way.


Andre Jahn is a Solution Architect and advises companies on integrating AI into existing IT infrastructures. Focus areas: AI governance, Identity & Access Management, Enterprise Architecture.