Research
Security operations · Updated May 1, 2026 · 7 min

Why EDR Misses Agent Intent

Endpoint security sees process behavior. Agent security needs the missing layer: prompt, model, tool, repository, user, policy, and verdict in one chain.

Thesis

EDR remains necessary, but it was not designed to explain why an AI agent chose a tool action or whether that action matched organizational policy.

EDR · investigations · audit

Technical readout

Correlation layer EDR does not own

Endpoint telemetry remains important, but buyers should ask how the agent layer is joined back to process behavior.

Tool call ID
Which agent decision caused the process, file, or network event? Attach runtime event IDs to commands, file reads, MCP calls, and session timelines before the OS event is flattened.

Prompt context
Was this user intent, model initiative, retrieved content, or an injected instruction? Preserve prompt and source-context summaries, with redaction, so investigators can classify causality.

Policy verdict
Was the action expected under the org policy at the time it occurred? Store the verdict, rule, pack, enforcement mode, and policy version next to the endpoint action.

Normal baseline
Do passed actions show what normal agent behavior looks like for this user and repo? Log pass volume with the same correlation keys as warn and block events so investigations have contrast.
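The four correlation keys above can be sketched as a single event record. The field names here are illustrative assumptions, not a published AgentKeeper schema:

```python
from dataclasses import dataclass

# Illustrative record shape: every runtime event, including passes,
# carries the same correlation keys. Field names are assumptions.
@dataclass
class RuntimeEvent:
    tool_call_id: str     # joins back to process, file, and network events
    prompt_summary: str   # redacted context so causality can be classified
    policy_rule: str      # which rule matched
    policy_version: str   # policy state at the time of the action
    verdict: str          # "pass" | "warn" | "block"

# A pass event keeps the same keys as warns and blocks, so later
# investigations have contrast between normal and anomalous behavior.
event = RuntimeEvent(
    tool_call_id="tc-9f2e",
    prompt_summary="user asked to clean build artifacts",
    policy_rule="shell.delete.scoped",
    policy_version="2026-05-01",
    verdict="pass",
)
```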

Endpoint telemetry starts too low

An EDR event can tell an analyst that a process opened a file, spawned a shell, reached the network, or modified a repository. It usually cannot tell them which prompt led to the action, which model suggested it, which agent tool executed it, or which policy should have governed it.

That missing context matters because agent behavior can look like normal developer behavior at the operating-system layer. The same shell command can be routine maintenance, model-driven drift, or a prompt-injection outcome depending on the session context.
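A minimal sketch of that point, using a hypothetical classifier: the OS-level command is byte-for-byte identical in every branch, and only the session metadata separates the three outcomes:

```python
# Hypothetical helper, not a real API: the same shell command is classified
# purely by agent-session context, which EDR alone cannot see.
def classify_origin(command: str, session: dict) -> str:
    if session.get("injected_instruction"):
        return "prompt-injection outcome"
    if session.get("user_requested"):
        return "user intent"
    return "model initiative"

cmd = "rm -rf build/"
print(classify_origin(cmd, {"user_requested": True}))        # user intent
print(classify_origin(cmd, {"injected_instruction": True}))  # prompt-injection outcome
print(classify_origin(cmd, {}))                              # model initiative
```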

Intent is not a replacement for behavior

AgentKeeper should not replace EDR. The layers answer different questions. EDR asks whether endpoint behavior is malicious or suspicious. Agent runtime security asks whether an agent action was allowed for this user, workstation, repository, data surface, and policy.

Security teams need both when AI agents become part of daily development. Endpoint alerts without agent context leave too much reconstruction work for analysts.

Passed events are investigation evidence

Only logging blocked events creates a distorted picture. Analysts need to see the normal passed actions around a block: the reads that preceded it, the prompt that shaped it, the MCP calls that were allowed, and the policy state at the time.

That surrounding record helps separate expected automation from actual abuse. It also gives policy owners the data they need to reduce noisy warnings without weakening important blocks.
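Once pass volume is logged under the same keys, noisy rules become measurable rather than anecdotal. A minimal sketch, assuming a flat list of (rule, verdict) pairs:

```python
from collections import Counter

# Sample telemetry (illustrative): rules that warn often but never block
# are candidates for tuning without weakening enforced blocks.
events = [
    ("net.egress", "pass"), ("net.egress", "pass"), ("net.egress", "warn"),
    ("net.egress", "warn"), ("net.egress", "warn"), ("secrets.read", "block"),
]
counts = Counter(events)

for rule in {r for r, _ in events}:
    warns, blocks = counts[(rule, "warn")], counts[(rule, "block")]
    if warns > 0 and blocks == 0:
        print(f"{rule}: {warns} warns, 0 blocks -> candidate for tuning")
```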

The investigation chain has to be complete

A useful AI-agent investigation starts with the verdict, then opens the evidence: prompt context, tool arguments, output summary, policy match, affected repository, identity, workstation, model, and time.

That chain turns agent adoption from an exception process into a governed operating model. Teams can approve useful tools without accepting blind spots.
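That evidence chain can be assembled mechanically once every record shares a correlation key. A sketch with hypothetical record fields:

```python
# Illustrative evidence store: prompt, tool arguments, and verdict all
# share a tool_call_id, so the chain can be rebuilt from the verdict.
records = [
    {"tool_call_id": "tc-9f2e", "ts": 3, "kind": "verdict",   "detail": "block: secrets.read"},
    {"tool_call_id": "tc-9f2e", "ts": 1, "kind": "prompt",    "detail": "summarize deploy config"},
    {"tool_call_id": "tc-9f2e", "ts": 2, "kind": "tool_args", "detail": "read .env"},
    {"tool_call_id": "tc-0000", "ts": 1, "kind": "prompt",    "detail": "unrelated session"},
]

def timeline(tool_call_id: str) -> list[dict]:
    """Every record sharing the correlation key, in time order."""
    chain = [r for r in records if r["tool_call_id"] == tool_call_id]
    return sorted(chain, key=lambda r: r["ts"])

for r in timeline("tc-9f2e"):
    print(r["ts"], r["kind"], r["detail"])
```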

Technical model

Intent plus behavior

EDR explains endpoint behavior. Agent runtime telemetry explains the policy and session context that led to it.

Signals in the model

Agent session: prompt, model, tool plan, source material, identity
Runtime event: normalized action, policy match, verdict, redaction
Endpoint behavior: process, file, network, repository, child command
Investigation: timeline, evidence, owner, blast radius, response
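The layers above can be sketched as types joined on one correlation key; every name here is an illustrative assumption, not a published schema:

```python
from dataclasses import dataclass

@dataclass
class AgentSession:
    tool_call_id: str
    prompt: str
    model: str

@dataclass
class RuntimeEvent:
    tool_call_id: str
    action: str
    verdict: str

@dataclass
class EndpointBehavior:
    tool_call_id: str
    process: str
    path: str

def join(sessions, events, behaviors, key):
    """One investigation view: each layer filtered to the same key."""
    def pick(xs):
        return [x for x in xs if x.tool_call_id == key]
    return pick(sessions), pick(events), pick(behaviors)

# Illustrative sample data ("model-x" is a placeholder model name).
sessions = [AgentSession("tc-9f2e", "clean build artifacts", "model-x")]
events = [RuntimeEvent("tc-9f2e", "shell.exec", "pass")]
behaviors = [
    EndpointBehavior("tc-9f2e", "bash", "build/"),
    EndpointBehavior("tc-0000", "bash", "/etc/passwd"),
]

s, e, b = join(sessions, events, behaviors, "tc-9f2e")
```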

The two layers answer different questions and become stronger together.