Information Insights

Useful information is everywhere.

It’s just waiting to be utilised!

Matt O’Mara


Understanding Copilot Agent Sprawl - Risks and Mitigations

Copilot agents are integrations or ‘assistants’ embedded into Microsoft 365 applications (like Word, Excel, Outlook, Teams) that use AI to help users work more efficiently.

Agent sprawl occurs when multiple instances or types of these agents proliferate without central control or oversight.

This can lead to inconsistent behavior, increased security risk, poor user experiences, and governance gaps.

Risks Associated with Agent Sprawl

Agent sprawl introduces several risks, including:

  • Data exposure

    If agents have access to sensitive or unclassified data, there's a risk of inappropriate data use or leakage.

  • Shadow IT

    End users or departments may implement their own Copilot configurations, bypassing IT governance.

  • Overlapping functionality

    Multiple agents may provide redundant or conflicting capabilities, confusing users.

  • Cost creep

    Licensing and compute costs can increase without clear ROI or oversight, so understand the licensing and cost implications before agents proliferate.

Mitigation Strategies

Establish Governance Early

  • Create a Copilot governance framework with cross-functional input (IT, security, compliance, legal, privacy, information management).

  • Define who can enable, configure, and monitor Copilot agents.

  • Set approval workflows for new agent deployments or integrations.

Inventory and Assess Current Usage

  • Use Microsoft tools like Purview and Defender for Cloud Apps to map where and how Copilot is being used.

  • Track permissions, data flows, and user activity tied to Copilot features.
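As a rough illustration of the tracking step, audit records exported from Purview (for example as CSV) can be summarised with a short script to see who is using Copilot and how often. This is a minimal sketch: the column names (`UserId`, `Operation`, `Workload`) and the `CopilotInteraction` operation value are assumptions modelled on a typical audit export, so check your tenant's actual export schema before relying on them.

```python
import csv
import io
from collections import Counter

# Sample rows mimicking a Purview audit log export.
# Column names are assumptions -- verify against your tenant's export.
sample_csv = """UserId,Operation,Workload
alice@contoso.com,CopilotInteraction,SharePoint
bob@contoso.com,CopilotInteraction,Teams
alice@contoso.com,CopilotInteraction,Word
carol@contoso.com,FileAccessed,SharePoint
"""

def copilot_usage_by_user(csv_text: str) -> Counter:
    """Count Copilot-related audit events per user, ignoring other operations."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(
        row["UserId"]
        for row in reader
        if row["Operation"] == "CopilotInteraction"
    )

if __name__ == "__main__":
    for user, count in copilot_usage_by_user(sample_csv).most_common():
        print(f"{user}: {count}")
```

Even a simple tally like this makes it easier to spot departments adopting Copilot outside of governance channels.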

Align with Data Classification and Protection

  • Ensure Copilot respects your data classification policies—sensitive or regulated data should be excluded unless protections are in place.

  • Leverage Microsoft Information Protection (MIP) and sensitivity labels to guide agent behavior.

Set Guardrails and Policies

  • Define acceptable use policies for AI-driven assistants.

  • Apply conditional access and least privilege principles.

  • Limit access to certain Copilot capabilities by role, group, or department.
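To make the conditional access idea concrete, here is a sketch of a policy body in the shape used by the Microsoft Graph conditional access API (`POST /identity/conditionalAccess/policies`). The group and application IDs are placeholders, and starting in report-only mode is a cautious assumption, not a requirement:

```json
{
  "displayName": "Require compliant device for Copilot users",
  "state": "enabledForReportingButNotEnforced",
  "conditions": {
    "users": {
      "includeGroups": ["<copilot-users-group-id>"]
    },
    "applications": {
      "includeApplications": ["<target-application-id>"]
    }
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": ["compliantDevice"]
  }
}
```

Running in report-only mode first lets you measure the impact on users before enforcement.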

Enable Transparency and Monitoring

  • Regularly review Copilot logs and audit trails to ensure compliance and spot anomalies.

  • Educate users about what Copilot does—and what it doesn’t do—to avoid overreliance or misuse.

SharePoint Agent Management at the Site Level

  • Site owners and site members can create, edit, and delete agents (preventing this requires custom permission changes, which is cumbersome)

  • You can’t edit the ready-made agent that comes with every SharePoint site

  • Depending on where you create an agent, it will be stored in a different location:

    • Document library: in the document library where it is created

    • Site home page or agent chat pane: in the Copilots folder under Site Assets

  • A number of controls exist today, but expect management to improve and become less disjointed over time

Final Recommendations

  • Treat Copilot like any other enterprise platform component, with associated security, lifecycle, and governance responsibilities.

  • Partner with business stakeholders to drive value-focused use cases while managing risk.

  • Plan for ongoing updates - Copilot will evolve rapidly, and policies must adapt accordingly.

[See also: SharePoint agents – Microsoft Adoption]

Welcome to Information Insights - a space where I explore and discuss topical challenges and developments in Information Management.