Microsoft Copilot is an AI assistant integrated into Microsoft 365 apps. Because it can draw on the data and documents a user can access in Microsoft 365, it raises concerns for information security teams: on average, 10% of a company's M365 data is open to all employees, and Copilot can also generate new sensitive data that itself needs protection. The use cases for Copilot are vast, including drafting proposals, summarizing meetings, triaging email, and analyzing data in Excel.
Copilot processes a prompt by gathering the user's business context, sending the prompt and context to an LLM (such as GPT-4) to generate a response, and then running post-processing responsible AI checks. On the security side, Copilot only uses data from the current user's M365 tenant and does not use business data to train the foundational LLMs. However, it surfaces all organizational data to which the individual user has at least view permissions, and its responses do not inherit the Microsoft Purview Information Protection (MPIP) labels of the files they draw from. Copilot's responses are also not guaranteed to be factual or safe, so human review remains necessary.
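That flow can be pictured as a simple three-stage pipeline. The sketch below is only a conceptual illustration of the steps described above, not Microsoft's implementation; every name in it (gather_business_context, call_llm, responsible_ai_check) is a hypothetical stand-in:

```python
# Conceptual sketch of the Copilot prompt flow described above --
# NOT Microsoft's actual implementation. All function names are
# hypothetical stand-ins used for illustration only.
from dataclasses import dataclass, field

@dataclass
class GroundedPrompt:
    user_prompt: str
    # Only content the requesting user already has view permission on.
    context_snippets: list[str] = field(default_factory=list)

def gather_business_context(user_id: str, prompt: str) -> list[str]:
    """Permission-trimmed retrieval from the user's own M365 tenant."""
    return []  # stub: a real system would search SharePoint/OneDrive/Teams

def call_llm(grounded: GroundedPrompt) -> str:
    """Send prompt + context to the foundation model (e.g., GPT-4).
    Business data rides along as context only; it is not used to
    train the underlying model."""
    return f"Draft response for: {grounded.user_prompt}"  # stub

def responsible_ai_check(draft: str) -> str:
    """Post-processing pass for safety and compliance. The output is
    still not guaranteed factual, so human review remains necessary."""
    return draft  # stub

def process_prompt(user_id: str, prompt: str) -> str:
    grounded = GroundedPrompt(prompt, gather_business_context(user_id, prompt))
    return responsible_ai_check(call_llm(grounded))
```

The key security property in this picture is that the retrieval step is scoped by the user's existing permissions, which is exactly why over-permissive access in the tenant translates directly into over-broad Copilot answers.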
To roll Copilot out safely, organizations first need to address data security. Many organizations struggle to enforce least privilege in Microsoft 365, where permissions often sit in the hands of end users rather than IT or security teams. Labeling data for protection is difficult when it depends on humans, and the efficacy of label-based protection is likely to degrade as the volume of AI-generated (and therefore unlabeled) data grows. Organizations should establish a strong data security posture before enabling Copilot, and solutions like Varonis for Microsoft 365 can help automate security controls and mitigate these risks.
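As a concrete starting point, a security team can inventory org-wide exposure before enabling Copilot. The sketch below uses the Microsoft Graph API to flag drive items shared via organization-wide or anonymous links; it assumes an OAuth access token with the Files.Read.All permission is already in hand, and it omits pagination and error handling for brevity:

```python
# Rough sketch: flag files in a drive that are exposed tenant-wide via
# sharing links, using Microsoft Graph. Assumes a valid bearer token
# with Files.Read.All; pagination (@odata.nextLink) is omitted.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def find_org_wide_items(drive_id: str, token: str) -> list[str]:
    headers = {"Authorization": f"Bearer {token}"}
    exposed = []
    items = requests.get(
        f"{GRAPH}/drives/{drive_id}/root/children", headers=headers
    ).json().get("value", [])
    for item in items:
        perms = requests.get(
            f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions",
            headers=headers,
        ).json().get("value", [])
        for p in perms:
            scope = p.get("link", {}).get("scope")
            # "organization" links make a file visible to every employee,
            # and therefore to Copilot acting on any employee's behalf.
            if scope in ("organization", "anonymous"):
                exposed.append(item.get("name", item["id"]))
                break
    return exposed
```

A real remediation program would extend this across all sites and drives, feed the results into access reviews, and re-run it continuously, which is the kind of automation that dedicated data security platforms provide out of the box.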
