Microsoft: Copilot Bug Gave Access to Users’ Confidential Emails


“A bug gave Copilot unwanted access to users’ emails and alarmed customers globally.”

Microsoft has admitted that a bug led its AI work assistant to access and summarize some customers’ private emails.

Microsoft has promoted Microsoft 365 Copilot Chat as a safe way for businesses and their employees to use its generative AI chatbot.

However, the company acknowledged that a recent bug allowed the tool to surface information from some business customers’ draft and sent email folders, including messages marked as confidential.

Microsoft says it “did not provide anyone access to information they weren’t already authorised to see” and has rolled out an update to address the problem.

However, several experts cautioned that these kinds of errors are unavoidable given the speed at which businesses are racing to ship new AI features.

Copilot Chat works across email and chat in Microsoft products such as Teams and Outlook, answering queries and summarizing communications.

Spokesperson, Microsoft

“We found and fixed a bug that allowed Microsoft 365 Copilot Chat to retrieve material from user-authored, confidential emails that were saved in the user’s Draft and Sent Items in Outlook desktop.”

“This behavior did not match our planned Copilot experience, which is intended to keep protected content out of Copilot access, even though our access restrictions and data security policies were in place.”

“For enterprise clients, a configuration update has been made available globally.”


Tech news site Bleeping Computer was the first to report the bug, saying it had seen a service alert confirming the problem.

A Microsoft notification stated that “users’ email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat”.

Notice

Even with a sensitivity label applied and a data loss prevention policy configured to stop unauthorized data sharing, a work tab in Copilot Chat displayed summaries of emails saved in a user’s Drafts and Sent folders.
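As an illustration of the kind of check an administrator might run after an incident like this, the sketch below lists Drafts and Sent Items messages whose Outlook sensitivity is marked confidential via the Microsoft Graph API. The `mailFolders` endpoint and the message `sensitivity` property are real Graph features, but the token handling and audit logic are illustrative assumptions, not Microsoft’s fix.

```python
# Hypothetical audit sketch (not Microsoft's fix): list Drafts and Sent
# Items whose Outlook sensitivity is "confidential" via Microsoft Graph,
# so an admin can see which messages an assistant should never surface.
import json
import urllib.parse
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"


def confidential_only(messages):
    """Keep only messages whose 'sensitivity' field is 'confidential'."""
    return [m for m in messages if m.get("sensitivity") == "confidential"]


def fetch_folder_messages(token, folder):
    """Fetch subject and sensitivity for a well-known folder
    ('drafts' or 'sentitems')."""
    query = urllib.parse.urlencode({"$select": "subject,sensitivity"})
    req = urllib.request.Request(
        f"{GRAPH}/me/mailFolders/{folder}/messages?{query}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp).get("value", [])


if __name__ == "__main__":
    token = "<access-token>"  # acquire via MSAL in a real audit
    for folder in ("drafts", "sentitems"):
        flagged = confidential_only(fetch_folder_messages(token, folder))
        print(f"{folder}: {len(flagged)} confidential message(s)")
```

In a real deployment the access token would come from an OAuth flow (for example via MSAL) with the `Mail.Read` permission; the point of the sketch is only that confidential items are identifiable and could be audited or excluded.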

Image: Microsoft says Copilot was summarizing confidential emails.

According to reports, Microsoft first discovered the bug in January.

A warning about the bug was also posted on an NHS staff support dashboard in England, which identified a “code issue” as the root cause.

According to part of the warning on the NHS IT support website, the service had been affected.

However, the NHS assured BBC News that no patient information had been exposed and that the contents of any draft or sent emails processed by Copilot Chat remained with their authors.

‘Data leakage will happen.’

Enterprise AI tools such as Microsoft 365 Copilot Chat, available to organizations with a Microsoft 365 subscription, typically come with stronger security controls and restrictions designed to prevent private company data from leaking.

However, some experts said the incident highlights the risks of deploying generative AI tools in certain workplaces.

Nader Henein, Data Protection & AI Governance Analyst, Gartner

“Given how frequently new and novel AI capabilities are introduced, this kind of blunder is inevitable.

“Businesses that use these AI solutions frequently lack the resources necessary to manage each new feature and safeguard themselves.

“Normally, organizations would just turn off the feature and wait for governance to catch up.”

“Unfortunately, that is almost impossible due to the pressure brought on by the deluge of unfounded AI hype.”

Alan Woodward, Cyber-Security Expert, Professor, University of Surrey

He said the incident demonstrated how crucial it is for these technologies to be opt-in only and private by default.

“These technologies will undoubtedly have problems, especially since they are developing so quickly. As a result, even while data leaks may not be deliberate, they will nevertheless occur.”

About The Author

Suraj Koli is a content specialist focused on technical writing about cybersecurity and information security. He has written numerous articles on cybersecurity concepts, covering the latest trends in cyber awareness and ethical hacking.
