Last updated: July 31, 2024
Copilot combines the power of large language models (LLMs) with Teams data to provide summaries and answer questions in real time to help you stay productive in the workplace.
Chats and channels:
- Copilot in chats and channels summarizes chat messages. Its inputs are messages from the conversation it is invoked from, and its outputs are a high-level summary and key takeaways. Users can also choose to summarize the last 1, 7, or 30 days of conversation. They can also enter a query through free-text Q&A, and Copilot will generate answers based on the chat history. Common use cases are summarizing the chat and getting answers to questions such as what decisions were made or which items are still open.
Compose:
- Copilot in Compose helps users rewrite and adjust their messages so they can communicate effectively and collaborate efficiently with their coworkers. Copilot goes beyond spelling and grammar issues; it can help users strike the right tone and reach the desired conciseness by adjusting message length. With the new Custom Tone feature, users can also type their own instructions to fine-tune the tone, translate the message, or add more context.
Meetings and calls:
- Copilot for meetings and calls starts working once a transcript is initiated. In meetings, Copilot can be started without a standard transcript if the organizer sets it to “only during the meeting.” The user can enter a query (suggested or free text) at any time during or after the meeting or call, and Copilot will generate answers based on the transcript. Common uses are summarizing the meeting, generating action items, listing unresolved questions, and more.
Copilot in Teams is available in chats, channels, calls, and meetings. To learn more, see Welcome to Copilot in Microsoft Teams.
Copilot in Teams was evaluated through extensive manual and automated testing on top of Microsoft internal usage, public data, and online measurements. The goal of the evaluations and monitoring is to maintain high-quality responses that are complete and legible, while ensuring reliability and low latency. Additional evaluations were performed over custom datasets of offensive and malicious prompts (user questions) and responses.
In addition to the online and offline evaluations and metrics, Copilot in Teams is continuously evaluated through user feedback (the thumbs up and thumbs down function), verbatim feedback, and manual evaluations by our product teams.
Copilot in Teams does not share data with third parties.
Copilot in Teams does not use user data to train the model.
Teams Copilot for Meetings is limited to a single meeting that has a transcript, or to an ongoing meeting where Copilot is running. Copilot answers based only on information available in the meeting transcript. Very long meetings are currently not supported: answers may be limited in meetings longer than approximately two hours.
In chat scenarios, Copilot is limited in how much data it can process and how far back it will go to fetch an answer. The farthest back Copilot can process is 30 days from the most recent message sent; it may also be further limited by message retention policies. In meeting scenarios, long meetings (longer than two hours) may have higher latency than shorter ones.
Admins can choose which Copilots to enable via the Microsoft 365 Admin Center and can enable or disable Copilot in Teams. Admins can also decide whether to allow Copilot for Teams Meetings “only during the meeting.” To learn more about managing Copilot in Teams meetings, see Manage Copilot for Microsoft Teams meetings and events.
If your admin allows it, meeting organizers can set Copilot to “during and after the meeting,” which requires a retained transcript to be turned on to use Copilot, or to “only during the meeting,” which keeps no information after the meeting ends.
Teams Copilot responds best when users do the following:
- Limit questions to topics covered in the chat or meeting. Copilot will not answer unrelated questions.
- Speak or chat in supported languages. Copilot will respond in supported languages; however, English inputs produce better responses. For meetings, make sure the defined language of the transcript matches the spoken language.
- Ensure that a recent and substantive volume of content is available. Copilot will provide error messages if these requirements are not met.
Copilot in Teams supports English, Spanish, Japanese, French, German, Portuguese, Italian, and Chinese Simplified. To learn more, see Supported languages for Microsoft Copilot.
Copilot and Microsoft 365 are built on Microsoft's comprehensive approach to security, compliance, and privacy.
For more information about privacy, see the following resources:
- If you’re using Microsoft 365 Copilot in your organization (with your work or school account), see Data, Privacy, and Security for Microsoft 365 Copilot.
- If you’re using Copilot in Microsoft 365 apps for home as part of Copilot Pro (with your personal Microsoft account), see Copilot in Microsoft 365 apps for home: your data and privacy.
Teams generative AI features strive to provide accurate and informative responses based on the data available. However, answers may not always be accurate, as they are generated based on patterns and probabilities in language data. Use your own judgment and double-check the facts before making decisions or taking action based on the responses. While these features have mitigations in place to avoid sharing unexpected offensive content in results and take steps to prevent displaying potentially harmful topics, you may still see unexpected results. We’re constantly working to improve our technology to proactively address issues in line with our Responsible AI principles.
LLMs naturally give slightly different responses to each query. This is similar to human behavior: if you ask someone the same question on different days, their answer will not be exactly the same. During a meeting and in an active chat, Copilot gives an answer based on the most recent content available. This content evolves over time: a topic that Copilot found relevant for an earlier summary might not be as relevant five minutes later.
By nature, Copilot summaries for meetings are concise and will not always cover 100% of the meeting content. However, you can always follow up with questions about specific details or topics to get the most out of your meetings. We are always looking to improve Copilot: giving your feedback (thumbs up or thumbs down) as you use Copilot is invaluable.
Currently, Copilot uses the first string in the attendee’s display name. If an attendee’s display name is reversed (Last Name, First Name), Copilot may display the attendee’s name differently. We are rolling out changes to ensure that Copilot uses the correct first name. Please share feedback to help us improve further.
If you find a generated response that is incorrect or if you encounter harmful or inappropriate content, please provide that feedback by clicking thumbs down in the summary and providing additional comments in the feedback form. This feedback helps us to improve and minimize this content in the future.