New features are coming to Microsoft Teams that improve audio quality, making phone calls, video calls and meetings better. Many of these features are powered by artificial intelligence, helping participants remain focused, reducing distractions and improving productivity and satisfaction. Face-to-face communication remains the most effective way for most people to engage, but it has steadily been declining in favour of more immediate calls and video meetings. Outside of dedicated, purpose-built rooms with the very best audio equipment, background noise, low-quality devices and less-than-ideal spaces can significantly degrade the audio quality of conversations.
According to a Wired article titled “You’re not the only one being annoyed by bad audio at work”, office workers lose 29 minutes of productivity every week because of poor sound quality. Research conducted by Technics in 2021 examined the effects of poor audio quality on participants’ wellness while using audio and video conferencing systems in professional office environments. The results showed that changes in sound quality had statistically significant effects on a range of wellness measures, including stress, anxiety, frustration, confusion and energy. Using the power of artificial intelligence, Microsoft is taking steps to improve audio quality within Microsoft Teams, with a series of new features coming soon.
Teams will soon be able to recognise unwanted echo, which occurs when sound from a speaker is picked up by a microphone positioned too close to it. Artificial intelligence algorithms can detect these echoes and remove them automatically from the audio stream, resulting in clearer conversations and less listening effort.
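To give a flavour of what echo cancellation involves, here is a minimal sketch of the classical signal-processing approach: a normalised least-mean-squares (NLMS) adaptive filter that learns the echo path from the loudspeaker signal and subtracts the estimated echo from the microphone signal. This is an illustrative stand-in, not Microsoft’s AI-based method, and all names here are hypothetical.

```python
def nlms_echo_cancel(far_end, mic, taps=8, mu=0.5, eps=1e-8):
    """Subtract an adaptively estimated echo of `far_end` from `mic`.

    far_end: samples played through the loudspeaker
    mic: samples captured by the microphone (echo + local speech)
    Returns the residual signal with the echo progressively removed.
    """
    w = [0.0] * taps  # adaptive filter weights modelling the echo path
    out = []
    for n in range(len(mic)):
        # Most recent `taps` far-end samples, zero-padded at the start
        x = [far_end[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        echo_est = sum(wi * xi for wi, xi in zip(w, x))
        e = mic[n] - echo_est  # residual = local speech + estimation error
        norm = sum(xi * xi for xi in x) + eps
        # NLMS update: step size normalised by input energy
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
        out.append(e)
    return out
```

After the filter converges, the residual contains mostly the local talker’s speech; a production canceller would add double-talk detection and far longer filters.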
Rooms with poor acoustics can cause sound to bounce between surfaces, resulting in high levels of reverberation. This can be distracting and make it difficult to hear what another person is saying. The effect is often more pronounced the further a person is from a microphone, such as in large meeting rooms or when speaking hands-free on a call.
Microsoft have developed a machine learning model that modifies speech in real time to eliminate the reverb, making it sound to participants as if everyone had a microphone close to their mouth.
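For intuition, reverberation can be thought of (in a drastically simplified toy model) as the dry speech plus delayed, attenuated copies of itself. The sketch below inverts a single-reflection version of that model; real dereverberation, including the ML approach described above, handles dense, room-dependent reflections and is far more involved. The function name and parameters are hypothetical.

```python
def remove_reverb_tap(wet, delay, gain):
    """Invert a toy single-echo reverb model wet[n] = dry[n] + gain * dry[n - delay].

    Recursively subtracts the estimated reflection to recover the dry signal.
    """
    dry = []
    for n, sample in enumerate(wet):
        # The reflection is a delayed, attenuated copy of the already-recovered dry signal
        tail = gain * dry[n - delay] if n >= delay else 0.0
        dry.append(sample - tail)
    return dry
```

Because the model is exactly invertible, the dry signal is recovered perfectly here; real rooms have thousands of overlapping reflections, which is why a learned model is needed in practice.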
When speaking to another person, it’s common to hear and speak at the same time, particularly to clarify or validate what the other person is saying. Removing echoes or suppressing other unwanted noises is very challenging in these full-duplex situations, when multiple people speak at once. Using artificial intelligence, Microsoft have trained a model on over 30,000 hours of speech samples to identify which sounds to retain and which to eliminate, creating a more natural-sounding conversation.
For some time, there has been an option in Microsoft Teams to enable noise suppression, but users have needed to turn it on manually. Advancements in the technology and extensive testing have optimised the machine learning model such that it will soon be enabled by default. The technology identifies and then suppresses noises such as car alarms, dogs barking, or doors being slammed, which might otherwise interrupt the natural flow of conversation.
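As a very rough illustration of the idea behind noise suppression, the sketch below implements a simple energy-based noise gate: it estimates a noise floor from the quietest frames and attenuates frames near that floor while passing louder speech through. Teams’ ML-based suppression is far more sophisticated (it distinguishes speech from noise by content, not just loudness); this function and its parameters are purely illustrative.

```python
def noise_gate(samples, frame=160, atten=0.1, margin=4.0):
    """Attenuate frames whose energy is near the estimated noise floor.

    frame: samples per analysis frame (e.g. 10 ms at 16 kHz)
    atten: gain applied to frames judged to be background noise
    margin: how far above the noise floor a frame must be to pass unchanged
    """
    frames = [samples[i:i + frame] for i in range(0, len(samples), frame)]
    energies = [sum(s * s for s in f) / max(len(f), 1) for f in frames]
    # Use a low percentile of frame energy as the noise-floor estimate
    floor = sorted(energies)[len(energies) // 10]
    out = []
    for f, e in zip(frames, energies):
        g = 1.0 if e > margin * floor else atten  # pass speech, duck noise
        out.extend(s * g for s in f)
    return out
```

A gate like this would clip quiet speech and let loud noises through, which is exactly why learned models that recognise what a sound is, rather than how loud it is, work so much better.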
These new features coming to Microsoft Teams are designed to improve audio quality, resulting in focussed and productive engagements free of distraction and stress caused by poor quality equipment, environmental issues, and unwanted interruptions. No action is required by users or IT teams to implement these new features. They are expected to roll out over the coming months, together with other video enhancements to improve call and meeting quality.
Listening and interacting between individuals can be difficult at the best of times, but hopefully these new AI-powered improvements will make life just a little easier when using Microsoft Teams.