Microsoft to discontinue AI facial recognition tool that identifies emotions


Microsoft Corp. announced Tuesday that it plans to phase out the artificial intelligence-based "emotion recognition" tool in its Azure Face facial recognition service, citing privacy concerns.

Experts have strongly criticized such “emotion recognition” tools because facial expressions and emotions differ according to country and race, making it inappropriate to equate external expressions of feelings with internal emotions.

The Redmond giant has reportedly been reviewing its emotion-recognition systems since last year to determine whether they are grounded in science. To do this, it collaborated with internal and external researchers to understand the limitations and potential benefits of the technology and to weigh the tradeoffs.

“Particularly in the case of emotion classification, these efforts raised important questions about privacy, the lack of consensus on a definition of ‘emotions’ and the inability to generalize the association between facial expression and emotional state across use cases, regions and demographics,” Sarah Bird, Principal Group Product Manager at Microsoft’s Azure AI unit, said in a blog post.

“API access to capabilities that predict sensitive attributes also opens up a wide variety of ways they can be exploited, including exposing people to stereotyping, discrimination, or unfair denial of services.”

To mitigate these risks, Microsoft has decided to retire the general-purpose Face API capabilities that attempt to infer emotional state, gender, age, smile, facial hair, hair, and makeup.

As of June 21, 2022, detection of these attributes is no longer available to new customers, who must also apply for access to use facial recognition operations in the Azure Face API, Computer Vision, and Video Indexer.

Existing customers, on the other hand, have until June 30, 2023, after which the emotion recognition capabilities will be retired. They likewise have one year to apply for and receive approval for continued access to the facial recognition services, based on the usage scenarios they describe.

However, face detection capabilities (including detection of blur, exposure, glasses, head pose, landmarks, noise, occlusion, and the facial bounding box) remain generally available and require no application.
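For developers using the service, here is a minimal sketch of what a detection request might look like once it is restricted to the still-available attributes. The endpoint, key, and image URL are placeholders, and the attribute list simply mirrors the categories named above; exact parameter names and versions can vary by API release, so treat this as illustrative rather than definitive.

```python
import requests

# Placeholder values; substitute your own Azure resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<subscription-key>"


def detect_faces(image_url: str):
    """Call the Face detect operation, requesting only attributes that
    remain generally available (blur, exposure, glasses, head pose,
    noise, occlusion). Emotion, age, gender, smile, facial hair, hair,
    and makeup are deliberately omitted, since access to those
    predictions is being retired or placed behind Limited Access."""
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params={
            "returnFaceLandmarks": "true",
            "returnFaceAttributes": "blur,exposure,glasses,headPose,noise,occlusion",
        },
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
        timeout=10,
    )
    response.raise_for_status()
    # Returns a list of detected faces with bounding boxes, landmarks,
    # and the requested non-identity attributes.
    return response.json()


if __name__ == "__main__":
    faces = detect_faces("https://example.com/photo.jpg")
    print(f"Detected {len(faces)} face(s)")
```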

By introducing Limited Access, the company is adding an extra layer of control to the use and implementation of facial recognition to ensure use of these services aligns with “Microsoft’s Responsible AI Standard,” a 27-page Microsoft-produced document that sets the guidelines for AI systems to ensure they will not have a harmful impact on society.

The requirements include ensuring that systems provide “valid solutions to the problems they are designed to solve” and “comparable quality of service for identified demographics, including marginalized groups.”

For now, Microsoft has just asked its customers to “avoid situations that invade privacy or where technology may struggle,” such as identifying minors, but it has not explicitly banned such use.

The company has also imposed some restrictions on the Custom Neural Voice feature, requiring users to sign up and explain how they will use it.
