Microsoft Reportedly Gets Rid of AI Ethics Team

The rise in the use of AI needs regulation, critics say.

Mar 16, 2023 - 10:30

The team that oversaw whether Microsoft's  (MSFT) AI products were shipped with protections to avoid social concerns was part of the company's recent layoffs.

The AI ethics team was among the 10,000 employees let go recently as the tech company slashed its workforce amid a slowdown in advertising spending and fears of a recession, according to an article in Platformer.


Risk increases when the OpenAI technology embedded in Microsoft's products is used. The ethics and society team's job was to reduce that risk.

The team had created a "responsible innovation toolkit," stating that "these technologies have potential to injure people, undermine our democracies, and even erode human rights — and they're growing in complexity, power, and ubiquity."

'Safely and Responsibly'

The "toolkit" was designed to help Microsoft's engineers predict any potential negative effects the AI could create.

Microsoft did not respond immediately to a request for comment. 

The company told news website Ars Technica, in a statement,  that it is "committed to developing AI products and experiences safely and responsibly, and does so by investing in people, processes, and partnerships that prioritize this.”

The company said the ethics and society team's work was "trailblazing."

Over the past six years, the company has prioritized increasing the number of employees in its Office of Responsible AI, which is still functioning.

Microsoft has two other responsible AI working groups that are still active: the Aether Committee and Responsible AI Strategy in Engineering.

OpenAI launched GPT-4, an advanced version of the technology behind ChatGPT that is being used in the Bing search engine, according to a Reuters article.

Self-Regulation Is Not Sufficient

Emily Bender, a University of Washington professor of computational linguistics who studies ethical issues in natural-language processing, said Microsoft's decision was "very telling that when push comes to shove, despite having attracted some very talented, thoughtful, proactive, researchers, the tech cos decide they're better off without ethics/responsible AI teams."

She also said, via a tweet, that "self-regulation was never going to be sufficient, but I believe that internal teams working in concert with external regulation could have been a really beneficial combination."

Researchers must decline to participate in hype around advances in AI and should be "advocating for regulation," Bender tweeted.

Last November, OpenAI launched ChatGPT, a chatbot with which humans can converse in natural language. It has become the buzz tool in tech circles.

Microsoft, based in Redmond, Wash., invested another $10 billion in OpenAI, the company that created ChatGPT.

The investment valued OpenAI at around $29 billion. 
