AI Edition - It's AI Time - 36 [20-05-2024]


OpenAI dissolves safety team

According to sources, the Superalignment team, co-led by Jan Leike, was repeatedly denied the resources it needed to carry out its work on steering and controlling superintelligent AI systems. This pushed Leike (and others) to “breaking point”, with Leike expressing disappointment that “safety culture and processes” are taking “a backseat to shiny products”. In response, OpenAI CEO Sam Altman and co-founder Greg Brockman admitted that OpenAI has "a lot more to do" and is committed to building a “harmony of safety and capabilities.”

Microsoft gets $2B EU fine?

In March, the EU asked Microsoft for information about the electoral risks posed by its search engine Bing’s generative AI features (its Copilot chatbot and image-generation tools). After Microsoft failed to respond, the EU set a new deadline of May 27 and will then assess the information provided to establish whether further scrutiny is needed. If Microsoft misses this deadline or provides the EU with incomplete or misleading information, it could be fined up to 1% of its global annual revenue (last reported as $211B), roughly $2B.

Slack’s secret AI scandal

A post on X revealed that Slack uses customer data to train the “global models” that power its channel and emoji recommendations and search results. According to its (confusing) privacy principles, quietly updated in September, customer data is used for AI training by default, and users can only opt out of the program by email. In response to the backlash, Slack has revised its policy wording and reassured users that its machine-learning models are not trained “to learn, memorize or reproduce customer data.”

