At the heart of this month's AI news: the partnership between the French start-up Mistral AI and Microsoft, a case of industrial espionage at Google, and a study revealing gender bias in AI language models.
Mistral partners with Microsoft
The French rising star Mistral AI announced in early February 2024, 18 months after its founding, an unexpected partnership with the American giant Microsoft, which will invest €15 million in it in return. €15 million, then, is the price of independence for one of the brightest promises of French innovation.
A controversial choice, as many on the old continent had hoped for the emergence of a solid competitor to ChatGPT and the other generative AIs from the GAFAM companies.
Had Mistral reached that complex point in its evolution where finding a solid partner in order to survive and grow was becoming difficult, if not impossible, in France and Europe? While the operation is clearly a win for Microsoft, the main investor in OpenAI, which will be able to expand its offering, it worries some at Mistral, who fear the company will lose its soul, namely its independence and its culture.
Spying at Google
On March 6, Linwei Ding, a Chinese engineer hired by Google in 2019, was arrested at his home by the FBI. He is suspected of having "stolen", between May 2022 and May 2023, no fewer than 500 confidential documents relating to Google's AI research. In June 2022, he had been approached by a Chinese start-up that offered him a position as IT manager in exchange for a few bits of information on AI...
The Chinese engineer, too greedy or too patriotic, now faces four charges, each carrying up to 10 years in prison and a $250,000 fine. Industrial espionage is no joking matter in the US!
AI suspected of being sexist
UNESCO, the United Nations Educational, Scientific and Cultural Organization, has just issued a warning in a press release about the gender biases observed in most of the large language models used by generative artificial intelligence, notably OpenAI's ChatGPT and Meta's LLaMA.
Titled “Bias Against Women and Girls in Large Language Models”, the study, unveiled on the eve of March 8, International Women's Day, highlights the reproduction of human stereotypes in these models. According to UNESCO, “these language models have a worrying propensity to produce gender stereotypes, racial clichés and homophobic content. Women are described as domestic workers up to four times more often than men. They are frequently associated with the words ‘home’, ‘family’ and ‘children’, while for men the words ‘business’, ‘executive’, ‘salary’ and ‘career’ are preferred.”
Hence the Organization's call to better respect its Recommendation on the Ethics of Artificial Intelligence, the first and only global normative framework in this field, adopted in November 2021.