There are fears that, in the run-up to the European elections, the technology could be used to deceive voters
US tech giant Meta will set up a new team to tackle AI-powered disinformation ahead of the June 6-9 European Parliament elections.
The rapid development and adoption of generative artificial intelligence, which can produce text, images and video in seconds, has sparked fears that the technology could be used to deceive voters. Earlier this month, Meta, Microsoft, OpenAI and 17 other tech companies agreed to work together to prevent misleading AI content. In particular, they will target deepfakes: fabricated audio, images and video used to impersonate public figures or spread false information.
Social media platform TikTok announced in February that it would launch in-app “election centres”, in the local languages of each of the 27 EU member states, to host reliable information. Meta’s head of EU affairs, Marco Pancini, wrote in a post on the company’s blog that it will set up a dedicated EU elections operations centre to “identify potential threats and put mitigations in place across our apps and technologies in real time”.
“Since 2016, we’ve invested more than $20 billion in security and quadrupled the size of our global team working in this area to around 40,000 people,” he said. “This includes 15,000 moderators who review content on Facebook, Instagram and Threads in more than 70 languages – including all 24 official EU languages,” he added.
Experts from different teams across the company, including engineering, data science and legal, are collaborating on the project. But Deepak Padmanabhan of Queen’s University Belfast, who has co-authored a study on elections and artificial intelligence, argues that the company’s strategy is both problematic and incomplete. One of the problems, in his view, is how Meta plans to handle images created with the help of artificial intelligence.
For example, it will be difficult to authenticate a photograph purporting to show farmers clashing with police. “To prove that it is fake, we have to be sure that the specific event did not happen and that the police officers depicted did not clash with the farmers. This may be infeasible for both technology and humans,” he noted. “How can any technology tell whether it is fake or real? So it is not very clear how effective Meta’s generative AI strategy can be – there are serious limitations,” he added.
Meta, which currently works with 26 independent fact-checking organizations across the EU, said it will add three new partners in Bulgaria, France and Slovakia. Pancini stressed that the company’s work depends on collaboration and that further coordination will be needed. “On AI-generated content that appears online, we are also working with other companies in our industry on common standards and guidelines,” he said. “This effort is bigger than any one company and will require a huge effort from industry, government and civil society,” Pancini concluded.