Meta has reportedly split its Responsible AI (RAI) team as it dedicates more resources to generative artificial intelligence. The Information broke the news today, citing an internal post it had seen.
According to the report, most RAI members will move to the company’s generative AI product team, while others will work on Meta’s AI infrastructure. The company periodically states that it wants to develop AI responsibly and even has a page dedicated to the promise, where it lists its “pillars of responsible AI,” including accountability, transparency, security, privacy, and more.
The report quotes Jon Carvill, who represents Meta, as saying that the company “will continue to prioritize and invest in safe and responsible AI development.” He added that although the company is splitting the team, those members “will continue to support relevant Meta-wide efforts on the development and responsible use of AI.”
Meta did not respond to a request for comment by press time.
The team already underwent a restructuring earlier this year, which Business Insider covered; it included layoffs that left RAI as “a shell of a team.” That report went on to say that the RAI team, which had existed since 2019, had little autonomy and that its initiatives had to go through lengthy negotiations with stakeholders before they could be implemented.
RAI was created to identify problems with the company’s AI training approaches, including whether its models are trained with appropriately diverse information, with a view to preventing things like moderation issues on its platforms. Automated systems on Meta’s social platforms have led to problems such as a Facebook translation issue that caused a false arrest, WhatsApp’s AI sticker generation producing skewed images when given certain prompts, and Instagram’s algorithms helping people find child sexual abuse materials.