The Meta Safety Advisory Council has written the company a letter about its concerns with its recent policy changes, including its decision to suspend its fact-checking program. In it, the Council said that Meta's policy change "risks prioritizing political ideologies over global safety imperatives." It highlights how Meta's position as one of the most influential companies in the world gives it the power to shape not only online behavior, but also social norms. The company risks "normalizing harmful behaviors and undermining years of social progress … by stripping protections for protected communities," the letter says.
Facebook's Help Center describes the Meta Safety Advisory Council as a group of "independent online safety organizations and experts" from several countries. The company formed the council in 2009 and consults with its members on issues around public safety.
Meta CEO Mark Zuckerberg announced the sweeping change in the company's approach to moderation and speech earlier this year. In addition to revealing that Meta is ending its third-party fact-checking program and implementing X-style community notes, something X's Linda Yaccarino applauded, he also said that the company is getting rid of "a lot of restrictions on topics like immigration and gender that are out of touch with mainstream discourse." Shortly after his announcement, Meta changed its Hateful Conduct policy to "allow allegations of mental illness or abnormality when based on gender or sexual orientation." It also removed a policy that prohibited users from referring to women as household objects or property and from calling transgender or non-binary people "it."
The Council acknowledged Meta's "continued efforts to address the most egregious and illegal harms" on its platforms, but it also emphasized that addressing "ongoing hate against individuals or communities" must remain a priority for Meta, since it has ripple effects that extend beyond its apps and websites. And since marginalized groups, such as women, LGBTQIA+ people and immigrant communities, are disproportionately targeted online, Meta's policy changes could take away what made them feel safe and included on the company's platforms.
As for Meta ending its fact-checking program, the Council explained that while crowdsourced tools such as community notes can address misinformation, independent researchers have raised concerns about their effectiveness. A report last year showed that posts with false election information on X, for example, did not display proposed Community Notes corrections, and they still racked up billions of views. "Fact-checking serves as a vital safeguard, particularly in regions of the world where misinformation fuels offline harm, and as Meta's adoption grows worldwide," the Council wrote, "Meta should ensure that its new approaches mitigate risks globally."
This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-safety-advisory-council-says-the-companys-moderation-changes-prioritize-politics-over-safety-140026965.html?src=rss