Facebook parent company Meta has granted the Kazakhstan government direct access to its content reporting system, as part of a joint agreement to work on removing content deemed harmful from social media platforms such as Facebook and Instagram.
In a joint statement, the Ministry of Information and Social Development of the Republic of Kazakhstan and the social media giant said the agreement, the first of its kind in Central Asia, would help increase the efficiency and effectiveness of efforts to counter the spread of illegal content.
Access to the content reporting system will allow the Kazakhstan government to flag content that may violate Facebook's global content policy or Kazakhstan's local content laws, Facebook said.
Under the agreement, both parties will also set up regular communication, including having an authorised representative from Facebook's regional office work with the Ministry on various policy issues.
"Facebook is delighted to work with the government of Kazakhstan together, particularly in the aspect of online safety for children," Facebook regional public policy director George Chen said in a statement.
"To make the first step for our long-term cooperation with the government, we are delighted to provide the 'content reporting system' to the government of Kazakhstan, which we hope can help the government to deal with harmful content in a more efficient and effective manner. The Facebook team will also continue to provide training to Kazakhstan to keep its cyberspace safe."
According to the pair, Facebook prepared the ministry for this access by training its specialists last month on how to use the content reporting system, as well as on Facebook's content policy and community standards.
Aidos Sarym, one of the deputies who introduced a Bill into the Kazakhstan parliament in September to protect children from cyberbullying, described the agreement as a "win-win" situation.
"During these negotiations, everyone came to consensus. It's basically a classic win-win situation where our citizens will get more effective opportunities to protect their rights, and companies to grow their business," he wrote on his Facebook page.
"At the same time, we were and will be consistent. We are ready to remove the toughest wording and together with the government to develop and introduce formulas that will work will not infringe on user interests or the interests of tech companies themselves."
Just last week, Facebook whistleblower Frances Haugen warned the UK Parliament that social media platforms using opaque algorithms to spread harmful content should be reined in. She said these algorithms could trigger a growing number of violent events, such as the attack on the US Capitol Building last January.
Haugen was speaking in London as part of an investigation into the draft Online Safety Bill put forward by the UK government earlier this year. The Bill proposes to force companies to protect their users from harmful content ranging from revenge porn and hate speech to racist abuse and disinformation.
Parliamentarians were taking evidence from Haugen following the recent revelation that she was the whistleblower behind the bombshell leak of internal Facebook documents.
Now known as the Facebook Files, the leaks were published by The Wall Street Journal and explored a variety of topics, including the use of different content moderation policies for high-profile users, the spread of misinformation, and the impact of Instagram on teenagers' mental health. The disclosures became a catalyst for a US Senate inquiry into Facebook's operations.