Chinese accounts on X pretending to be Americans are trying to cause division and disruption.

This article discusses Meta's challenge in dealing with Chinese state-backed information operations, covering the methods used, the audiences targeted, and the implications for global digital platforms.

1. Incident Overview.

In December 2021, Meta, the parent company of Facebook, disclosed that it had discovered coordinated inauthentic behavior originating in China. According to Ben Nimmo, the leader of the company's threat-intelligence team, the operators were believed to be associated with the Chinese government, using the platform to spread disinformation and tarnish rival countries' reputations.

The campaign pushed content aimed at undermining the United States and its allies. It was believed to target an international audience, with India drawing the most attention, as a report by Graphika highlighted.

While China had previously been named as a source of disinformation on Meta's platforms, this marked the first major public signal that China's aggressive strategy was not confined to domestic audiences or local platforms.

2. Exploiting Buddhist Imagery.

The core of the Chinese operation was an extensive network of fake accounts and pages. These used stock images or AI-generated profile photos to appear legitimate, and the content they shared was meticulously planned and curated. A large share of the disinformation campaign relied on exploiting Buddhist imagery and messaging.

The pages would post Buddhist teachings, artwork, and philosophical musings to gather followers. Then, interspersed among these posts, they would slip in subtle political propaganda, blending it with the benign content.

The operation was not confined to Facebook. The operators also leveraged platforms such as YouTube and Twitter, weaving them together to maximize the message's reach and impact. By concealing hostile activity within seemingly benign content, the campaign significantly reduced its risk of exposure.

3. Targeting International Audiences.

Although most of the content was in English, it was intended for non-native speakers. The poorly translated text, full of awkward phrases and grammatical errors, resonated mostly with audiences in the Indian subcontinent and Southeast Asia. This was especially true in Sri Lanka, where a significant share of the population practices Buddhism.

India, a primary target, was portrayed as a threat to regional peace. The content accused Indian forces of aggression and deceit, playing into the long-standing rivalry between India and China.

Additional narratives cast doubt on the U.S. and its allies, focusing particularly on their controversial conduct abroad.

While the content was identified and removed, the episode illustrates how difficult such operations are to regulate. It also points to the broader implications for global digital platforms caught in the crossfire of information warfare.

4. Coordinated Inauthentic Activity.

Beyond the disinformation itself, the coordinated and manipulative techniques involved are worth noting. The campaign relied on coordinated inauthentic behavior, a term coined by Meta to describe attempts to manipulate public discourse through networks of fake accounts or coordinated sharing.

The operators would ‘like’ their own content, creating a false sense of engagement and popularity. The disinformation was also shared across different platforms and forums, increasing visibility and reach.
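
As a rough, hypothetical illustration of how this kind of self-amplification can be surfaced, the Python sketch below flags posts whose likes come mostly from a suspected operator cluster. The data shapes, names, and threshold are illustrative assumptions, not Meta's actual detection method.

```python
# Hypothetical sketch: flag posts whose likes come mostly from
# inside a suspected operator cluster. Names, data shapes, and the
# threshold are illustrative assumptions, not Meta's real pipeline.

def self_engagement_ratio(likers, cluster):
    """Fraction of a post's likes that come from cluster members."""
    if not likers:
        return 0.0
    inside = sum(1 for account in likers if account in cluster)
    return inside / len(likers)

def flag_coordinated_posts(posts, cluster, threshold=0.8):
    """Return IDs of posts where >= threshold of likes are intra-cluster."""
    return [
        post_id
        for post_id, likers in posts.items()
        if self_engagement_ratio(likers, cluster) >= threshold
    ]

# Toy usage with made-up account and post IDs:
cluster = {"acct_1", "acct_2", "acct_3"}
posts = {
    "post_a": ["acct_1", "acct_2", "acct_3"],  # purely self-liked
    "post_b": ["acct_4", "acct_7", "acct_9"],  # organic-looking
}
print(flag_coordinated_posts(posts, cluster))  # ['post_a']
```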

Furthermore, they took precautions to avoid detection, rotating digital fingerprints such as IP addresses and browser details.
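
Detection despite such rotation often rests on the observation that other fingerprint attributes tend to overlap even when IP addresses change. The sketch below links accounts by set similarity over hypothetical fingerprint attributes; the attributes and threshold are assumptions chosen for illustration, not any platform's real pipeline.

```python
# Hypothetical sketch: even when IP addresses rotate, other
# fingerprint attributes (user agent, timezone, screen size) may
# still overlap. Linking accounts by set similarity is one simple
# heuristic; the attributes and threshold here are illustrative.

def jaccard(a, b):
    """Set similarity: size of intersection over size of union."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def likely_same_operator(fp_a, fp_b, threshold=0.6):
    """Heuristic: link two accounts whose fingerprints mostly overlap."""
    return jaccard(fp_a, fp_b) >= threshold

# Toy fingerprints: the IP differs, but the rest of the profile matches.
fp_1 = {"ip:203.0.113.5", "ua:Chrome/96", "tz:UTC+8", "res:1920x1080"}
fp_2 = {"ip:198.51.100.7", "ua:Chrome/96", "tz:UTC+8", "res:1920x1080"}
print(likely_same_operator(fp_1, fp_2))  # True (jaccard = 3/5 = 0.6)
```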

These measures illustrate the depth of planning and execution involved in such information operations.

5. Countermeasures and Considerations.

One of the countermeasures employed by Meta was the creation of specialized teams to detect and combat such behavior. These teams investigate the actors involved, their motives, and the depth of the operation; based on those findings, content and accounts associated with the operation are removed.

However, this process addresses only surface-level symptoms. It does not tackle the root problem: the geopolitical motivations and implications of these operations.

There is also the challenge of keeping up with the increasingly complex nature of such campaigns, as attackers continually innovate and find new ways to exploit the platforms.

These are complex issues, and solving them will require not just technical solutions, but political and societal ones as well.
