By Rishi Iyengar
Source: CNN Business
WhatsApp has a new strategy for tackling misinformation during India’s election: crowdsourcing.
The company is offering its 200 million users in India a new tip line where they can send in messages, pictures or videos they want fact-checked. A “verification center” will respond to the user, indicating whether the information is true, false, misleading or disputed.
The Facebook-owned messaging app has developed the tip line, known as Checkpoint, in partnership with an Indian startup. It will be available in English and four Indian languages — Hindi, Telugu, Bengali and Malayalam.
WhatsApp is hoping to use the new service to broaden its fight against fake news during an election that’s viewed as a key test of whether social media platforms can prevent the spread of rumors and hoaxes.

“The challenge of viral misinformation requires more collaborative efforts and cannot be solved by any one organization alone,” the company said in a statement on Tuesday.
WhatsApp, Facebook (FB) and other social networks like Twitter (TWTR) have taken several steps to try to prevent their platforms from being used to spread misinformation during India’s election, the world’s biggest exercise in democracy. They’ve limited message forwarding, banned fake accounts, labeled political advertising and partnered with fact-checking websites.
But with 900 million people eligible to vote and more than 560 million internet users, the tech companies may be fighting a losing battle.
WhatsApp has a powerful platform in India. Sometimes too powerful.
Last year, a spate of lynchings triggered by viral hoax messages on its service put the company at the center of a debate about misinformation in the country.
Now it’s bracing for India’s upcoming national elections, the biggest in the world.
WhatsApp is deploying artificial intelligence to clean up its platform ahead of the elections, in which 900 million Indians are eligible to vote. It’s also warning India’s political parties against spreading politically motivated spam messages.
The Facebook (FB)-owned app is using AI tools to detect and ban accounts that spread “problematic content” through mass messaging, it said in a statement on Wednesday.
WhatsApp’s automated systems helped it ban more than 6 million accounts globally in the last three months. The systems monitor and flag suspicious behavior like bulk registrations of similar accounts and users that send a high volume of messages in a short amount of time.
“These efforts are particularly important during elections where certain groups may attempt to send messages at scale,” WhatsApp said.
The company has also warned Indian political parties that their accounts could be blocked if they try to abuse the platform during the campaign.
“We saw how parties tried to reach people over WhatsApp, and in some cases that involved attempting to use WhatsApp in a way that it was not intended to be used,” spokesperson Carl Woog told reporters in New Delhi, referring to a recent Indian state election.
“We have engaged with political parties to explain our firm view that WhatsApp is not a broadcast platform and is not a place to send messages at scale, and to explain to them that we will be banning accounts that engage in [suspicious] behavior,” he added.
WhatsApp’s reputation in India, its biggest global market, has been dented by the mob violence and the misinformation spread on its platform.
The company last year attempted to stem fake rumors by labeling messages that are forwarded rather than composed by the sender, and by imposing limits on the number of simultaneous chats a message can be forwarded to.
It has also tried to raise awareness about misinformation by taking out newspaper, radio and television ads, targeting the hundreds of millions of Indians discovering the internet for the first time. The campaign, called “Share Joy, Not Rumors,” will be rolled out to other countries, including Brazil and Indonesia.
There are limits to how far WhatsApp is willing to go to stamp out abuse. It has pushed back against Indian government demands to trace individual “harmful” messages, and hit out against proposed tech regulations that would require online platforms to take down “unlawful” content within 24 hours.
“The proposed changes are overbroad and are not consistent with the strong privacy protections that are important to people everywhere, not just in India but around the world,” Woog said.
“What is contemplated by the rules is not possible today, given the end-to-end encryption that we provide, and it would require us to re-architect WhatsApp, leading to a different product,” he added.