Attendees: Abhilash, Kritika, Neelam, Ritash, Shalini, Uzair, Aurora
We used the time to discuss ideas related to a few working groups and what each of us had to offer. We were able to finalize some doable project ideas.
Gendered Disinformation
We were able to identify very concrete ways to make progress on this. Meedan is working on gendered disinformation with the following end goals:
- Defining Gendered Disinformation
- Identifying what kinds of cases we should document
- These cases are largely underreported in mainstream media
- Better work is done by factcheckers and community media groups
- How can tech be used to understand gendered disinfo in South East Asia?
- ML can be used to understand trends of gendered disinformation and how people are impacted by it (see the sketch after this list)
- Sharing insights from this process with tech platforms and policy makers
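To make the ML goal above a little more concrete, here is a minimal sketch of one possible approach: embedding claim texts with a multilingual sentence-embedding model and clustering them to surface recurring themes. The claim texts, model choice and cluster count are placeholders, not decisions the group has made.

```python
# Minimal sketch: cluster multilingual claim texts to surface recurring
# themes in gendered disinformation. The claims below are invented
# placeholders; a real run would use the cases the group documents.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

claims = [
    "Morphed photo of a woman journalist shared with a false caption",
    "Old video recycled to accuse a woman politician of bribery",
    "Fake quote attributed to a female activist about the election",
    "Same morphed photo resurfacing with a new caption",
]

# Multilingual model so claims in different languages share one embedding space.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode(claims)

# Group similar claims; tracking cluster sizes over time shows trends.
labels = KMeans(n_clusters=2, random_state=0).fit_predict(embeddings)
for label, claim in sorted(zip(labels, claims)):
    print(label, claim)
```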
Available Help
- Kritika can help with documenting cases of gendered disinformation and policy-related things
- Aurora can help with ML work
- Tattle's Uli project is concerned with developing ML and tech for detecting online gender-based violence (OGBV).
- Abhilash
Next Steps
- Shalini will collate a list of resources on the work done on this in the Western context
- Ritash will make available the knowledge/material/work that already exists, is rooted in communities in India, and is already regional/localised.
- Abhilash, paired with an ML developer, can look at what kind of data is available and what tech needs to be built to serve the goals of this group
- Shalini and Sneha will let us know, after the workshops they are holding (Jul, Aug), what the logistics are around combining this group's effort with the work planned at Meedan.
Zombie Claims
Reminder: This is about the problem of previously factchecked misinformation reappearing in different avatars (modified claim, slightly modified image, in a different language, etc.)
While we were not able to settle on an acceptable solution to this problem, we did identify more granular sub-problems, and hopefully we can use these when discussing this further in subsequent calls.
- Platform failure: There seems to be a consensus that, at the end of it all, this problem will only ever be truly addressed when platforms like Meta implement some kind of mechanism that kicks in when a previously factchecked item reappears on their site
- Improvement in similarity matching: a solution that can detect similarity in claims even when the text is slightly modified or is in a different language (a minimal sketch follows below)
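As a rough illustration of what improved similarity matching could look like, here is a minimal sketch that compares a new claim against previously factchecked claims using multilingual sentence embeddings. The example claims, model choice and 0.8 threshold are assumptions for illustration only.

```python
# Minimal sketch: flag a possible zombie claim by comparing a new claim
# against previously factchecked ones with multilingual embeddings.
# The claims and the 0.8 threshold are placeholders for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Claims that have already been factchecked (invented examples).
factchecked = [
    "Drinking hot water cures COVID-19",
    "This video shows the 2019 cyclone, not the current one",
]
# A reworded version of the first claim, in Hindi (invented example).
new_claim = "गर्म पानी पीने से कोविड ठीक हो जाता है"

db_embeddings = model.encode(factchecked, convert_to_tensor=True)
query_embedding = model.encode(new_claim, convert_to_tensor=True)

# Cosine similarity between the new claim and each archived claim.
scores = util.cos_sim(query_embedding, db_embeddings)[0]
best = int(scores.argmax())
score = float(scores[best])
if score > 0.8:  # threshold would need tuning on real data
    print(f"Possible zombie claim, matches: {factchecked[best]} (score {score:.2f})")
else:
    print("No close match found")
```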
Some ideas came up with their own limitations. I'm listing them here just for the sake of completeness and to facilitate future discussions.
- Archiving media from social media along with its context so it can be used for future matching (see the sketch after this list)
- e.g. cyclone videos from the past will reappear with some false claims every time a new cyclone happens
- Auto-replies happen on chat apps but not on public platforms like Facebook/Twitter. Could they be enabled in comment sections?
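For the media-archiving idea above, here is a minimal sketch of how archived media could be matched against resurfacing copies using perceptual hashing. The file names and the distance cutoff are placeholders, and a real pipeline would also need video frame handling.

```python
# Minimal sketch: match an incoming image against archived media using
# perceptual hashing. File names and the distance cutoff are placeholders;
# real media would be archived together with its original context/factcheck.
import imagehash
from PIL import Image

# Perceptual hashes of previously archived (and factchecked) media.
archive = {
    "cyclone_2019.jpg": imagehash.phash(Image.open("cyclone_2019.jpg")),
    "flood_2020.jpg": imagehash.phash(Image.open("flood_2020.jpg")),
}

incoming = imagehash.phash(Image.open("incoming_post.jpg"))

for name, archived_hash in archive.items():
    distance = incoming - archived_hash  # Hamming distance between hashes
    if distance <= 10:  # small distance ≈ same image with minor edits
        print(f"{name} likely reused (distance {distance})")
```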
Since there is a sense that all efforts are in vain until platforms fix things on their end, the impulse to jump into a tech solution or project might not be very useful here. We will wait for something else to emerge.
Next Steps
- Abhilash will provide some examples of zombie claims. I'm hoping these concrete examples can inspire some thought or action amongst the various groups.
- Ritash suggested that one or two specific zombie claims could be the starting point of an inquiry into why these happen, and also a way to look at the larger underlying narratives behind them, which could then reveal ways to address them well.
Producing Content
Abhilash :
- We are experimenting with text, image and social cards but there is scope for increasing experimentation
- Tap into the viral video market
- Understanding platform algorithms and using them to amplify our work
Kritika:
- We've tried playing around with the format of text pieces. Our data says people aren't reading long-form content, so we've tried changing the tone and incorporating formats like Q&A. We also try to be mindful of tone (e.g. not using a scary tone in health-related factchecks)
- We did a year-long campaign with orgs like Khabar Lahariya around COVID vaccine-related misinformation. We used jingles and audio in Bhojpuri and Assamese. It connected well with the audience and we got positive feedback.
Ritash:
- Audio/Video content breaks literacy barriers and is more engaging.
- Pure audio content is appealing to people who want to preserve anonymity (eg: sex workers)
Next Steps
- I think there's a lot of scope for involving writers or content creators in our group and experimenting with content that uses the work done by factcheckers as material for (more) engaging pieces. Beyond just translating factcheck articles into video, I think it would be nice to experiment with platform features and trends to figure out how to make serious content like this engaging and able to go viral.
- We could try pairing some content creators with factcheckers and experimenting with creating content on long-standing misinformation and existing narratives. This might then also overlap with the concerns of the 'Zombie Claims' working group.
Next Call
We are continuing these calls and hope to finalize more projects, accounting for everyone's feedback, interests and skills along the way. Please join to make your ideas heard or to add to the existing ones.