
2 posts tagged with "zoom-calls"


· 4 min read
Denny George

Attendees: Aurora, Habeeb, Maya, Sneha, Ritash, Duha

Gendered Disinformation

Help is currently needed with:

  • Literature review (Ritash is on it as of now, but more help doesn't hurt)
  • Identifying cases of online gendered disinformation

Media Literacy

There's a lot of interest in exploring non-textual and interactive ways of translating media literacy content for various groups. This ties in with the discussion that happened yesterday as part of the Producing Content working group, and I think it can bring together the storytellers, developers and media literacy researchers in this group. It is also very aligned with the two games on media literacy that Tattle is working on. We also have Abhilash's team from Quint, who might be interested in this, so we can coordinate something on this here.

Habeeb suggested narrowing down the scope of these interventions to the upcoming elections. There is scope to pre-bunk things, since it's known that as the elections come closer, more election-related misinformation will come up. Can our interventions tie into training people and preventing the spread of election-related misinfo? Maya suggested partnering with groups who might already be working on election-adjacent issues like voter registration.

Habeeb shared that in his personal experience, first-time voters and the young undergraduate demographic respond best to these media literacy interventions. They are also likely to pass these skills on to others in the family and tend to be more tech savvy. This provides some food for thought in narrowing down the audience for any interventions we work on.

We also discussed how a lot of our media literacy interventions and interactive content might live on a new platform rather than on existing social media. Their dissemination itself then becomes a challenge. The conversation brought strong agreement that dissemination is itself a topic to tackle. This also aligns with the "Increasing reach of factchecking" working group.

MIA Data

Maya has created and catalogued a directory of many data sources, accessible here. The audience is researchers, but also anyone who is interested in fields like data journalism in the Indian context but doesn't know where to start.

Data Void

We were wondering if we should just buy a domain like "datavoid.in" to house a platform around the idea of "missing information" and the misinformation that it causes or can cause. There is an existing body of work around the idea of data voids that we can draw inspiration from.

Ritash mentioned how the media itself could use access to information on marginalized groups for better reporting. Maybe that's the first target userbase for this platform/publication. We could start off with the work that Ritash and Duha have done to sensitize and train people around their communities, but maybe this platform can eventually serve as our platform against all misinformation that arises, or may arise, due to missing information about a certain group or identity.

Next Steps:

  • Reach out to Akhil or the Factly team and find ways to combine forces with their work. How can Maya's directory be of use to them? How can our MIA Data group help in their work?
  • Come up with proposal(s) on how we can kickstart a focused effort on experimenting with new forms of content.
  • Identify demographic(s) we want to focus on for the media literacy interventions.
  • Identify ideas within the problem of media dissemination and find ways to solve it.
  • Sneha will organize an open call by Meedan to share the current status of the project with the members of this group who are part of the gendered disinformation working group.
  • Brainstorm what datavoid.in could look like.

· 5 min read
Denny George
Yash Budhwar

Attendees: Abhilash, Kritika, Neelam, Ritash, Shalini, Uzair, Aurora

We used the time to discuss ideas related to a few working groups and what we each had to offer. We were able to finalize some doable project ideas.

Gendered Disinformation

We were able to identify very concrete ways to make progress on this. Meedan is working on gendered disinformation with the following end goals:

  • Defining Gendered Disinformation
  • Identifying what kind of cases we should document
    • These cases are largely underreported in mainstream media
    • Better work is done by factcheckers and community media groups
  • How can tech be used to understand gendered disinfo in South East Asia?
    • ML can be used to understand trends of gendered disinformation and how people are impacted by it
  • Share insights from this process with tech platforms and policy makers

Available Help

  • Kritika can help with documenting the cases of gendered disinformation and policy-related things
  • Aurora can help with ML work
  • Tattle's Uli project is concerned with developing ML and tech for OGBV detection.
  • Abhilash

Next Steps

  • Shalini will collate a list of resources of the work done on this in the western context
  • Ritash will make available the knowledge/material/work that already exists, is rooted in communities in India, and is already regional/localised.
  • Abhilash paired with an ML developer can look at what kind of data is available and what tech needs to be built to serve the goals of this group
  • Shalini and Sneha will let us know, after the workshops they are holding (July, August), the logistics around combining this group's effort with the work planned at Meedan.

Zombie Claims

Reminder: this is about the problem of previously factchecked misinformation reappearing in different avatars (a modified claim, a slightly modified image, the same claim in a different language, etc.).

While we were not able to settle on an acceptable solution to this problem, we did identify more granular sub-problems, which will hopefully be useful when we discuss this further in subsequent calls.

  1. Platform failure: There seems to be a consensus that, at the end of it all, this problem will only ever be truly addressed when platforms like Meta implement some kind of mechanism that kicks in when a previously factchecked item reappears on their site.
  2. Improvement in similarity matching: a solution that can detect similarity between claims even when the text is slightly modified or in a different language (a rough sketch follows below).
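
To make the similarity-matching idea concrete, here is a minimal sketch of what matching a new claim against previously factchecked ones could look like, assuming multilingual sentence embeddings via the sentence-transformers library. The model name, the example claims and the 0.8 threshold are illustrative assumptions, not anything decided on the call.

```python
# Rough sketch: match a newly surfaced claim against previously factchecked
# claims using multilingual sentence embeddings, so that rewordings and
# translations still score as similar. Model name, example claims and the
# threshold are assumptions for illustration only.
from sentence_transformers import SentenceTransformer, util

# A multilingual model lets a Hindi re-post match an English factcheck.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Previously factchecked claims (in practice, pulled from a factcheck archive).
archived_claims = [
    "Old cyclone video shared as footage of the latest storm",
    "Photo of a flooded street falsely linked to the new cyclone",
]
archived_embeddings = model.encode(archived_claims, convert_to_tensor=True)

def find_matches(new_claim, threshold=0.8):
    """Return (archived claim, similarity) pairs that cross the threshold."""
    query = model.encode(new_claim, convert_to_tensor=True)
    scores = util.cos_sim(query, archived_embeddings)[0]
    return [
        (claim, float(score))
        for claim, score in zip(archived_claims, scores)
        if float(score) >= threshold
    ]

# A reworded (or translated) zombie claim should still surface the old factcheck.
print(find_matches("पुराने चक्रवात का वीडियो नए तूफान के नाम पर वायरल"))
```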

Some ideas came up with their own limitations. I'm listing them here just for the sake of completeness and to facilitate future discussions.

  • Archiving media from social media along with its context, so it can be used for future matching
    • e.g. cyclone videos from the past reappear with false claims every time a new cyclone happens
  • Auto-replies exist on chat apps but not on public platforms like Facebook/Twitter. Could they be enabled in comment sections?

Since there is a sense that all efforts are in vain until platforms fix things on their end, the impulse to jump into a tech solution or project might not be very useful here. We will wait for something else to emerge.

Next Steps:

  • Abhilash will provide some examples of zombie claims. I'm hoping that having these concrete examples can inspire some thoughts or action amongst the various groups.
  • Ritash suggested that one or two specific zombie claims could be the starting point of an inquiry into why these happen, and also a way to look at the larger underlying narratives behind them, which could then reveal ways to address them well.

Producing Content

Abhilash:

  • We are experimenting with text, image and social cards but there is scope for increasing experimentation
  • Tap into the viral video market
    • Understanding platform algorithms and using them for amplifying our work

Kritika:

  • We've tried playing around with the format of the text piece. Our data says people aren't reading long-form content, so we've tried changing the tone and incorporating formats like Q&A. We also try to be mindful of tone (e.g. not using a scary tone in health-related factchecks).
  • We did a year-long campaign with orgs like Khabar Lahariya around COVID vaccine-related misinformation, using jingles and audio in Bhojpuri and Assamese. It connected well with the audience and we got positive feedback.

Ritash:

  • Audio/Video content breaks literacy barriers and is more engaging.
  • Pure audio content is appealing to people who want to preserve anonymity (e.g. sex workers)

Next Steps

  • I think there's a lot of scope for bringing writers or content creators into our group and experimenting with content that uses the work done by factcheckers as material for (more) engaging content. Beyond just translating factcheck articles into video, I think it would be nice to experiment with platform features and trends to figure out how to make serious content like this engaging and help it go viral.
  • We could try pairing some content creators with factcheckers and experiment with creating content on long-standing misinformation and existing narratives. This might then also overlap with the concerns of the 'Zombie Claims' working group.

Next Call

We are continuing these calls and hope to finalize more projects and account for everyone's feedback, interests and skills along the way. Please join to make your ideas heard or add to the existing ones.