aster.cloud
  • /
  • Platforms
    • Public Cloud
    • On-Premise
    • Hybrid Cloud
    • Data
  • Architecture
    • Design
    • Solutions
    • Enterprise
  • Engineering
    • Automation
    • Software Engineering
    • Project Management
    • DevOps
  • Programming
    • Learning
  • Tools
  • About
  • Technology

If Big Tech Has The Will, Here Are Ways Research Shows Self-Regulation Can Work

  • root
  • February 27, 2021
  • 5 minute read

Governments and observers across the world have repeatedly raised concerns about the monopoly power of Big Tech companies and the role the companies play in disseminating misinformation. In response, Big Tech companies have tried to preempt regulations by regulating themselves.

Facebook’s announcement that its Oversight Board will decide whether former President Donald Trump can regain access to the account the company suspended, along with other high-profile moves by technology companies to address misinformation, has reignited the debate about what responsible self-regulation by technology companies should look like.


Partner with aster.cloud
for your next big idea.
Let us know here.



From our partners:

CITI.IO :: Business. Institutions. Society. Global Political Economy.
CYBERPOGO.COM :: For the Arts, Sciences, and Technology.
DADAHACKS.COM :: Parenting For The Rest Of Us.
ZEDISTA.COM :: Entertainment. Sports. Culture. Escape.
TAKUMAKU.COM :: For The Hearth And Home.
ASTER.CLOUD :: From The Cloud And Beyond.
LIWAIWAI.COM :: Intelligence, Inside and Outside.
GLOBALCLOUDPLATFORMS.COM :: For The World's Computing Needs.
FIREGULAMAN.COM :: For The Fire In The Belly Of The Coder.
ASTERCASTER.COM :: Supra Astra. Beyond The Stars.
BARTDAY.COM :: Prosperity For Everyone.

Research shows three key ways social media self-regulation can work: deprioritize engagement, label misinformation and crowdsource accuracy verification.

 

Deprioritize engagement

Social media platforms are built for constant interaction, and the companies design the algorithms that choose which posts people see to keep their users engaged. Studies show falsehoods spread faster than truth on social media, often because people find news that triggers emotions to be more engaging, which makes it more likely they will read, react to and share such news. This effect gets amplified through algorithmic recommendations. My own work shows that people engage with YouTube videos about diabetes more often when the videos are less informative.

Most Big Tech platforms also operate without the gatekeepers or filters that govern traditional sources of news and information. Their vast troves of fine-grained and detailed demographic data give them the ability to “microtarget” small numbers of users. This, combined with algorithmic amplification of content designed to boost engagement, can have a host of negative consequences for society, including digital voter suppression, the targeting of minorities for disinformation and discriminatory ad targeting.

Deprioritizing engagement in content recommendations should lessen the “rabbit hole” effect of social media, where people look at post after post, video after video. The algorithmic design of Big Tech platforms prioritizes new and microtargeted content, which fosters an almost unchecked proliferation of misinformation. Apple CEO Tim Cook recently summed up the problem: “At a moment of rampant disinformation and conspiracy theories juiced by algorithms, we can no longer turn a blind eye to a theory of technology that says all engagement is good engagement – the longer the better – and all with the goal of collecting as much data as possible.”
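As a rough illustration (not any platform’s actual algorithm), deprioritizing engagement can be pictured as shrinking the weight that predicted engagement carries in a feed-ranking score. The post fields, weights and scoring function below are all hypothetical:

```python
# Illustrative feed ranking: blend a source-trust signal with predicted
# engagement, and shrink the engagement weight to "deprioritize" it.
# All fields and weights here are hypothetical.

def rank_feed(posts, engagement_weight=0.2):
    """Sort posts by a score that favors source trust over predicted
    engagement when engagement_weight is small."""
    def score(post):
        return ((1 - engagement_weight) * post["trust"]
                + engagement_weight * post["predicted_engagement"])
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "viral-rumor",   "trust": 0.2, "predicted_engagement": 0.9},
    {"id": "verified-news", "trust": 0.9, "predicted_engagement": 0.4},
]

# With engagement dominant, the rumor ranks first;
# deprioritizing engagement surfaces the trusted source instead.
engagement_first = rank_feed(posts, engagement_weight=0.9)
quality_first = rank_feed(posts, engagement_weight=0.2)
```

The single tunable weight makes the trade-off explicit: the same inventory of posts produces a very different feed depending on how much the ranker rewards predicted engagement.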

Apple CEO Tim Cook criticized social media companies for prioritizing engagement over battling misinformation. 

Label misinformation

The technology companies could adopt a content-labeling system to identify whether a news item is verified or not. During the election, Twitter announced a civic integrity policy under which tweets it labeled as disputed or misleading would not be recommended by its algorithms. Research shows that labeling works. Studies suggest that applying labels to posts from state-controlled media outlets, such as the Russian media channel RT, could mitigate the effects of misinformation.

In one experiment, researchers hired anonymous temporary workers to rate the trustworthiness of posts, which were then displayed on Facebook with the crowdsourced labels attached. Crowd workers from across the political spectrum were able to distinguish mainstream sources from hyperpartisan or fake news sources, suggesting that crowds often do a good job of telling real news from fake.

Experiments also show that individuals with some exposure to news sources can generally distinguish between real and fake news. Other experiments found that reminding people to consider the accuracy of a post increased the likelihood that participants shared accurate posts rather than inaccurate ones.

In my own work, I have studied how combinations of human annotators, or content moderators, and artificial intelligence algorithms – what is referred to as human-in-the-loop intelligence – can be used to classify health care-related videos on YouTube. While it is not feasible to have medical professionals watch every single YouTube video on diabetes, it is possible to have a human-in-the-loop method of classification. For example, my colleagues and I recruited subject-matter experts to give feedback to AI algorithms, which resulted in better assessments of the content of posts and videos.
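As a rough illustration of the human-in-the-loop idea (not the actual system used in the research), classification can be set up as a confidence-based triage: the model labels what it is sure about and queues the rest for human experts. The toy classifier and the 0.8 threshold below are stand-in assumptions:

```python
# Sketch of a human-in-the-loop pipeline: an automated classifier labels
# content it is confident about and routes uncertain cases to human
# reviewers. The classifier and threshold are hypothetical stand-ins.

def triage(items, classify, confidence_threshold=0.8):
    """Split items into auto-labeled results and a human review queue."""
    auto_labeled, needs_review = [], []
    for item in items:
        label, confidence = classify(item)
        if confidence >= confidence_threshold:
            auto_labeled.append((item, label))
        else:
            needs_review.append(item)
    return auto_labeled, needs_review

# Toy classifier: longer posts get higher confidence, and posts that
# mention a study are called "informative". Purely illustrative.
def toy_classifier(text):
    confidence = min(len(text) / 40, 1.0)
    return ("informative" if "study" in text else "low-quality", confidence)

auto, queue = triage(
    ["A study of 1,000 patients found consistent glucose improvements.",
     "miracle cure!!"],
    toy_classifier,
)
```

The expert feedback described above would then flow back into the classifier, so that the review queue shrinks as the model improves.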


Tech companies have already employed such approaches. Facebook uses a combination of fact-checkers and similarity-detection algorithms to screen COVID-19-related misinformation. The algorithms detect duplications and close copies of misleading posts.
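The duplicate-detection step can be illustrated with a simple word-overlap (Jaccard) similarity. Production systems rely on far more robust fingerprinting such as hashing or embeddings, and the 0.7 threshold here is an arbitrary assumption:

```python
import re

# Minimal near-duplicate check: Jaccard similarity over normalized word
# sets. This only sketches the "detect close copies of flagged posts"
# idea; it is not any platform's actual detector.

def tokens(text):
    """Lowercase alphanumeric words in the text, as a set."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def jaccard(a, b):
    """Word-set overlap between two texts, from 0.0 to 1.0."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def is_close_copy(post, flagged_posts, threshold=0.7):
    """True if the post closely matches any already-flagged post."""
    return any(jaccard(post, f) >= threshold for f in flagged_posts)

flagged = ["this vaccine alters your dna permanently"]
# A re-capitalized, re-punctuated copy still matches the flagged post,
# while an unrelated post does not.
copy_detected = is_close_copy("This VACCINE alters your DNA, permanently!!", flagged)
```

Once one copy of a misleading post has been fact-checked, a similarity check like this lets the same verdict propagate to its near-duplicates automatically.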

 

Community-based enforcement

Twitter recently announced that it is launching a community forum, Birdwatch, to combat misinformation. While Twitter hasn’t provided details about how this will be implemented, a crowd-based verification mechanism that lets users up-vote or down-vote trending posts, combined with newsfeed algorithms that down-rank content from untrustworthy sources, could help reduce misinformation.

The basic idea is similar to Wikipedia’s content contribution system, where volunteers classify whether trending posts are real or fake. The challenge is preventing people from up-voting interesting and compelling but unverified content, particularly when there are deliberate efforts to manipulate voting. People can game the systems through coordinated action, as in the recent GameStop stock-pumping episode.
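The voting-and-down-ranking idea can be sketched as a simple aggregation rule. Everything here is an illustrative assumption rather than Birdwatch’s actual design: one vote per user (a minimal guard against the kind of coordinated gaming just described) and a five-rater minimum before any verdict is reached:

```python
# Sketch of crowd-based verification: each user gets one vote per post
# (a minimal guard against ballot stuffing), and a post is down-ranked
# only when enough distinct raters lean negative. Thresholds are
# illustrative assumptions.

def crowd_verdict(votes, min_raters=5):
    """votes: dict of user_id -> +1 (looks accurate) or -1 (looks false).
    Returns 'down-rank', 'keep', or 'needs-more-votes'."""
    if len(votes) < min_raters:
        return "needs-more-votes"
    net = sum(votes.values())
    return "down-rank" if net < 0 else "keep"

# Five distinct users, four of them flagging the post as false:
votes = {"u1": -1, "u2": -1, "u3": -1, "u4": -1, "u5": +1}
verdict = crowd_verdict(votes)
```

A real system would need stronger defenses, such as weighting raters by track record or requiring agreement across users with different viewpoints, but the one-vote-per-user rule already blunts the simplest form of coordinated manipulation.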

Another problem is how to motivate people to participate voluntarily in a collaborative effort such as crowdsourced fake news detection. Such efforts rely on volunteers annotating the accuracy of news articles, akin to Wikipedia, and also require the participation of third-party fact-checking organizations that can determine whether a piece of news is misleading.

However, a Wikipedia-style model needs robust mechanisms of community governance to ensure that individual volunteers follow consistent guidelines when they authenticate and fact-check posts. Wikipedia recently updated its community standards specifically to stem the spread of misinformation. Whether the big-tech companies will voluntarily allow their content moderation policies to be reviewed so transparently is another matter.

 


Big Tech’s responsibilities

Ultimately, social media companies could use a combination of deprioritizing engagement, partnering with news organizations, and AI and crowdsourced misinformation detection. These approaches are unlikely to work in isolation and will need to be designed to work together.

Coordinated actions facilitated by social media can disrupt society, from financial markets to politics. The technology platforms play an extraordinarily large role in shaping public opinion, which means they bear a responsibility to the public to govern themselves effectively.

Calls for government regulation of Big Tech are growing all over the world, including in the U.S., where a recent Gallup poll showed worsening attitudes toward technology companies and greater support for governmental regulation. Germany’s new laws on content moderation push greater responsibility on tech companies for the content shared on their platforms. A slew of regulations in Europe aimed at reducing the liability protections enjoyed by these platforms and proposed regulations in the U.S. aimed at restructuring internet laws will bring greater scrutiny to tech companies’ content moderation policies.

Some form of government regulation is likely in the U.S. Big Tech still has an opportunity to engage in responsible self-regulation – before the companies are compelled to act by lawmakers.

Anjana Susarla, Omura-Saxena Professor of Responsible AI, Michigan State University

This article is republished from The Conversation under a Creative Commons license.


