
How social media companies moderate their content

Credit: Unsplash/CC0 Public Domain

Content moderation is a delicate balancing act for social media platforms trying to grow their user base. Larger platforms like Facebook and Twitter, which earn most of their profits from advertising, cannot afford to lose eyeballs or engagement on their sites. Yet they are under tremendous public and political pressure to stop disinformation and remove harmful content. Meanwhile, smaller platforms that cater to particular ideologies would rather let free speech reign.

In their forthcoming paper, titled "Implications of Revenue Models and Technology for Content Moderation Strategies," Wharton marketing professors Pinar Yildirim and Z. John Zhang and Wharton doctoral candidate Yi Liu show how a social media firm's content moderation strategy is driven primarily by its revenue model. A platform under advertising is more likely to moderate its content than one under subscription, but when it does, it moderates less aggressively than the latter. In the following essay, the authors discuss their research and its implications for policymakers who want to regulate social media platforms.

Every day, millions of users around the world share their diverse viewpoints on social media platforms. Not all of these viewpoints are in harmony. Some are considered offensive, harmful, even extreme. With such diverse opinions, consumers are conflicted: On the one hand, they want to freely express their views on ongoing political, social, and economic issues on social media platforms, without interference and without being told that their views are inappropriate. On the other hand, when others express their views freely, they may consider some of that content inappropriate, insensitive, harmful, or extreme, and want it removed. Moreover, consumers do not always agree on which posts are offensive or which actions social media platforms should take. According to a survey by Morning Consult, for example, 80% of respondents want hate speech, such as posts that use slurs against a racial, religious, or gender group, removed; 73% want videos depicting violent crimes removed; and 66% want depictions of sexual acts removed.

Social media platforms face a challenge in serving as gatekeepers of the internet while at the same time being the center of self-expression and user-generated content. Content moderation efforts indeed consume significant resources at these companies. Facebook alone has committed to allocating 5% of the firm's revenue, $3.7 billion, to content moderation, an amount larger than Twitter's entire annual revenue. Yet neither consumers nor regulators seem satisfied with their efforts. In some form, companies must decide how to moderate content in order to protect individual users and their interests. Should sensitive content be removed from the internet? Or should free speech prevail, meaning anyone is free to post whatever they want, and it is up to consumers to opt in or out of this free-speech world? Taking down someone's content reduces that user's (and some other users') enjoyment of the site, while leaving it up may offend others. Therefore, in terms of a social media platform's economic incentives, content moderation can affect user engagement, which ultimately can affect the platform's profitability.
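As a quick sanity check on those budget figures, the two quoted numbers pin down Facebook's implied annual revenue; a minimal sketch (the 5% share and the $3.7 billion commitment come from the text above, while the implied total is our back-of-the-envelope inference):

```python
# Back-of-the-envelope check of the figures quoted above. The 5% share and
# the $3.7B moderation budget come from the article; the implied total
# revenue is our inference from those two numbers.
moderation_budget = 3.7e9   # USD committed to content moderation
revenue_share = 0.05        # stated fraction of firm revenue

implied_revenue = moderation_budget / revenue_share
print(f"Implied annual revenue: ${implied_revenue / 1e9:.0f}B")  # ~$74B
# For scale: per the article, this moderation budget alone exceeds
# Twitter's entire annual revenue.
```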

Moderating Content, Maximizing Profits

In our forthcoming paper, "Implications of Revenue Models and Technology for Content Moderation Strategies," we study how social media platforms driven by profits may or may not moderate online content. We take into account the considerable user heterogeneity and different revenue models that platforms may have, and we derive the platform's optimal content moderation strategy that maximizes revenue.

When different social media platforms moderate content, the most significant determinant is their bottom line. That bottom line may rest on advertising, that is, on delivering eyeballs to advertisers, or on the subscription fees that individual consumers pay. And there is a stark contrast between the two revenue models: while advertising relies on delivering many, many eyeballs to advertisers, subscription revenue depends on attracting paying customers. As a result, the content moderation policies that platforms adopt to retain consumers also look different under advertising versus subscription. Compared to platforms with subscription revenue, social media platforms running on advertising revenue are more likely to conduct content moderation, but with lax community standards, in order to retain a larger group of consumers. Indeed, subscription-based platforms like Gab and MeWe are less likely to do content moderation, claiming free speech for their users.
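To see the mechanism, consider a deliberately stylized toy model; this is our own illustration, not the model in the paper. Each user's content carries an "extremity" score drawn uniformly from [0,1], a community standard t removes every user whose score exceeds it, and each remaining user's utility falls with the average extremity of the content that stays up. The harm sensitivity gamma and the subscription price are arbitrary assumed values:

```python
import numpy as np

# Stylized toy, not the authors' model: extremity x ~ Uniform[0,1]; a
# standard t removes users with x > t; every retained user's utility is
# 1 - gamma * (average extremity of retained content).
gamma, price = 1.5, 0.6   # assumed harm sensitivity and subscription fee

def harm(t):
    return t / 2.0   # mean extremity among retained users (uniform case)

def utility(t):
    return 1.0 - gamma * harm(t)   # common utility of every retained user

def ad_revenue(t):
    # advertising monetizes every retained user's attention
    return t * max(utility(t), 0.0)

def sub_revenue(t):
    # users subscribe only if their utility covers the price
    return price * t if utility(t) >= price else 0.0

grid = np.linspace(0.0, 1.0, 1001)
t_ad = grid[np.argmax([ad_revenue(t) for t in grid])]
t_sub = grid[np.argmax([sub_revenue(t) for t in grid])]
print(f"ad-funded standard:    t* = {t_ad:.2f}  (laxer: more content allowed)")
print(f"subscription standard: t* = {t_sub:.2f}  (stricter)")
```

In this sketch the ad-funded platform tolerates more extremity because every retained user adds impressions, while the subscription platform cuts deeper, since it earns nothing from users whose utility fails to clear the price.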

A second important factor in content moderation is the quality of the content moderation technology. A significant volume of content moderation is carried out with the help of computers and artificial intelligence. When asked about content moderation, most executives at Facebook emphasize that they care a lot about it and allocate large amounts of firm revenue to the task. Why, then, do they claim the technology is still not sufficient?

We find that a self-interested social media platform does not always benefit from technological improvement. In particular, a platform whose main source of revenue is advertising may not benefit from better technology, because less accurate technology creates a porous community with more eyeballs. This finding suggests that content moderation on online platforms is not merely an outcome of their technological capabilities but also of their economic incentives.
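A minimal numeric sketch of that porousness effect, again our own toy rather than the paper's model: hold the announced standard t fixed (say, set under public pressure) and let the screening technology catch a violator only with probability q. If violators' posts attract extra engagement (beta) that partly offsets the harm (gamma) they impose on others, a leakier filter earns more ad revenue; beta, gamma, and t are all assumed values:

```python
import numpy as np

# Toy illustration, continuing the assumptions above: the standard t is
# fixed, but the filter catches a violator only with probability q.
gamma, beta, t = 1.0, 0.4, 0.5   # assumed harm, engagement boost, standard

def ad_revenue_per_user(q, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n)                  # content extremity per user
    caught = (x > t) & (rng.uniform(0.0, 1.0, n) < q)
    keep = ~caught
    avg_harm = x[keep].mean()                     # average extremity left up
    engagement = 1.0 - gamma * avg_harm + beta * x[keep]
    return np.clip(engagement, 0.0, None).sum() / n

for q in (0.5, 0.75, 1.0):
    print(f"accuracy q={q:.2f}: ad revenue per user ≈ {ad_revenue_per_user(q):.3f}")
```

In this toy, revenue falls as accuracy rises, because each violator who slips through adds monetizable eyeballs. Under subscription the same leak only depresses paying users' utility without adding revenue, which is consistent with the conclusion below that only subscription platforms share a social planner's interest in perfecting the technology.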

The findings of the paper overall cast doubt on whether social media platforms will always remedy their technological deficiencies on their own. We take our analysis one step further and compare the content moderation strategy of a self-interested platform with that of a social planner, a government institution or similar body that sets rules for the betterment of societal welfare. A social planner will use content moderation to prune any user who contributes negatively to the total utility of society, whereas a self-interested platform may keep some of these users if doing so serves its interests. Perhaps counter to lay beliefs, we find that a self-interested platform is more likely to conduct content moderation than a social planner, which indicates that individual platforms have stronger incentives to moderate their content than the government does.

However, stronger incentives are not necessarily the right incentives. When conducting content moderation, a platform under advertising will be less strict than a social planner, while a platform under subscription will be stricter than a social planner. Moreover, a social planner will always push for perfect technology when the cost of developing it is not an issue. Only a platform under subscription will have its interest aligned with the social planner's in perfecting the technology for content moderation. These conclusions demonstrate that there is room for government regulation, and where it is warranted, it needs to be differentiated with regard to the revenue model a platform adopts.
