Social networks wield great influence in crisis-hit areas, for example when violence erupts. Users' emotions are stoked by hate speech, aggressive images and fake news, and situations can escalate rapidly, as in the case of Ethiopia's Tigray conflict. So far, the countermeasures taken by platform operators have been inadequate. But how should we expect platforms to handle their content? What share of the responsibility lies with companies like Meta? And what does all this mean for the work of content moderators, which is so crucial to keeping these platforms in check?