The world must address the “grave global harm” caused by the proliferation of hate and lies in the digital space, United Nations Secretary-General António Guterres said at the launch of his report on information integrity on digital platforms.
Alarm over the potential threat posed by the rapid development of generative artificial intelligence must not obscure the damage already being done by digital technologies that enable the spread of online hate speech, mis- and disinformation, stressed the UN chief.
Digital platforms have brought many benefits, supporting communities in times of crisis and struggle, elevating marginalized voices and helping to mobilize global movements for racial justice and gender equality. They help the UN to engage people around the world in pursuit of peace, dignity and human rights on a healthy planet.
Yet these same digital platforms are being misused to subvert science and spread disinformation and hate to billions of people, fueling conflict, threatening democracy and human rights, and undermining public health and climate action.
This clear and present global threat demands coordinated international action to make the digital space safer and more inclusive while vigorously protecting human rights.
Existing responses have, to a large extent, been lacking. Some tech companies have done far too little, too late to prevent their platforms from contributing to the spread of violence and hatred, while Governments have sometimes resorted to drastic measures – including blanket internet shutdowns and bans – that lack any legal basis and infringe on human rights.
The policy brief puts forward a framework for a concerted global response through a Code of Conduct for information integrity on digital platforms, outlining potential guardrails to contain this runaway threat while safeguarding freedom of expression and information.
It includes the following proposals, to be built on in developing the Code of Conduct:
Governments, tech companies and other stakeholders should refrain from using, supporting, or amplifying disinformation and hate speech for any purpose.
Governments should guarantee a free, viable, independent, and plural media landscape, with strong protections for journalists.
Digital platforms should ensure safety and privacy by design in all products, alongside consistent application of policies and resources across countries and languages.
All stakeholders should take urgent and immediate measures to ensure that all AI applications are safe, secure, responsible and ethical, and comply with human rights obligations.
Tech companies should move away from business models that prioritize engagement above human rights, privacy, and safety.
Advertisers and digital platforms should ensure that ads are not placed next to online mis- or disinformation or hate speech, and that ads containing disinformation are not promoted.
Digital platforms should ensure meaningful transparency and allow researchers and academics access to data, while respecting user privacy.
The policy brief is the latest in a series based on proposals contained in Our Common Agenda, the Secretary-General’s 2021 report that outlines a vision for future global cooperation and multilateral action.
Together, the briefs are intended to inform discussions ahead of the SDG Summit in September, marking the midpoint towards achieving the Sustainable Development Goals, and the related Summit of the Future next year.
The policy brief is available at