AI-driven Personalization in Digital Media: Political and Societal Implications

The fallout from disinformation and online manipulation strategies has alerted Western democracies to the novel, nuanced vulnerabilities of our information society. This paper outlines the implications of the adoption of AI by legacy media, as well as by new media, focusing on personalization.

Research paper

Updated 18 August 2021

Sophia Ignatidou

Former Academy Associate, International Security Programme

The Reuters and other news apps seen on an iPhone, 29 January 2019. Photo: Getty Images.

Summary

  • Machine learning (ML)-driven personalization is fast expanding from social media to the wider information space, encompassing legacy media, multinational conglomerates and digital-native publishers; however, this is happening within a regulatory and oversight vacuum that needs to be addressed as a matter of urgency.
  • Mass-scale adoption of personalization in communication has serious implications for human rights, societal resilience and political security. Data protection, privacy and wrongful discrimination, as well as freedom of opinion and of expression, are some of the areas impacted by this technological transformation.
  • Artificial intelligence (AI) and its ML subset are novel technologies that demand novel ways of approaching oversight, monitoring and analysis. Policymakers, regulators, media professionals and engineers need to be able to conceptualize issues in an interdisciplinary way that is appropriate for sociotechnical systems.
  • Funding needs to be allocated to research into human–computer interaction in information environments, data infrastructure, technology market trends, and the broader impact of ML systems within the communication sector.
  • Although global, high-level ethical frameworks for AI are welcome, they are no substitute for domain- and context-specific codes of ethics. Legacy media and digital-native publishers need to overhaul their editorial codes to make them fit for purpose in a digital ecosystem transformed by ML. Journalistic principles need to be reformulated and refined for the current information environment so that they can effectively inform the ML models built for personalized communication.
  • Codes of ethics will not by themselves be enough, so current regulatory and legislative frameworks as they relate to media need to be reassessed. Media regulators need to develop their in-house capacity for thorough research into and monitoring of ML systems, and – when appropriate – impose proportionate sanctions on actors found to be employing such systems towards malign ends. Collaboration with data protection authorities, competition authorities and national electoral commissions is paramount for preserving the integrity of elections and of a political discourse grounded in democratic principles.
  • Upskilling senior managers and editorial teams is fundamental if media professionals are to be able to engage meaningfully and effectively with data scientists and AI engineers.