The UvA’s research priority area Human(e) AI has awarded seed funding to two research projects following its third annual call for funding. One project will investigate the conditions under which an autonomous agent should take the responsibility to act; the other will examine AI transparency in brand-based communication.

Responsible artificial agency: a logical perspective

When should an artificial agent intervene to resolve a dilemma? And when should it alert its user or a relevant authority instead?

Given the growing number of safety-critical applications of autonomous systems in areas such as medicine, engineering, surveillance, transportation and media, it is becoming increasingly urgent to develop rigorous tools for determining when it is responsible for an agent to act.

The aim of this project is to develop logics for reasoning about the conditions under which an autonomous agent should take the responsibility to act.

  • We will first use logic as a meta-analytical tool to analyse artificial systems and to formalise the main criteria for assessing whether an AI intervention is responsible.
  • Our medium-term aim is to apply this analysis to the development of fully formalised, decidable systems that AI can in principle use for internal reasoning about its own and others’ actions and their consequences.
  • The long-term goal is to provide tools for designing intelligent agents that can act responsibly, after correctly determining if and when their intervention is needed (a simplified sketch of such a decision rule follows this list).
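
To give a flavour of the kind of question these logics address, the following Python sketch shows one deliberately simplified decision rule for choosing between acting, alerting a human and refraining. Everything in it, from the field names to the threshold values to the decision procedure itself, is an invented illustration, not the formal systems the project will develop.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Decision(Enum):
        ACT = auto()      # intervene autonomously
        ALERT = auto()    # notify the user or a relevant authority
        REFRAIN = auto()  # do nothing for now

    @dataclass
    class Situation:
        harm_if_idle: float        # estimated harm if the agent does not act (0..1)
        harm_if_acting: float      # estimated harm the intervention itself may cause (0..1)
        certainty: float           # the agent's confidence in its own assessment (0..1)
        authority_reachable: bool  # can a human decision-maker be alerted in time?

    def responsible_choice(s: Situation,
                           harm_threshold: float = 0.5,
                           certainty_threshold: float = 0.8) -> Decision:
        """Toy rule: intervene only when inaction is clearly worse than action
        and the agent is sufficiently certain; otherwise defer to a human."""
        if s.harm_if_idle <= harm_threshold:
            return Decision.REFRAIN
        if s.certainty >= certainty_threshold and s.harm_if_acting < s.harm_if_idle:
            return Decision.ACT
        # Uncertain agent: hand the dilemma to a human if one can be reached in time.
        return Decision.ALERT if s.authority_reachable else Decision.ACT

    # Example: high risk if idle, but the agent doubts its own assessment.
    print(responsible_choice(Situation(0.9, 0.2, 0.6, True)))  # Decision.ALERT

A formal logic for responsible agency would replace these numeric thresholds with precise, decidable reasoning about actions, obligations and their consequences; the toy rule above only marks out the decision points such a logic must settle.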

Lead researchers

Dr Aybüke Özgün is an assistant professor at the UvA’s Institute for Logic, Language and Computation. The core of her research lies in formal epistemology, in particular in dynamic epistemic logic with a special focus on evidence-based knowledge and belief modelled on spatial/topological structures. Some of her other interests include logic and topology, epistemic learning theory, and belief revision.

Dr Ilaria Canavotto is a postdoctoral researcher at the UvA’s Institute for Logic, Language and Computation. Her current research mainly focuses on temporal logics of agency and deontic logics, in connection with the notions of causality, responsibility, and normative system.

Dr Alexandru Baltag is an associate professor at the UvA’s Institute for Logic, Language and Computation. He is known mostly for his work in logics for multi-agent information flow (in particular dynamic-epistemic logic) and their applications to communication, game theory, epistemology, social networks, belief dynamics etc. His other interests include non-wellfounded set theory, coalgebraic logic, formal learning theory, topological modal logic, the logical foundations of quantum mechanics and quantum computation.

Towards AI transparency in brand-based communication – evidence for better policymaking

This project addresses transparency concerns that arise from AI-driven communication in the field of trademarks and brands.

Brand messages are often generated with minimal or no human intervention and distributed via online platforms on the basis of consumer behavioural data. With regard to this practice, proposed new EU legislation seeks to empower consumers by ensuring access to information on the selection criteria (parameter transparency: ‘Why me?’) and the source of the communication (source transparency: ‘Who sent this?’). Before adopting these legal rules and potentially extending them to a broader spectrum of digital, virtual and augmented reality media environments, it is pivotal to understand whether these transparency disclosures would indeed be effective.
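
To make the two disclosure types concrete, here is a minimal Python sketch of what a transparency record attached to an AI-generated brand message might contain. The schema and all field names are invented for illustration; the proposed legislation prescribes what must be disclosed, not any particular data format.

    from dataclasses import dataclass, field

    @dataclass
    class TransparencyDisclosure:
        # Source transparency: 'Who sent this?'
        sender: str              # the brand or advertiser behind the message
        generated_by_ai: bool    # whether the message was machine-generated
        # Parameter transparency: 'Why me?'
        targeting_parameters: list[str] = field(default_factory=list)

    # Example record for a hypothetical targeted ad
    disclosure = TransparencyDisclosure(
        sender="ExampleBrand",
        generated_by_ai=True,
        targeting_parameters=["age 25-34", "recently viewed running shoes"],
    )
    print(disclosure)

Whether consumers notice, understand and act on such information when it accompanies a message is exactly the empirical question the project sets out to answer.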

Communication science research shows that transparency disclosures may have limited or even conflicting effects. Examining consumer responses to transparency disclosures in multiple media environments, the project seeks to clarify whether transparency information reaches consumers, leads to desirable effects on trust, and encourages consumers to seek additional information on alternative offers. In answering these questions, the project aims to impact the policy debate surrounding the proposed new transparency legislation at EU level. It will also provide a compass for the establishment of appropriate responsible AI legal standards in the field of brand-based communication.

Lead researchers

Prof. Martin Senftleben is a professor of Intellectual Property Law at the UvA’s Institute for Information Law. His research focuses on platform and AI regulation in the EU.

Prof. Guda van Noort is a professor of Persuasion & New Media Technologies at the UvA’s Amsterdam School of Communication Research. Her research focuses on consumer responses to emerging media technologies and their content.

Prof. Edith Smit is a professor of Persuasive Communication at the UvA’s Amsterdam School of Communication Research. Her research focuses on persuasion and empowerment in the domain of marketing and media.
