Jiaxuan Liu

Dr. Claudio Novelli: Navigating Ethical and Accountability Challenges of AI in Public Services

Updated: Oct 8

In May 2024, the Saint Pierre International Security Center (SPCIS) launched a series titled “Global Tech Policy at the Forefront: Conversations with Leading Experts.” This initiative explores the impact of emerging technologies such as Artificial Intelligence (AI), blockchain, biometrics, and robotics on global governance and transnational public policy. Featuring prominent figures from academia and industry, the interviews are published on SPCIS’s platforms, including WeChat, its website, and LinkedIn.

 

On August 25th, we had the privilege of interviewing Dr. Claudio Novelli, a Postdoctoral Research Fellow at the University of Bologna's Department of Legal Studies and an International Fellow at Yale University’s Digital Ethics Center. Dr. Novelli’s research spans legal philosophy, political philosophy, social ontology, digital ethics, economic analysis of law, theory of justice, and metaethics. He has produced influential publications on AI risk assessment, ethics, and regulation. This article summarizes the key messages from the interview, highlighting Dr. Novelli’s insights into the challenges posed by AI technologies, including regulatory risks in AI governance, accountability in AI services, and the European AI Act. The content has been reviewed and authorized for publication by Dr. Novelli.

 

Dr. Feng Naikang: Governments worldwide are increasingly using AI systems for public services. (a) What ethical and accountability challenges might arise from this practice? (b) Where do accountability gaps originate, and (c) how might these gaps hinder the deployment of AI for social good?

 

Dr. Claudio Novelli: There are several levels at which ethical and regulatory issues must be considered when implementing AI in public services. At the broadest level, governance issues arise concerning how competencies and responsibilities are distributed among institutional actors and the extent of their powers.

 

More specific challenges include traditional concerns such as privacy, transparency, cybersecurity, and liability. A particularly pressing issue is how and where data are collected. When a public entity conducts these activities, it is imperative that data gathering and processing adhere to the highest standards. Failure to do so could lead to significant harm to the public good, particularly in sectors like the administration of justice, where there is no competition with private actors. Trust is at the core of this dynamic; maintaining high standards in public AI deployments could make such systems more trustworthy and preferable to those in the private sector.

 

Dr. Feng Naikang: (a) How crucial and sufficient are transparency and explainability in building public trust in AI systems? (b) What steps can organizations take to enhance these aspects within their AI implementations?

 

Dr. Claudio Novelli: Transparency and explainability are foundational to building public trust in AI systems. While clear and consistent communication from private organizations is essential, true public trust is cultivated through broader societal practices. For example, enhancing AI literacy, potentially beginning in primary education, is crucial for integrating AI into cultural norms and managing public expectations and concerns. Organizations should focus on making AI systems more transparent and explainable by prioritizing user-centric design and engaging in open dialogues about AI's limitations and risks.

 

Dr. Feng Naikang: When we talk about holding algorithms accountable, what do we really mean? Accountability of whom or what (the human or the AI?), to whom, through which tools, and for what goal?

 

Dr. Claudio Novelli: Accountability in the context of algorithms refers to various practices and procedures aimed at clarifying and distributing responsibilities within a socio-technical system. As detailed in our article (link.springer.com/article/10.1007/s00146-023-01635-y), different tools and goals are involved. In essence, accountability is about ensuring that responsibilities are clearly allocated across the socio-technical organization and that transparent procedures are in place to make this arrangement understandable to external observers.

 

Dr. Feng Naikang: The EU is known for leading the current AI governance agenda. Can you give us a general picture of (a) how the EU's AI Act ensures accountability for AI systems? (b) What can global governments learn from the EU's best or failed practices in designing effective accountability regimes?

 

Dr. Claudio Novelli: This question cannot be fully addressed in a few lines. The EU’s AI Act (AIA) introduces a comprehensive set of measures and obligations for AI providers and deployers, many of which aim to raise accountability standards. These include rules on data quality and record-keeping, as well as requirements for deployers to maintain internal risk management systems. However, the concrete interpretation and implementation of these rules remain to be seen, particularly as specific standards will be established through EU implementing acts in the coming months. Global governments can learn from the EU's experience that technological shifts can profoundly impact regulatory frameworks, as the case of large language models (LLMs) shows. One key lesson is the importance of flexibility, perhaps favoring minimalist regulations, over adding complexity to an already intricate legal landscape. Another anticipated lesson from the EU's experience is that creating an excessive number of national or supranational authorities to implement and enforce the AIA may be ineffective and even counterproductive.

 

Dr. Feng Naikang: As a legal scholar interested in the legal-tech interface, (a) how do you think emerging technologies will transform both legal philosophy and practice? (b) What training is essential to prepare future legal professionals for the integration of AI in their field?

 

Dr. Claudio Novelli: Emerging technologies like AI impact legal philosophy in ways similar to their impact on political and social philosophy. The unique nature of certain human tasks in legal practice is being challenged, which may lead to the reform of some legal concepts and institutions. For instance, what does it mean to maintain an internal perspective in legal practice? Can AI ensure fair and impartial judgments in judicial proceedings? Moreover, digital technologies and AI offer new avenues for testing philosophical assumptions, thereby providing a more nuanced understanding of complex legal notions. To prepare future legal professionals for AI integration, it is crucial to provide training in computer science and statistics, equipping them with the technical skills needed to navigate and leverage AI tools effectively.

 

Dr. Feng Naikang: How important is an interdisciplinary approach in addressing the legal and ethical implications of AI? What insights can cognitive sciences provide in understanding and developing AI systems?

 

Dr. Claudio Novelli: There is an inherent tension in the demand for interdisciplinary approaches: as societies grow more complex, achieving strong interdisciplinary expertise becomes increasingly challenging, yet it is also more necessary than ever. Researchers need to be well-versed in multiple fields, but even more crucial is the ability to work within interdisciplinary research teams, which universities should actively promote. Unfortunately, this is not the norm in many academic systems, such as Italy's. Individual researchers should develop enough expertise in adjacent fields to facilitate meaningful collaboration with colleagues from diverse backgrounds. Answering the second question properly would require expertise beyond my own.

 

Dr. Feng Naikang: How do you envision AI influencing governance and democratic processes, such as the use of machine learning in the organization of political parties?

 

Dr. Claudio Novelli: The influence of AI on governance and democratic processes, particularly in the organization of political parties, is a topic of growing interest. Recent events, such as the use of AI to create fake images for political propaganda, illustrate the potential risks. However, AI also offers opportunities to enhance the internal democracy of political parties by promoting transparency and inclusivity in internal debates and decision-making processes. It could also improve a party's responsiveness to its electorate by enabling more efficient use of data from social media platforms. Ultimately, whether AI positively or negatively impacts political parties depends on the intentions and incentives of the party that uses it. It is crucial to foster a political culture that expects responsible AI use, incentivizing parties to employ AI in ways that persuade and inform rather than deceive.
