Jiaxuan Liu

Emmie Hine: From Code to Ethics — Insights on AI Governance

Updated: Oct 9

In May 2024, the Saint Pierre International Security Center (SPCIS) launched the “Global Tech Policy at the Forefront” series, featuring conversations with leading experts on how emerging technologies such as AI, blockchain, biometrics, and robotics affect global governance and public policy. These interviews are shared across SPCIS platforms, including WeChat, its website, and LinkedIn.

 

On September 26th, we spoke with Emmie Hine, a research associate at the Yale Digital Ethics Center and a PhD candidate in Law, Science, and Technology at the University of Bologna. Emmie’s research focuses on the ethics and governance of emerging technologies, with an emphasis on extended reality (XR) technologies and their human rights implications. She holds a master’s degree in Social Science of the Internet from the University of Oxford, where her work explored AI governance in the U.S. and China. Her expertise also extends to the EU: her team won the EU AI Act Grand Challenge at the University of St. Gallen. Emmie has published extensively on AI governance and policy across Europe, the U.S., and China, and writes the Ethical Reckoner newsletter. Before her PhD, she worked as a full-stack software engineer, and she holds a bachelor’s degree in Computer Science and Chinese from Williams College.

 

This article highlights Emmie’s insights on the ethical challenges of emerging technologies, AI governance, and the future of global tech policy. All content has been reviewed and authorized by Emmie Hine.

 

Naikang: Emmie, it's great to have you here! To start, could you tell us about what led you to research ethics and governance of emerging technologies, and how has your background in software engineering and social science shaped your perspective?

 

Emmie: Thanks for having me! I took a winding path to get here. I majored in computer science and Chinese at Williams College, and for a long time those two academic interests felt completely disparate. When I spent a year abroad at Oxford, I discovered the AI governance research happening there, and it was a lightbulb moment: I was determined to return to the Oxford Internet Institute for the MSc in Social Science of the Internet. When I did, I started examining AI governance in the US and China and ended up publishing several papers on that topic. Though I’ve moved more into the social sciences, I still leverage my technical background to incorporate quantitative as well as qualitative analysis into my interdisciplinary research. In my PhD program and as a Research Associate at the Yale Digital Ethics Center, I’ve expanded my focus to other technologies, like extended reality and machine unlearning, and I’m excited to keep shining a light on topics in tech ethics and governance. I also try to make my research accessible to non-academics, including through my newsletter, the Ethical Reckoner.

 

Naikang: How would you characterize the key features of AI policies in the U.S., China, and the EU, in aspects such as regulatory focus, tool choice, legislative agility, enforcement, and the roles of government and industry?

 

Emmie: Broadly, the EU has taken a horizontal approach, passing the AI Act as a single broad bill centered on fundamental rights protections. It establishes a risk-based framework that bans some AI applications and attaches requirements of varying stringency to others. China is taking a vertical approach: they’ve passed laws governing recommendation algorithms, deepfakes (or “deep synthesis” content), and generative AI, and are drafting laws on facial recognition and AI-generated content. The US’s approach can generously be called “fragmented.” As a result of the Biden administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the US has quite a few rules on the use of AI in the federal government, but very few that affect the private sector. Now states are starting to act, which risks creating a splintered regulatory landscape. Many state laws focus on deepfakes and other specific issues, though some states are trying to pass broader laws more akin to the AI Act.

 

Naikang: Given these differing approaches, how might a values-pluralistic human rights framework accommodate diverse AI governance models, and what challenges could arise in pursuing this approach?

 

Emmie: A values-pluralistic human rights framework recognizes and integrates diverse cultural, legal, and ethical values while still maintaining a baseline respect for core human rights principles. We’re never going to get the entire world to agree on exactly how AI should be used; for example, different countries have different ideas of what privacy means, so surveillance that’s acceptable in, say, China would not be acceptable in the US. Instead, we need flexible normative standards that different countries can adapt to their contexts, grounded in human rights treaties that are almost universally endorsed. Rather than requiring agreement on exact definitions of the values AI should promote, this approach establishes a floor that can then facilitate further dialogue on best practices. However, values and interpretations of human rights obviously vary significantly between countries, and so even this may prove too difficult.

 

Naikang: In what ways can machine unlearning go beyond the "right to be forgotten" to be integrated with other ethical strategies for trustworthy AI?

 

Emmie: The right to be forgotten is important for trustworthy AI because it gives people control over their data and assurance that their data has actually been wiped from a model (unlike output filtering, which is vulnerable to jailbreaking), but it’s not the be-all and end-all. Machine unlearning can also make models fairer and less damaging to the environment, improve robustness against adversarial attacks, and enhance transparency. It helps ensure that developers are accountable to regulators and the public by pushing companies to proactively comply with data quality and fairness standards and by giving people a redress mechanism for exercising their rights. However, it’s vital that machine unlearning itself be implemented in a trustworthy way.
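To make that concrete for technically minded readers, below is a minimal, purely illustrative sketch of one common family of approximate machine unlearning: gradient ascent on the records a person has asked to be forgotten. It is not drawn from Emmie’s work; the function name, hyperparameters, and the forget_loader it consumes are hypothetical stand-ins.

    import torch
    import torch.nn.functional as F

    def unlearn_by_gradient_ascent(model, forget_loader, lr=1e-4, steps=1):
        """Degrade the model's fit to a 'forget' set by maximizing its loss.

        Illustrative only: a production system would also fine-tune on
        retained data and audit that the forgotten records' influence
        is actually gone.
        """
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        model.train()
        for _ in range(steps):
            for inputs, targets in forget_loader:
                optimizer.zero_grad()
                # Negating the loss turns gradient descent into gradient
                # ascent on the forget set, pushing the model away from it.
                loss = -F.cross_entropy(model(inputs), targets)
                loss.backward()
                optimizer.step()
        return model

As Emmie notes, the unlearning step itself has to be trustworthy: a sketch like this only degrades the model’s fit to the forgotten data, so a real deployment would pair it with verification that the removal actually held.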

 

Naikang: What are the key safety and privacy risks of immersive extended reality (IXR), and why is regulating the experiences, rather than the technologies, important for improving regulatory effectiveness as XR evolves?

 

Emmie: IXR technologies force us to broaden the conversation around digital safety and privacy. The conversation around online safety often focuses on mental health risks, and IXR can absolutely impact mental health, but it also carries risks to physical health and social stability. Similarly, discussions of digital privacy often center on data protection, but IXR shows that we have to broaden that conversation to include decisional privacy (the right to make decisions without interference) and local privacy (the right to have a space where you can exist without observation). Regulating experiences rather than technologies is crucial because technology advances quickly, so tying a policy to a specific technology (as when early drafts of the AI Act tried to list every technique underlying AI) dooms it to obsolescence. Focusing regulation on experiences and impacts on people also lets us center fundamental rights. All in all, regulating experiences instead of technologies helps us create future-proof regulations that safeguard individual rights and focus on the human impact of technology.

 

Naikang: How do you perceive U.S.-China geopolitical tensions impacting international AI governance, and what strategies do you recommend to mitigate these tensions and foster collaboration?

 

Emmie: Tensions between the US and China are high and unlikely to diminish any time soon, regardless of who is elected in November, because anti-China sentiment is one of the few areas of bipartisan agreement. However, dialogue, not decoupling, is crucial to mitigating tensions. A second Trump administration would likely cut the US’s few remaining ties to China, including ending visas for Chinese AI researchers, which would limit the US’s talent pool and further ratchet up tensions. I believe that low-level dialogue is better for building agreement than high-level platitudes, and I hope that the developing conversation around global AI governance will prioritize venues for discussion with fewer eyes and correspondingly less political pressure.
