In May 2024, the Saint Pierre International Security Center (SPCIS) launched a series titled “Global Tech Policy at the Forefront: Conversations with Leading Experts.” This initiative explores the impact of emerging technologies like Artificial Intelligence (AI), blockchain, biometrics, and robotics on global governance and transnational public policy. Featuring prominent figures from academia and industry, the interviews are published on SPCIS’s platforms, including WeChat, their website, and LinkedIn.
On December 23rd, SPCIS had the privilege of hosting Dr. Syed AbuMusab, a Postdoctoral Associate at Yale University’s Digital Ethics Center and the University of Bologna’s Legal Studies Department. Dr. AbuMusab’s interdisciplinary research spans social agency, the ethics of AI, and the philosophical dimensions of technology, action, and societal values. During the interview, he shared profound insights into the ethical and societal challenges posed by AI, including its disruptive impact on the workforce, the integration of large language models (LLMs) in education, and the global implications of AI governance. This article highlights the key themes from the discussion, which have been reviewed and authorized by Dr. AbuMusab for publication.
Syed is a Postdoctoral Associate at Yale University’s Digital Ethics Center and the University of Bologna’s Legal Studies Department, specializing in the social dimensions of artificial intelligence and emerging technologies. His research spans the Philosophy and Ethics of Technology, Data Science, Tech Policy, and the Philosophy of Mind and Action, with a focus on AI’s social ontology, agential capacities, and ethical implications. He explores how systems like Large Language Models transform education, the judicial system, and interpersonal relationships, while also incorporating feminist and religious philosophy to address AI’s societal challenges from diverse perspectives. His work emphasizes aligning AI with human values globally, drawing on Eastern philosophical traditions. Beyond academia, Syed is passionate about hiking, wildlife advocacy, and integrating animal welfare into the study of technology’s societal impact.
SPCIS: Syed, thanks for joining me today! You've done fascinating work at the intersection of AI, ethics, and philosophy. What initially sparked your interest in exploring the ethical and philosophical aspects of emerging technologies like AI? And what part of this multidisciplinary journey excites you the most?
Dr. Syed AbuMusab: My interest in the Philosophy of AI began during my first philosophy of mind course about a decade ago. While my initial focus was primarily on theoretical aspects, my perspective expanded significantly when I started collaborating with John Symons at the University of Kansas, who drew me deeper into the ethical implications of AI development. This field particularly excites me because it naturally bridges multiple disciplines. I find this cross-pollination of ideas intellectually stimulating and crucial for the responsible development of AI technology. The field's inherent interdisciplinary nature serves dual purposes – it enriches philosophical discourse while helping ensure AI systems are developed with careful consideration of their societal impact.
SPCIS: In your article, you discuss AI's disruptive impact on the workforce and how it might replace jobs across social classes. How do you see AI reshaping the workforce, changing the skills people will need, and affecting the way wealth and opportunities are distributed in society?
Dr. Syed AbuMusab: The relationship between technology and labor has always been complex and transformative. AI follows this historical pattern but with two distinctive characteristics: its unprecedented pace of change and its unique impact on professional sectors. While previous technological shifts primarily affected manufacturing and manual labor, AI notably disrupts knowledge-based and professional occupations. This shift in impact across social classes is, I believe, partly why we're seeing heightened attention and concern about AI's effects on employment.
I think it's particularly telling that this wave of technological change has reached into traditionally insulated professional spheres. As I noted in my paper, the fact that academic and professional classes now feel directly affected by automation has amplified the discourse around AI's impact – though similar concerns weren't as prominently discussed when technological changes primarily affected manufacturing and service sectors. This asymmetry in response reveals something important about how we as a society perceive and respond to technological disruption based on which social classes are most affected.
SPCIS: There's a lot of debate around whether large language models (LLMs) should be integrated into classrooms or avoided. What do you think would be a balanced approach that allows LLMs to be useful learning tools while still fostering core educational goals, like critical thinking and independent thought?
Dr. Syed AbuMusab: Integrating LLMs into education is no longer a hypothetical scenario - these tools are already being used by students and professionals daily. I remain agnostic about their optimal level of classroom integration. In our diverse society, the best path forward is letting educational experts develop policies that adapt to different demographics and local needs rather than imposing one-size-fits-all solutions. This process is already happening across institutions, with educators working alongside developers, psychologists, sociologists, and philosophers to develop evidence-based approaches - some moving faster than others.
SPCIS: You've mentioned that LLMs could be seen as social agents. Could you explain why and how you envision a more nuanced, multi-dimensional approach to social agency, and how this could help us integrate AI into daily life in a more inclusive way?
Dr. Syed AbuMusab: AI systems are actively shaping and participating in our social world, and as philosophers, our role is to provide frameworks for understanding these novel entities. Drawing on ontological approaches from scholars like Dee Payton and Brian Epstein, I've argued that there's a compelling case for viewing AI systems as social agents. But this isn't just a philosophical exercise - the question of AI agency has practical implications, particularly in legal contexts. How we conceptualize these systems - autonomous agents or tools like hammers - fundamentally shapes how we attribute responsibility when things go wrong. This becomes crucial when determining liability and accountability for AI-caused harm in our society.
SPCIS: When it comes to implementing computing systems, what do you see as the biggest weaknesses in current theories, like those proposed by Primiero or Turner? How might a pluralistic approach offer a better solution to these challenges?
Dr. Syed AbuMusab: I don't think these theories are weak - they offer helpful perspectives on what it means to implement a computing system. My approach suggests that since implementation occurs across multiple levels of abstraction - from design to code to physical hardware - we might benefit from applying different implementation theories at each layer. This pluralistic framework allows us to better leverage the strengths of various theories to understand implementation at different levels of development.
SPCIS: Looking to the future, what measures do you believe are essential to ensure that AI benefits society as a whole, rather than deepening existing inequalities?
Dr. Syed AbuMusab: The challenge of ensuring equitable AI development is complex and perhaps more difficult than often acknowledged. Currently, AI development is concentrated in a handful of countries, giving them disproportionate influence over the technology's trajectory. This creates an inherent power imbalance in shaping AI's future. Moreover, we face a fundamental challenge – our world encompasses diverse and often contradictory value systems, making any single approach to value alignment problematic.
As representatives of countries driving these technological changes, we have a responsibility to work toward more inclusive development. While a globally coordinated effort that respects local value systems is ideal, we must be realistic about the practical challenges this entails. The gap between aspiration and implementation remains significant.