NSU seminar examines ethical limits of Anthropic’s AI model Claude

The School of Humanities and Social Sciences (SHSS) at North South University (NSU) organised a faculty seminar on February 5 examining philosophical and ethical questions surrounding Anthropic’s AI model Claude.

Titled “A Philosophical Critique of Anthropic’s ‘Constitution’ for its AI Model ‘Claude’”, the seminar brought together faculty members, academics, researchers, and students for a discussion on artificial intelligence, ethics, and philosophy, says a press release. The session was moderated by Md. Mehedi Hasan, Senior Lecturer in the Department of English and Modern Languages at NSU.

The keynote presentation was delivered by Prof. Dr. Norman Kenneth Swazo, Director of the Office of Research and Professor of Philosophy in the Department of History and Philosophy at NSU. He examined Anthropic’s “constitutional” approach to AI governance, focusing on claims that Claude follows ethical principles related to safety, privacy, and accuracy.

Prof. Swazo questioned whether constitution-based governance can meaningfully support claims of ethical reasoning, moral judgment, or consciousness in artificial systems, particularly given their lack of lived experience and contextual awareness. He argued that while AI systems can simulate complex behaviour, this does not amount to genuine consciousness.

During the question-and-answer session, participants asked whether constitutionally guided AI systems could ever possess consciousness, given that they rely on pattern recognition rather than understanding. Responding, Prof. Swazo expressed scepticism, saying that simulation alone does not constitute consciousness. Citing neuroscientist Anil Seth, he said that “simulation is not instantiation.”

Other questions focused on the limits of machine learning compared with human moral fallibility, and on whether society places excessive ethical responsibility on AI systems. Addressing concerns about responsibility when individuals are influenced by AI-generated guidance, Prof. Swazo distinguished between AI-induced and AI-associated psychosis, suggesting that such cases are better examined through the lens of psychopathology than by attributing actions directly to artificial systems. He said Claude is designed to follow broad ethical guidelines and operate within built-in constraints, rather than to exercise moral autonomy.

Discussion also covered copyright and access to knowledge, including whether AI systems trained on large volumes of publicly available material might benefit students who lack access to certain academic resources. Prof. Swazo said Claude’s knowledge base includes millions of books and other legally available materials, while noting that Anthropic has settled lawsuits related to copyright violations. He added that ethical tensions remain in debates over access and fairness.

Another question addressed how the absence of consciousness in AI models could be assessed. Prof. Swazo acknowledged that it is difficult to prove a negative, but argued that observable behaviour shows that the kind of “knowledge” held by systems like Claude is insufficient to replicate the full complexity of human thought.

Following the lecture, Md. Rizwanul Islam, Professor of Law and Dean of the School of Humanities and Social Sciences at NSU, said ethical frameworks embedded in AI systems may reflect cultural and ideological biases shaped by their socio-cultural origins. He stressed that the key issue is not only how such technologies are designed, but how societies critically respond to and engage with them.