With the growing use of Large Language Models (LLMs) in consulting, their role in moral decision-making has become prominent. However, existing research predominantly treats AI as an independent "moral agent" under the "Human-AI Alignment" paradigm. In this study, we propose that AI should instead serve as a "moral assistant", facilitating users' moral growth through the "Art of Midwifery" rather than substituting for human judgment. We endowed LLMs with distinct persona archetypes and conducted dialogues across six moral scenarios. Findings reveal that while the Virtue Exemplar excelled overall, optimal performance was context-dependent: the Guardian Angel performed best in bioethical crises requiring emotional support, whereas the Socratic persona better elicited reflection in existential dilemmas. We introduce "Constructive Divergence", arguing that AI should offer alternative perspectives at critical moments rather than blindly accommodate users, transcending traditional alignment paradigms.
Brain foundation models bring the foundation-model paradigm to neuroscience. Like language and image foundation models, they are general-purpose AI systems pretrained on large-scale datasets that adapt readily to downstream tasks. Unlike text- and image-based models, however, they train on brain data: large datasets of EEG, fMRI, and other neural data types historically collected within tightly governed clinical and research settings. This paper contends that training foundation models on neural data opens new normative territory. Neural data carry stronger expectations of, and claims to, protection than text or images, given their body-derived nature and historical governance within clinical and research settings. Yet the foundation-model paradigm subjects them to practices of large-scale repurposing, cross-context stitching, and open-ended downstream application. Furthermore, these practices are now accessible to a much broader range of actors, including commercial developers, against a backdrop of fragmented and unclear governance. To map this territory, we first describe brain foundation models' technical foundations and training-data ecosystem. We then draw on AI ethics to examine the normative questions this ecosystem raises.
The use of large language models (LLMs) in bioethical, scientific, and medical writing remains controversial. While there is broad agreement in some circles that LLMs cannot count as authors, there is no consensus about whether and how humans using LLMs can count as authors. In many fields, authorship is distributed among large teams of researchers, some of whom, including paradigmatic senior authors who guide and determine the scope of a project and ultimately vouch for its integrity, may not write a single word. In this paper, we argue that LLM use (under specific conditions) is analogous to a form of senior authorship. On this view, the use of LLMs, even to generate complete drafts of research papers, can be considered a legitimate form of authorship according to the accepted criteria in many fields. We conclude that either such use should be recognized as legitimate, or current criteria for authorship require fundamental revision. AI use declaration: GPT-5 was used to help format Box 1. AI was not used for any other part of the preparation or writing of this manuscript.
In this study we investigate how hierarchical structures within the Roman Catholic Church shape the ideological orientation of its leadership. The full episcopal genealogy dataset comprises over 35,000 bishops, each typically consecrated by one principal consecrator and two co-consecrators, forming a dense and historically continuous directed network of episcopal lineage. Within this broader structure, we focus on a dataset of 245 living cardinals to examine whether genealogical proximity correlates with doctrinal alignment on a broad set of theological and sociopolitical issues. We identify motifs that capture recurring patterns of lineage, such as shared consecrators or co-consecrators. In parallel, we apply natural language processing techniques to extract each cardinal's publicly stated positions on ten salient topics, including LGBTQIA+ rights, women's roles in the Church, liturgy, bioethics, priestly celibacy, and migration. Our results show that cardinals linked by specific genealogical motifs, particularly those who share the same principal consecrator, are significantly more likely to exhibit ideological similarity. We find that the influence of Pope John Paul II persists.
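The motif analysis this abstract describes can be sketched in a few lines of plain Python. This is a toy illustration under invented data, not the authors' pipeline: the consecrator assignments and stance vectors below are made up, and the "motif" checked is just the shared-principal-consecrator pair. Pairs sharing a consecrator are compared to all other pairs via cosine similarity of their stance vectors.

```python
import math
from collections import defaultdict

# Hypothetical data: each cardinal's principal consecrator and a stance
# vector (one score in [-1, 1] per topic, e.g. liturgy, bioethics,
# migration). All names and numbers are invented for illustration.
consecrated_by = {"A": "JP2", "B": "JP2", "C": "B16", "D": "B16", "E": "JP2"}
stances = {
    "A": [0.8, 0.6, -0.2],
    "B": [0.7, 0.5, -0.1],
    "C": [-0.6, -0.4, 0.3],
    "D": [-0.5, -0.7, 0.4],
    "E": [0.9, 0.4, -0.3],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Motif: pairs of cardinals sharing the same principal consecrator.
by_consecrator = defaultdict(list)
for cardinal, consecrator in consecrated_by.items():
    by_consecrator[consecrator].append(cardinal)

shared = [(a, b) for group in by_consecrator.values()
          for i, a in enumerate(group) for b in group[i + 1:]]
all_cards = sorted(stances)
unshared = [(a, b) for i, a in enumerate(all_cards) for b in all_cards[i + 1:]
            if consecrated_by[a] != consecrated_by[b]]

def mean_pair_similarity(pairs):
    return sum(cosine(stances[a], stances[b]) for a, b in pairs) / len(pairs)

# With this toy data, same-consecrator pairs are more ideologically similar.
print(mean_pair_similarity(shared) > mean_pair_similarity(unshared))  # True
```

The real study works over a directed lineage network and NLP-extracted positions; the sketch only shows the shape of the comparison, contrasting within-motif similarity against the background pairwise similarity.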
Applied ethics pervades most domains and demands careful deliberation owing to its philosophical nature. Varying views often lead to conflicting courses of action, and ethical dilemmas become challenging to resolve. Although many factors contribute to such decisions, the major driving forces can be discretized and thus simplified to provide an indicative answer. Knowledge representation and reasoning offer a way to explicitly translate abstract ethical concepts into applicable principles within the context of an event. To this end, we propose ApplE, an Applied Ethics ontology that captures philosophical theory and event context to holistically describe the morality of an action. The development process adheres to a modified version of the Simplified Agile Methodology for Ontology Development (SAMOD) and follows standard design and publication practices. Using ApplE, we model a use case from the bioethics domain that demonstrates the ontology's social and scientific value. Beyond ontological reasoning and quality checks, ApplE is also evaluated using SAMOD's three-fold testing process. ApplE follows the FAIR principles and aims to be a viable resource for the applied ethics community.
As artificial intelligence (AI) becomes embedded in healthcare, trust in medical decision-making is changing fast. Nowhere is this shift more visible than in radiology, where AI tools are increasingly embedded across the imaging workflow - from scheduling and acquisition to interpretation, reporting, and communication with referrers and patients. This opinion paper argues that trust in AI isn't a simple transfer from humans to machines - it is a dynamic, evolving relationship that must be built and maintained. Rather than debating whether AI belongs in medicine, it asks: what kind of trust must AI earn, and how? Drawing from philosophy, bioethics, and system design, it explores the key differences between human trust and machine reliability - emphasizing transparency, accountability, and alignment with the values of good care. It argues that trust in AI should not be built on mimicking empathy or intuition, but on thoughtful design, responsible deployment, and clear moral responsibility. The goal is a balanced view - one that avoids blind optimism and reflexive fear. Trust in AI must be treated not as a given, but as something to be earned over time.
This paper introduces ADEPT, a system that uses Large Language Model (LLM) personas to simulate multi-perspective ethical debates. ADEPT assembles panels of 'AI personas', each embodying a distinct ethical framework or stakeholder perspective (such as a deontologist, consequentialist, or disability rights advocate), to deliberate on complex moral issues. Its application is demonstrated through a scenario about prioritizing patients for a limited number of ventilators, inspired by real-world challenges in allocating scarce medical resources. Two debates, each with six LLM personas, were conducted; they differed only in the moral viewpoints represented: one included a Catholic bioethicist and a care theorist, while the other substituted a rule-based Kantian philosopher and a legal adviser. Both panels ultimately favoured the same policy -- a lottery system weighted for clinical need and fairness, crucially avoiding the withdrawal of ventilators for reallocation. However, each panel reached that conclusion through different lines of argument, and their voting coalitions shifted once duty- and rights-based voices were present. Examination of the debate transcripts shows that the altered membership reshaped both the arguments raised and the coalitions that formed.
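The policy both panels converged on, a lottery weighted for clinical need with no withdrawal for reallocation, can be sketched mechanically. This is a minimal illustration under assumed inputs: the patient fields, weights, and fairness floor are invented for the example and are not taken from ADEPT.

```python
import random

# Hypothetical patients: clinical_need in [0, 1] (higher = more likely to
# benefit). A fairness floor keeps every patient's chance above zero.
patients = [
    {"id": "P1", "clinical_need": 0.9},
    {"id": "P2", "clinical_need": 0.6},
    {"id": "P3", "clinical_need": 0.3},
    {"id": "P4", "clinical_need": 0.8},
]

def weighted_lottery(patients, ventilators, fairness_floor=0.2, seed=None):
    """Allocate ventilators by a lottery weighted for clinical need.

    Once allocated, a ventilator is never withdrawn for reallocation:
    each draw only removes the winner from the remaining pool.
    """
    rng = random.Random(seed)
    pool = list(patients)
    allocated = []
    while pool and len(allocated) < ventilators:
        weights = [fairness_floor + p["clinical_need"] for p in pool]
        winner = rng.choices(pool, weights=weights, k=1)[0]
        allocated.append(winner["id"])
        pool.remove(winner)  # winners keep their ventilator permanently
    return allocated

print(weighted_lottery(patients, ventilators=2, seed=42))
```

The fairness floor is the one design choice worth noting: it encodes the panels' fairness concern by guaranteeing even low-need patients a nonzero chance, while higher clinical need still raises the odds of being drawn.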
As global discourse on AI regulation gains momentum, this paper focuses on delineating the impact of ML on autonomy and on fostering awareness of it. Respect for autonomy is a basic principle in bioethics that establishes persons as decision-makers. While the concept of autonomy in the context of ML appears in several European normative publications, it remains a theoretical concept that has yet to be widely accepted in ML practice. Our contribution is to bridge the theoretical and practical gap by encouraging the practical application of autonomy in decision-making within ML practice and by identifying the conditioning factors that currently prevent it. Consequently, we focus on the different stages of the ML pipeline to identify the potential effects on ML end-users' autonomy. To improve its practical utility, we propose a related question for each detected impact, offering guidance for identifying possible focus points to respect ML end-users' autonomy in decision-making.
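The stage-by-stage mapping the abstract describes, one guiding question per detected impact, can be mocked up as a simple checklist structure. The stages, impacts, and questions below are illustrative paraphrases invented for the example, not the paper's actual list.

```python
# Illustrative mapping: ML pipeline stage -> (autonomy impact, guiding
# question). Entries are examples, not taken from the paper.
AUTONOMY_CHECKLIST = {
    "data collection": (
        "Users may be profiled from data gathered without meaningful consent.",
        "Could end-users understand and control what data was collected?",
    ),
    "model training": (
        "Optimizing for engagement can steer users rather than inform them.",
        "Does the training objective reward informing users or nudging them?",
    ),
    "deployment": (
        "Opaque recommendations can displace the user's own judgment.",
        "Can end-users see why a suggestion was made and opt out of it?",
    ),
}

def review(stage):
    """Render one checklist row for an autonomy review session."""
    impact, question = AUTONOMY_CHECKLIST[stage]
    return f"{stage}: {impact} -> {question}"

for stage in AUTONOMY_CHECKLIST:
    print(review(stage))
```

The point of the structure is that each identified impact travels with its question, so a review of any pipeline stage surfaces a concrete prompt rather than an abstract principle.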
This paper introduces a relational perspective on ethics within the context of Feminist Digital Civics and community-led design. Ethics work in HCI has primarily focused on prescriptive machine ethics and bioethics principles rather than people. In response, we advocate for a community-led, processual approach to ethics, acknowledging power dynamics and local contexts. We thus propose a multidimensional adaptive model for ethics in HCI design, integrating an intersectional feminist ethical lens. This framework embraces feminist epistemologies, methods, and methodologies, fostering a reflexive practice. By weaving together situated knowledges, standpoint theory, intersectionality, participatory methods, and care ethics, our approach offers a holistic foundation for ethics in HCI, aiming to advance community-led practices and enrich the discourse surrounding ethics within this field.
In this paper, we conduct an empirical analysis of how large language models (LLMs), specifically GPT-4, interpret constitutional principles in complex decision-making scenarios. We examine rulings from the Italian Constitutional Court on bioethics issues that involve trade-offs between competing values and compare model-generated legal arguments on these issues to those presented by the State, the Court, and the applicants. Our results indicate that GPT-4 consistently aligns more closely with progressive interpretations of the Constitution, often overlooking competing values and mirroring the applicants' views rather than the more conservative perspectives of the State or the Court's moderate positions. These experiments reveal a distinct tendency of GPT-4 to favor progressive legal interpretations, underscoring the influence of underlying data biases. We therefore stress the importance of testing alignment in real-world scenarios and of considering the implications of deploying LLMs in decision-making processes.
The 4th Industrial Revolution is the culmination of the digital age. Nowadays, technologies such as robotics, nanotechnology, genetics, and artificial intelligence promise to transform our world and the way we live. Artificial Intelligence Ethics and Safety is an emerging research field that has been gaining popularity in recent years. Several private, public, and non-governmental organizations have published guidelines proposing ethical principles for regulating the use and development of autonomous intelligent systems. Meta-analyses of the AI Ethics research field point to convergence on certain principles that supposedly govern the AI industry. However, little is known about the effectiveness of this form of ethics. In this paper, we conduct a critical analysis of the current state of AI Ethics and suggest that this form of governance, based on principled ethical guidelines, is not sufficient to govern the AI industry and its developers. We believe that drastic changes are necessary, both in the training processes of professionals in the fields related to the development of software and intelligent systems and in the increased regulation of these professionals and their products.
A central difficulty in astrobiology is the precise definition of what life is. All known living beings have a cellular structure, and without a broader concept of life the search for extraterrestrial life is restricted to extraterrestrial cells. Earth is an astronomical rarity, because it is difficult for a planet to sustain liquid water on its surface. Two antagonistic bioethical principles arise: planetary protection and terraforming. Planetary protection is grounded in the fear of interplanetary cross-infection and of possible ecological damage caused by alien living beings. Terraforming is the intention of modifying the environmental conditions of neighbouring planets so that human colonisation becomes possible. The synthesis of this antagonism is ecopoiesis, the creation of new ecosystems on other planets. Since all multicellular biodiversity requires oxygen to survive, only extremophile microorganisms could survive on other planets. A meteorite-like event could therefore be simulated by carrying portions of terrestrial permafrost, ocean, or soil to other planets, so that if even a single species could grow, a new ecosystem could emerge.
For decades, psychologists have debated whether the human mind can be explained by one unified theory or must be broken into separate parts like memory and attention. A recent AI model called Centaur seemed to offer a breakthrough, claiming it could mimic human thinking across 160 different cognitive tasks. But new research is challenging that bold claim.
Chinese cars are already full of screens; now China wants them packed with AI as well.
AI-powered personas are becoming so realistic that they can infiltrate online communities and subtly steer public opinion. Unlike traditional bots, they adapt, coordinate, and refine their messaging at massive scale, creating a false sense of consensus. Early warning signs—like deepfakes and fake news networks—have already appeared in global elections.
A team at King’s College London has created a powerful new aluminum compound capable of doing the work of expensive rare metals. Its unique triangular structure gives it remarkable stability and reactivity, allowing it to drive chemical reactions in ways never seen before. The discovery could lead to greener and far more affordable industrial processes.
"We had serious inbound attempts to the cosmodrome that day."
A new kind of memory device may finally solve the problem of overheating and battery drain in electronics. By shrinking components to an extreme scale and redesigning their structure, researchers found a way to reduce energy loss instead of increasing it. The result is a tiny memory unit that improves as it gets smaller—something once thought impossible.
A massive cosmic milestone has just been reached: scientists have completed the largest high-resolution 3D map of the universe ever created. Built using data from over 47 million galaxies and quasars, the map could unlock new clues about dark energy—the mysterious force driving the universe’s expansion. The milestone was achieved despite setbacks like wildfire disruptions.
Scientists have created tiny “optical tornadoes”—swirling beams of light that twist like miniature whirlwinds—using a surprisingly simple setup based on liquid crystals. Instead of relying on complex nanotechnology, the team used self-organizing structures called torons to trap and manipulate light, causing it to spiral and rotate in intricate patterns.