
A New Tool Designed to Assess AI Ethics in Medicine Developed at HSE University

A team of researchers at the HSE AI Research Centre has created an index to evaluate the ethical standards of artificial intelligence (AI) systems used in medicine. This tool is designed to minimise potential risks and promote safer development and implementation of AI technologies in medical practice.  

The rapid expansion of AI technologies across various areas of life, including medicine, has brought about new risks that go beyond information security, economic, or social concerns and extend into ethical challenges as well. Current standards and regulatory frameworks do not adequately address ethical considerations, making it essential to develop a specialised tool for evaluating AI systems from an ethical standpoint.

The team behind the 'Ethical Review in the Field of AI' project at the HSE AI Research Centre carried out comprehensive work in two phases: theoretical and practical. First, the researchers conducted a thorough review of numerous domestic and international documents to identify and define the key principles of professional medical ethics: autonomy, beneficence, justice, non-maleficence, and due care. Subsequently, a qualitative field study using in-depth semi-structured interviews was conducted among medical professionals and AI developers, which allowed the team to refine and update the initial principles and to add new ones.

Based on the study results, the researchers developed the Index of AI Systems Ethics in Medicine: a chatbot that allows for 24/7 self-assessment and provides instant feedback from the index developers. The assessment methodology includes a test with closed-ended questions designed to evaluate the awareness of both medical AI developers and AI system operators regarding the ethical risks associated with the development, implementation, and use of AI systems for medical purposes.
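To illustrate the general shape of such a self-assessment, the sketch below scores a closed-ended questionnaire against the five ethical principles named in the study and flags principles where awareness appears low. This is a minimal hypothetical example: the question tagging, scoring rule, and awareness threshold are illustrative assumptions, not the actual HSE index methodology.

```python
# Hypothetical sketch of scoring a closed-ended ethics self-assessment.
# The principles come from the article; the scoring scheme and the
# 0.8 threshold are illustrative assumptions.

PRINCIPLES = ["autonomy", "beneficence", "justice", "non-maleficence", "due care"]

def score_assessment(answers):
    """answers: list of (principle, is_aware) pairs, where is_aware is a bool
    indicating a risk-aware response. Returns per-principle awareness ratios
    (None for principles with no questions answered)."""
    totals = {p: 0 for p in PRINCIPLES}
    aware = {p: 0 for p in PRINCIPLES}
    for principle, is_aware in answers:
        totals[principle] += 1
        if is_aware:
            aware[principle] += 1
    return {p: (aware[p] / totals[p] if totals[p] else None) for p in PRINCIPLES}

def feedback(scores, threshold=0.8):
    """Instant feedback: list the principles whose awareness ratio falls
    below an (assumed) threshold, suggesting areas of ethical risk."""
    return [p for p, s in scores.items() if s is not None and s < threshold]
```

For example, a respondent who answers two autonomy questions (one risk-aware) and one justice question (risk-aware) would be flagged on autonomy only.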

The new methodology has been piloted and endorsed by a number of leading IT companies, such as MeDiCase and Globus IT, which specialise in the development of AI solutions for medicine. Additionally, it has been approved by the Commission for the Implementation of the AI Ethics Code and the Moscow City Scientific Society of General Practitioners.

'The development of this index is a significant step toward ensuring the ethical use of AI in medicine. We hope that the solution we have developed will be valuable to the medical community, which, as our research shows, is concerned about the potential negative ethical consequences of the widespread integration of AI into medical practice,' said Anastasia Ugleva, Project Head, Professor, and Deputy Director of the Centre for Transfer and Management of Socio-Economic Information at HSE University.

The index is expected to be sought after by ethics committees, the forensic medical expert community, and other organisations responsible for evaluating and certifying AI. It will also support the shift from the principle of sole responsibility of a medical professional to a model of shared responsibility among all participants in the process. The introduction of this index will help make AI usage safer and more aligned with high ethical standards.

The methodology guidelines 'The Index of AI Systems Ethics in Medicine' are registered at HSE University as intellectual property (IP), No. 8.0176-2023.

See also:

Analysing Genetic Information Can Help Prevent Complications after Myocardial Infarction

Researchers at HSE University have developed a machine learning (ML) model capable of predicting the risk of complications—major adverse cardiac events—in patients following a myocardial infarction. For the first time, the model incorporates genetic data, enabling a more accurate assessment of the risk of long-term complications. The study has been published in Frontiers in Medicine.

HSE Researchers Develop Novel Approach to Evaluating AI Applications in Education

Researchers at HSE University have proposed a novel approach to assessing AI's competency in educational settings. The approach is grounded in psychometric principles and has been empirically tested using the GPT-4 model. This marks the first step in evaluating the true readiness of generative models to serve as assistants for teachers or students. The results have been published on arXiv.

Smoking Habit Affects Response to False Feedback

A team of scientists at HSE University, in collaboration with the Institute of Higher Nervous Activity and Neurophysiology of the Russian Academy of Sciences, studied how people respond to deception when under stress and cognitive load. The study revealed that smoking habits interfere with performance on cognitive tasks involving memory and attention and impair a person’s ability to detect deception. The study findings have been published in Frontiers in Neuroscience.

‘Philosophy Is Thinking Outside the Box’

In October 2024, Louis Vervoort, Associate Professor at the School of Philosophy and Cultural Studies of the Faculty of Humanities, presented his report ‘Gettier's Problem and Quine's Epistemic Holism: A Unified Account’ at the Formal Philosophy seminar, which covered one of the basic problems of contemporary epistemology. What are the limitations of physics as a science? What are the dangers of AI? How does one survive the Russian cold? Louis Vervoort discussed these and many other questions in his interview with the HSE News Service.

Russian Physicists Determine Indices Enabling Prediction of Laser Behaviour

Russian scientists, including researchers at HSE University, examined the features of fibre laser generation and identified universal critical indices for calculating their characteristics and operating regimes. The study findings will help predict and optimise laser parameters for high-speed communication systems, spectroscopy, and other areas of optical technology. The paper has been published in Optics & Laser Technology.

Children with Autism Process Auditory Information Differently

A team of scientists, including researchers from the HSE Centre for Language and Brain, examined specific aspects of auditory perception in children with autism. The scientists observed atypical alpha rhythm activity both during sound perception and at rest. This suggests that these children experience abnormalities in the early stages of sound processing in the brain's auditory cortex. Over time, these abnormalities can result in language difficulties. The study findings have been published in Brain Structure and Function.

HSE Scientists Propose AI-Driven Solutions for Medical Applications

Artificial intelligence will not replace medical professionals but can serve as an excellent assistant to them. Healthcare requires advanced technologies capable of rapidly analysing and monitoring patients' conditions. HSE scientists have integrated AI in preoperative planning and postoperative outcome evaluation for spinal surgery and developed an automated intelligent system to assess the biomechanics of the arms and legs.

Smartphones Not Used for Digital Learning among Russian School Students

Despite the widespread use of smartphones, teachers have not fully integrated them into the teaching and learning process, including for developing students' digital skills. Irina Dvoretskaya, Research Fellow at the HSE Institute of Education, has examined the patterns of mobile device use for learning among students in grades 9 to 11.

Working while Studying Can Increase Salary and Chances of Success

Research shows that working while studying increases the likelihood of employment after graduation by 19% and boosts salary by 14%. One in two students has worked for at least a month while studying full time. The greatest benefits come from being employed during the final years of study, when students have the opportunity to begin working in their chosen field. These findings come from a team of authors at the HSE Faculty of Economic Sciences.

HSE University and Sber Researchers to Make AI More Empathetic

Researchers at the HSE AI Research Centre and Sber AI Lab have developed a special system that uses large language models to make artificial intelligence (AI) more emotional when communicating with a person. Multi-agent models, which are gaining popularity, will be engaged in the synthesis of AI emotions. An article on this research was published as part of the International Joint Conference on Artificial Intelligence (IJCAI) 2024.