StMU Research Scholars

Featuring Scholarly Research, Writing, and Media at St. Mary's University
November 7, 2024

Ethical Considerations in the Use of AI in Genomic Medicine

The integration of artificial intelligence (AI) into genomic medicine is one of the most significant developments in modern healthcare. Personalized medicine has become a major topic of interest, offering innovative ways to diagnose, manage, and treat patients. AI allows researchers and healthcare professionals to identify patterns and mutations that the human eye can overlook, which in turn makes treatments more precise and improves patient health outcomes. Yet as AI becomes more integrated into healthcare practice, it raises many ethical concerns. Core public health principles such as patient autonomy, privacy, fairness, and data security can come into tension with the use of AI. These concerns arise from the perspectives of multiple stakeholders, including patients, healthcare providers, technologists, and researchers, regardless of the benefits.

Understanding the ethical concerns of AI begins with understanding what it is and how it is used in genomic medicine. “In genomic medicine, it primarily refers to the use of machine learning to analyze genetic information for diagnosis and treatment.”1 AI is used to sift through vast amounts of genomic data and make predictions about disease susceptibility, the best treatment pathways, and patient outcomes. These capabilities are increasingly applied in personalized medicine, tailoring therapies to each individual based on their genetic composition. AI’s capacity to handle data at a rapid rate enables significant breakthroughs, especially in oncology, where “defining cancer types and subtypes based on the individual’s genetics”2 plays a critical role in treatment success.
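To make the machine-learning step described above concrete, the sketch below trains a classifier to predict disease susceptibility from a matrix of genetic variant indicators. It is a minimal, hypothetical illustration using synthetic data and scikit-learn; the cohort size, variant features, and thresholds are invented for the example and do not reflect any real clinical pipeline.

```python
# Illustrative sketch only: predicting disease susceptibility from synthetic
# genomic variant features. All data and names here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic cohort: 500 patients x 100 binary variant indicators (1 = variant present).
variant_matrix = rng.integers(0, 2, size=(500, 100))
# Synthetic label: susceptibility driven mostly by the first few variants, plus noise.
risk_score = variant_matrix[:, :5].sum(axis=1) + rng.normal(0, 1, 500)
susceptible = (risk_score > 3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    variant_matrix, susceptible, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Evaluate how well the model ranks held-out patients by susceptibility.
probabilities = model.predict_proba(X_test)[:, 1]
print("AUC on held-out patients:", round(roc_auc_score(y_test, probabilities), 3))
```

In a real setting, a model like this would only ever inform a clinician's judgment, which is precisely where the ethical questions below begin.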

However, as AI becomes more integrated into medicine, it raises various ethical questions: How can we ensure that patient autonomy is respected when choices are made by an AI algorithm rather than a human? How safe is the sensitive genetic data being analyzed, and is there any assurance of security at all? How can we prevent AI systems from reinforcing the prejudices that already exist in healthcare? As this constantly advancing technology is integrated into healthcare and medicine, it also affects the relationship between patients and healthcare professionals. These evolving challenges must be acknowledged to safeguard patient wellbeing.


Patient Autonomy and Informed Consent

One of the primary ethical concerns in the use of AI within genomic medicine is maintaining patients’ autonomy, their right to make informed decisions about their own healthcare.3 A key aspect of medical ethics is informed consent, which ensures that patients fully understand what is involved in their care, including the risks, benefits, and alternatives of a treatment, so that they can make well-informed decisions.4 With AI involved in decision making, a lack of transparency about how AI is used can leave patients without the education they need to give that consent.

A more specific example of this issue involves elderly populations, a large demographic within the healthcare system. Elderly patients may not be as “tech savvy” as middle-aged or younger adult patients, and the idea of a computer, or AI in this case, making decisions about their livelihood may be a hard concept to grasp. “If developed inappropriately, AI applications may contribute to the ‘digital divide’ of technological access, availability and efficacy between age groups.”5 However, with the right approach, AI has the potential to bridge these gaps and ensure that all patients, regardless of age, benefit from its advancements. When patients do not understand the treatments they need, a typical response is hesitance, because a foundation of trust has not been formed. “Poor design of an AI tool for older people may lead to the ‘dehumanization’ of care.”6 Physicians must balance transparency about the use of AI with properly educating patients on its benefits in order to limit the ethical dilemmas that different stakeholders may face.

This raises the question of whether the use of AI dehumanizes healthcare and the genomic medicine setting. Patients may perceive that if their care is handled by a machine instead of a human, they are not being given the importance they deserve. Physicians may also unknowingly contribute to this stigma of dehumanization by over-relying on algorithms or other AI support once they see the enhanced diagnostics it can offer. A physician can avoid adding to the stigma by actively combining their medical expertise with AI’s support in a way that respects the patient and avoids ethical dilemmas.

CRISPR-Cas9 gene-editing process | Wikimedia Commons

A technology called CRISPR-Cas9 has become a trending topic within the genomic medicine community. “CRISPR can create changes that correct genetic mutations that may be a cause for a large variety of conditions, from hereditary diseases to even cancers.”7 This technology brings with it many advanced treatments, but also concerns about the dehumanization of medicine, since these treatments rely heavily on AI analysis of genomic data. This raises the worry that the human element of patient care, an essential aspect of healthcare, may be lost. Additionally, “the ability to edit genes raises questions about the control on human evolution and the potential for ‘designer babies,’ which continues to widen the gap between the human aspects of medical care and the algorithm-driven decisions in genomic medicine.”8 One may even ask, are we as humans trying to play God?


Privacy and Data Security

Another ethical concern is the potential for data breaches. “Genomic data is vast and sensitive; it contains information not only about an individual’s health but also can contain information about familial relationships and disease predispositions.”9 Unauthorized access to this information can lead to discrimination in areas like insurance or employment. The issue becomes more complex when we ask: Who owns the data generated by AI analysis of a patient’s genome? Should anyone be allowed to own patients’ private data? Can this data be shared with third parties, and under what circumstances? What happens to the data after the patient has received treatment? How compliant are AI systems with HIPAA guidelines? The challenge with AI is its need for large, high-quality datasets to operate accurately. This raises questions about how data is collected, stored, and used, particularly in the healthcare system, and fuels worries about patient privacy and data security. AI algorithms can also be biased, which causes further concern.
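One small, illustrative piece of the data-security picture raised above is pseudonymization: replacing direct patient identifiers with a keyed hash before genomic records are stored or shared. The sketch below is a hypothetical example only; the secret key, record fields, and identifier format are invented, and a step like this on its own does not make a system HIPAA compliant.

```python
# Hypothetical sketch: pseudonymize a patient identifier before a genomic record
# is stored or shared. Not a complete privacy or HIPAA solution on its own.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-securely-managed-key"  # hypothetical; keep in a secrets vault

def pseudonymize(patient_id: str) -> str:
    """Return a stable pseudonym so records can be linked without exposing the real ID."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {
    "patient": pseudonymize("MRN-0012345"),  # hypothetical medical record number
    "variant": "BRCA1 c.68_69delAG",
    "consented_for_research": True,
}
print(record)
```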

The ethical challenges in managing vast amounts of sensitive genomic information in an era of advancing AI and genomic medicine. | AI Generated

The tendency of AI algorithms to be biased presents an additional ethical risk. What does it mean for an AI algorithm, a computer program, to be biased? Most AI algorithms are trained on a specific subset of data, and if those datasets are not diverse, the resulting models can unintentionally widen existing healthcare disparities. For example, AI systems are often trained largely on data from certain racial or socioeconomic groups, which can lead to less effective diagnoses or treatments for patients from underrepresented populations. This bias can amplify the inequalities that already exist in healthcare and result in poorer outcomes for marginalized populations. Bias is a form of prejudice, which is strictly prohibited in the medical field to prevent harm to any patient; a healthcare provider must set personal beliefs aside to ensure every patient receives equal treatment. Can AI be unbiased in its algorithmic decisions when it is trained on such limited data?
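One basic way to surface the kind of bias described above is to compare a model’s performance across demographic subgroups rather than reporting a single overall score. The sketch below is a hypothetical illustration on synthetic data; the group labels, feature construction, and choice of metric are assumptions made for the example.

```python
# Hypothetical bias check: compare sensitivity (recall) across two synthetic
# subgroups, one deliberately underrepresented in the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)

n = 2000
group = rng.choice(["group_a", "group_b"], size=n, p=[0.85, 0.15])  # imbalanced representation
features = rng.normal(size=(n, 10))
# Synthetic outcome whose relationship to the features differs slightly by group,
# mimicking a dataset that underrepresents one population.
signal = features[:, 0] + np.where(group == "group_b", 0.8 * features[:, 1], 0.0)
label = (signal + rng.normal(0, 1, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    features, label, group, test_size=0.3, random_state=1
)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
predictions = model.predict(X_te)

# Report recall separately for each subgroup; a large gap is a red flag.
for g in ["group_a", "group_b"]:
    mask = g_te == g
    print(g, "recall:", round(recall_score(y_te[mask], predictions[mask]), 3))
```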

A recent event involves 23andMe, a company known for analyzing customers’ genomic data to estimate their ancestry. The company’s stock is at an all-time low, its board of directors has resigned, and the company is considering putting itself up for sale. The Atlantic highlights the concern at the center of the situation: “This raises significant privacy and data security issues, particularly because the DNA data is not protected under HIPAA.”10

The question now is what will happen to the DNA of the millions of customers 23andMe had. Their genetic information could be sold to third parties, such as insurers or researchers, without adequate safeguards. This situation exemplifies a broader concern in genomic medicine about how AI and other technologies manage sensitive data, underscoring the need for stronger privacy protections in healthcare policy.


AI in Healthcare Access and Inequality

While AI’s involvement in a patient’s treatment brings many benefits, it could also unintentionally deepen the inequalities that already exist in healthcare. Wealthier institutions and regions are more likely to have access to advanced, and expensive, technology, while poorer institutions find it harder to afford it. To keep this gap in healthcare quality from widening between the two ends of the financial scale, integrating AI into healthcare may require a range of policy changes. Updating Medicaid and other public health programs to address the unequal distribution of these advanced technologies, or creating standards for AI that must be followed across all healthcare facilities, are possible ways to ensure equity and equality on both sides of the financial scale.

The U.S. Equal Employment Opportunity Commission (EEOC), which enforces laws against genetic information discrimination in employment | Wikimedia Commons

Another issue arises when insurance companies are involved. Since AI relies on vast amounts of patient data spread across different databases, ensuring the security of this information is crucial, and there are concerns over who should have access to these databases. Should a policy restrict insurance companies’ access? The worry is that insurers could exploit genetic findings to adjust patients’ rates. For example, if a patient’s genome sequencing indicated a predisposition to a certain disease, could an insurance company raise premiums because of it? There are policies in place for this: according to the U.S. Equal Employment Opportunity Commission, “Under Title II of GINA, it is illegal to discriminate against employees or applicants because of their genetic information,”11 a protection that took effect in 2009. The Patient Protection and Affordable Care Act (ACA) also protects patients; it “prohibits the use of pre-existing conditions – such as heart disease or a cancer diagnosis – to deny, increase premiums or impose waiting periods for health insurance companies.”12 But because technology is advancing so rapidly, policies tend to become outdated quickly.


Physician Concerns

The advancement of AI in medicine raises important ethical questions about patient autonomy, data privacy, and potential biases in AI-driven healthcare decisions | AI Generated

From the perspective of healthcare providers, AI offers significant benefits in efficiency and in decision making between physicians and patients. AI can help physicians make more informed decisions about treatment options for their patients because it can quickly analyze data and identify details that the physician may overlook.13 This is especially evident in settings like emergency medicine and oncology, where quick and accurate decisions are crucial to patient outcomes.

There are also concerns, though, that AI could lead to an over-reliance on automated systems. Physicians may begin to trust AI recommendations without fully understanding how the algorithms reached their conclusions, leading to potential errors in diagnosis. For example, a physician may use AI to identify drugs and calculate a personalized dosage for a patient; AI algorithms are prone to error, and a mistake could mean a miscalculated dose. If over-reliance leaves the physician unaware of how the AI came to its conclusion, the patient’s health could be harmed. Some physicians have expressed concerns about the transparency and reliability of AI systems, especially when they are used to make critical decisions about patient care. “At the end of the day the physician should make the final decision in a patient’s treatment and diagnosis, while AI should only play a role in aiding the physician to make decisions.”14 Trust in AI systems remains a significant barrier as they are adopted in many healthcare practices.
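One simple illustration of keeping the physician in the loop, as described above, is a guardrail that checks an AI-suggested dose against a reference range and flags anything unusual for human review instead of applying it automatically. The sketch below is hypothetical: the drug names, dose ranges, and review_ai_dose function are invented for the example and are not drawn from any real clinical system.

```python
# Hypothetical guardrail: an AI-suggested dose is checked against a reference
# range and always routed through physician review; nothing is applied automatically.
SAFE_DOSE_RANGES_MG = {
    "drug_x": (5.0, 50.0),   # hypothetical per-dose limits in milligrams
    "drug_y": (0.5, 2.0),
}

def review_ai_dose(drug: str, ai_suggested_dose_mg: float) -> str:
    low, high = SAFE_DOSE_RANGES_MG[drug]
    if low <= ai_suggested_dose_mg <= high:
        return f"{drug}: {ai_suggested_dose_mg} mg is within the reference range; physician sign-off still required."
    return f"{drug}: {ai_suggested_dose_mg} mg is OUTSIDE the reference range ({low}-{high} mg); escalate to physician."

print(review_ai_dose("drug_x", 30.0))
print(review_ai_dose("drug_y", 5.0))
```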


Technologists’ and AI Developers’ View

A perspective that is not often considered is that of the technologists and developers who create AI systems. “AI developers and technologists view AI as a tool designed to aid, rather than replace, human healthcare providers.”15 Technologists and developers argue that AI can make decision making more efficient by examining all of the data, even data that may be unintentionally overlooked by the human eye; however, the physician must always make the final choice. For example, the International Business Machines Corporation (IBM) is working on Explainable AI (XAI), an initiative that aims to make AI systems more transparent by showing how they arrive at their recommendations and decisions.16 Efforts like this work toward transparency, allowing more trust to be built among healthcare providers and patients. Developers are also addressing the issue of bias in AI systems. Larger and more varied datasets enhance a system’s ability to assist with healthcare decisions, so technologists work to expand and diversify the datasets used to train AI systems, reducing the risk of bias and making AI applications more reliable across different patient populations.
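To give a rough sense of what explainability can look like in practice, the sketch below shows one generic technique: ranking each input’s contribution to a single prediction so a clinician can see which factors drove a recommendation. This is a hypothetical example on synthetic data using a simple logistic regression; it is not IBM’s XAI tooling, and the feature names are invented.

```python
# Hypothetical explainability sketch: rank each feature's contribution
# (coefficient * value) to one patient's predicted risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

feature_names = ["variant_burden", "age", "tumor_marker", "family_history", "bmi"]
X = rng.normal(size=(400, len(feature_names)))
# Synthetic outcome driven mainly by the first and third features.
y = (1.5 * X[:, 0] + 1.0 * X[:, 2] + rng.normal(0, 1, 400) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# For a single patient, show each feature's contribution, ranked by absolute impact.
patient = X[0]
contributions = model.coef_[0] * patient
for name, value in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
    print(f"{name:15s} contribution: {value:+.2f}")
```

A readout like this does not remove the need for clinical judgment, but it gives providers and patients something concrete to question, which is the kind of transparency developers say they are working toward.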


Conclusion

In recent years, AI in healthcare has proven to be a powerful tool, offering benefits and discoveries that can advance genomic medicine and improve patient care. Its ability to analyze large genomic datasets and offer personalized treatment options is very promising, especially in areas like cancer treatment. It is important to continue discussing the ethical dilemmas that surround the use of AI in healthcare, ensuring that this rapidly changing technology is used responsibly. Principles such as patient autonomy, privacy, fairness, and access to care must be prioritized in any discussion about AI in genomic medicine so that no ethical boundaries are crossed. With careful oversight and collaboration among health professionals, technologists, and policymakers, AI can become a powerful force in medicine without compromising ethical standards.

1. Artificial intelligence and machine learning in precision and genomic medicine—PMC. (n.d.). Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9198206/
2. Cancer genome research and precision medicine—NCI. (n.d.). Retrieved from https://www.cancer.gov/ccg/research/cancer-genomics-overview
3. Patients’ perspectives related to ethical issues and risks in precision medicine: A systematic review—PMC. (n.d.). Retrieved October 3, 2024, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10310545/
4. Informed Consent—StatPearls—NCBI Bookshelf. (n.d.). Retrieved from https://www.ncbi.nlm.nih.gov/books/NBK430827/
5. New Horizons in artificial intelligence in the healthcare of older people—PMC. (n.d.). Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10733173/
6. New Horizons in artificial intelligence in the healthcare of older people—PMC. (n.d.). Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10733173/
7. Integration of Artificial Intelligence and CRISPR/Cas9 System for Vaccine Design—PMC. (n.d.). Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9703516/
8. “CRISPR babies”: What does this mean for science and Canada?—PMC. (n.d.). Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6342697/
9. Patients’ perspectives related to ethical issues and risks in precision medicine: A systematic review—PMC. (n.d.). Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10310545/
10. Remember That DNA You Gave 23andMe?—The Atlantic. (n.d.). Retrieved from https://www.theatlantic.com/health/archive/2024/09/23andme-dna-data-privacy-sale/680057/
11. Genetic Information Discrimination | U.S. Equal Employment Opportunity Commission. (n.d.). Retrieved from https://www.eeoc.gov/genetic-information-discrimination
12. Affordable Care Act (ACA) Pre-existing Conditions. (n.d.). Retrieved from https://www.facingourrisk.org/privacy-policy-legal/laws-protections/ACA/pre-existing-conditions
13. Artificial intelligence and machine learning in precision and genomic medicine—PMC. (n.d.). Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9198206/
14. Can Artificial Intelligence Replace the Unique Nursing Role?—PMC. (n.d.). Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10811613/
15. Can Artificial Intelligence Replace the Unique Nursing Role?—PMC. (n.d.). Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10811613/
16. What is Explainable AI (XAI)? | IBM. (n.d.). Retrieved from https://www.ibm.com/topics/explainable-ai

Van Nguyen

Hello, my name is Van. I am pursuing a B.S. in Bioinformatics with a minor in Computer Science and Biomedical Research at St. Mary's University. After completing my degree, I hope to matriculate at a medical school to become a doctor.



1 comment

  • Abdullah Shahul Hameed

    Really insightful read! I appreciate how you broke down the ethical concerns around AI in genomic medicine, especially on data privacy and consent. It’s a complex area, and you made the issues clear and relevant—important stuff for the future of medicine!
