First, Do No Harm: Medical Ethics’ Favour to the Modern Digital Age

As stories of Facebook’s hand in inciting genocide in Myanmar and attempted Russian manipulation of the 2016 US election have surfaced in recent years, it is clear that the digital world’s ethical guidelines are not up to snuff. Given the tumultuous relationship between big tech’s ethics councils (or the lack thereof) and aphorisms like “do no evil,”1 tech giants’ comparatively newly formed ethical mandates could use guidance from a long-standing ethical system: medicine. The medical maxim of “do no harm” dates back to Hippocrates, the father of Western medicine, and his appeals to physicians for ethical treatment still hold strong today.

However, medical ethics was once in a situation very similar to the one digital ethics faces today. Physicians were left to their own ethical decision-making to avoid harming participants, and it was not until the 1970s, when studies such as the Stanford Prison Experiment, the Aversion Project, and Project MKUltra came to light, that the field of medical ethics was fully developed to ensure individuals’ protection. The story of formal medical ethics, however, begins after World War II and the experiments that occurred during it, for instance the Nazi Artificial Insemination Experiments, High Altitude Experiments, and Dachau Hypothermia Experiments. These experiments shaped the foundation of the ethical codes and conduct practices we know today.

“History doesn't repeat itself, but it often rhymes” (often attributed to Mark Twain).

This cycle of self-exploration and self-regulation leading to large-scale scandals can be seen replaying in the tech industry through recent events in the news, including the “Blackout Challenge”2 and other harmful social media trends, shadow banning, and attempted Russian manipulation of the 2016 US election. Tech titans, namely Facebook and Google, actively behave as though we live in a libertarian, post-nationalist world, with little regard for the harm caused by their actions, and they are being called out as the world becomes more aware of their shallow attempts to create more ethical technology. A prime example is Google’s attempt to create an AI ethics board, which ended in dissolution one week after its formation.3 While partial credit should go to Google for forming such a group, an outside governing body of this kind requires much more thought about its governing code, the power to act as the hammer for that code, and the amount of time that must be devoted to the issue.

Following the medical-ethics outline to the next step, a wave of ethical codes is created once public outcry begins. Events such as World War II and the illegal human experimentation that occurred during it precipitated the creation of the Nuremberg Code’s ten principles for human experimentation and the Declaration of Geneva’s reinvention of the Hippocratic Oath. However, this was just the beginning: after the Tuskegee Syphilis Study came to light through Associated Press reporter Jean Heller’s 1972 article on the American medical system,4 the three fundamental ethical principles were formed through the Belmont Report:5 respect for persons, justice, and beneficence. These codes shaped what are now the in-depth practices and ethical standards used by medical professionals and social scientists.

Governmental guidelines for digital technology have slowly been shaped by expert groups such as the European Commission expert group that created the draft Ethics Guidelines for Trustworthy Artificial Intelligence6 and the Association for Computing Machinery, with its Code of Ethics and Professional Conduct.7 Nonetheless, such guidelines lack the literal and figurative teeth to combat the juggernauts of the tech space that seek to disregard ethical convention. One solution could be accreditation boards for digital tech creators, practitioners, and professionals, much like a College of Physicians & Surgeons, ensuring that those in the technology field can be held to account against a professional standard. We have also seen the creation of more stringent standards that could begin to regulate the behaviour of tech giants, such as the 2018 adoption of the European General Data Protection Regulation,8 which seeks to protect the privacy and security of those in the European Union by doling out harsh fines to those who fail to comply.

One thing is clear, however: no matter how we go about creating a firm and encompassing ethical code for digital technology, it is desperately needed as a check and balance to ensure that technology companies are still asking the critical questions of “should we do this?” and “is it best for humanity?” We don’t let children choose their dinner every day because it would be unhealthy; we don’t let teens use drugs or alcohol because it would damage their developing minds; so why are we allowing the tech giants to avoid moral responsibility for the technology that is harming us?


Megan (she/her) is an accomplished professional with a diverse background in hospitality, data management, criminal justice, and situational crime prevention. As Interim Lead for the Centre for Social Impact Technology, she utilizes systems thinking and multidisciplinary approaches to address complex problems. Megan’s expertise in public interest technology has been recognized, and she has published research on cult practices in social media and Indigenous parole conditions. With a BA (Hons) in Criminal Justice and numerous accolades, including the 2023 Centennial Gold Medal, Megan is actively involved in community initiatives, particularly focused on homelessness and vulnerable populations.


2. Sarah Felbin, “What Is TikTok's 'Blackout Challenge' And Why Is It Dangerous?” Women's Health, December 23, 2021.

3. Kelsey Piper, “Exclusive: Google cancels AI ethics board in response to outcry,” Vox, April 4, 2019.

4. “Tuskegee Study - Timeline,” The U.S. Public Health Service Syphilis Study at Tuskegee, Centers for Disease Control and Prevention, accessed May 12, 2022.

5. Office of the Secretary, “The Belmont Report,” The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, April 18, 1979.

6. European Commission and Directorate-General for Communications Networks, “Ethics guidelines for trustworthy AI,” Publications Office of the European Union, 2019.

7. “ACM Code of Ethics and Professional Conduct,” Association for Computing Machinery, accessed May 12, 2022.

8. “General Data Protection Regulation (GDPR),” Proton Technologies AG, accessed May 12, 2022.
