Three things digital ethics can learn from medical ethics

Ethical codes, ethics committees, and respect for autonomy have been key to the development of medical ethics, and they are elements that digital ethics would do well to emulate.

The need for better digital ethics has become more palpable and urgent with every new tech scandal in the news. Yet despite growing concern about ethics and efforts to develop more ethical practices in tech, those endeavours have been largely unsuccessful, with critics arguing that ethics is being used mostly as a selling point. How can we make sure tech companies take ethics seriously, rather than treating it as a marketing tool? What should digital ethics look like?

In this paper I argue that digital ethics has much to learn from medical ethics, a branch of practical ethics with a longer history of implementing ethical practices in the real world. Medical ethics developed in response to shocking scandals, such as the Tuskegee experiment, and to new technologies that created ethical challenges, such as medical ventilators. The tech landscape looks strikingly similar, with scandals such as Cambridge Analytica, and technological advances such as smartphones and machine learning confronting us with new ethical conundrums.

Three elements have been vital to the success of medical ethics, and digital ethics would do well to emulate them. First, it is urgent to develop international ethics codes analogous to the Belmont Report, the Declaration of Helsinki, and other guidelines that are upheld worldwide despite not being legally binding. Second, tech projects should be approved and supervised by ethics committees, and not just any kind of ethics committee: boards that have the public interest as their main concern and are competent, diverse, inclusive, and independent. Third, tech should respect the autonomy of individuals. Users have had enough of tech's impositions. Netizens should not be forced to accept tech companies' ever-changing terms of service; we should be offered options. Technology should be on our side, helping us fulfil our life objectives and desires, rather than being developed to exploit our data for the benefit of industry's bottom line.

One of the criticisms levelled against digital ethics is that it has no teeth. But ethics can and should have teeth. One ingredient that can help empower ethics is professional licensing for tech workers: computer scientists, engineers, data analysts, and so on. If doctors and architects need licenses to do their jobs, why shouldn't the architects of the digital be held to the same high standards?

Big tech has abused its power, and it has been negligent in protecting users’ rights. Digital ethics can help us right those wrongs, and prevent future violations of rights.


Read the paper:
