Coding to Kill

30 Nov 2017

What does ethics mean to a software engineer?

I think a simple definition of ethics in terms of software engineering can be summed up as “don’t be a dick”. There are certain ethical guidelines you could invent on the spot if someone asked you what ethics means, and they would act as universal guidelines no matter the context of the code of ethics. Some of these guidelines include “do not mislead someone, intentionally or unintentionally, by lying or by omission of information for personal gain” (deceitful acts), “public safety should be an absolute priority” (hold paramount the safety, health, and welfare of the public), and “take care of conflicts of interest” (disclose all known or potential conflicts of interest).

Simple definitions like these are quite intuitive. The decision you make can be as simple as answering easy questions: “Does my previous employment with company X create a conflict of interest with this job at company Y?” -> “Yes, I should inform my supervisors.” “Should I present my findings exactly as they are, despite the outcome falling short of expectations?” -> “Yeah, it would be wrong to alter information to save face.” Unfortunately, not every decision is this easy: systems that can affect the livelihood and well-being of the public must be considered with 500% attention and care.

The self-driving car

In this day and age, one of the largest and most concerning ethical considerations is that of the self-driving car. The more self-driving cars there are on the road, the more lives are under the direct control of the autonomous driving system. This does not only include the passenger of the vehicle; it includes the lives of nearby drivers and pedestrians. As a software engineer, who are you to say that the lives of pedestrians take precedence over the lives of the passengers, or vice versa, in an unavoidable accident? And if you don’t make that decision yourself, and instead implement the system as imposed by someone else, is it ethical for you to build it as specified?

Lose-lose situations like these have no trivial solution. Is the lesser of two (or more) “evils” good enough? What if there is no truly ethical solution?

Conclusion

In the case of self-driving cars there is no obvious answer. However, if I had to decide on a course of action right now, I would have to go with the least evil of all evils: whatever would cause the least overall harm is the action that should be taken in the case of an unavoidable accident.
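
To make that rule concrete, here is a minimal sketch in Python of what a “least overall harm” policy boils down to: score each available maneuver and pick the one with the lowest estimated harm. The maneuver names and harm scores are entirely made up for illustration; this is not how any real autonomous-driving stack models the problem.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Maneuver:
    name: str
    estimated_harm: float  # illustrative score: higher means more expected harm


def least_harm(options: List[Maneuver]) -> Maneuver:
    """Pick the maneuver with the lowest estimated overall harm."""
    return min(options, key=lambda m: m.estimated_harm)


# Hypothetical options in an unavoidable-accident scenario
options = [
    Maneuver("brake hard, stay in lane", estimated_harm=0.7),
    Maneuver("swerve toward the empty shoulder", estimated_harm=0.4),
    Maneuver("swerve into the oncoming lane", estimated_harm=0.9),
]

print(least_harm(options).name)  # -> swerve toward the empty shoulder
```

Of course, the sketch hides exactly the hard part: who gets to assign those harm scores, and whether “least overall harm” is even the right objective in the first place.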