Annotated list of background readings for Social Issues and Professional Practice in Computing and IT
Dr. Maria Keet
Department of Computer Science
University of Cape Town
This document lists several background sources to consult and analyse. You are not expected to read all of them, but, as a minimum, read:
- at least one entry from each of Sections 1 and 2,
- several entries from Section 3 (if you want to do well in the assignment),
- one or more from Section 4 (if you're lost on how this is really relevant to computing or are in need of motivation to do this part of CSC1016S), and
- the book chapter listed in Section 8 (broader tech issues, including engineering, and socio-economic aspects).
Most references are annotated, and for many of them questions are added to assist you in reflecting on the content; these questions are indicated with a different font type.
This is not meant to be a comprehensive, final-and-only list, and
you are encouraged to look up more sources. The ones listed in this reader are
a good starting point to get to grips with social
issues and professional practice for computing and IT. Many of the sources
listed contain further links to related material that you may wish to explore.
One note of caution: only English-language resources are listed, and these are rather biased in that they are heavily USA/UK-framed. Most of these articles presuppose certain premises that are not always made explicit while ignoring others, and some SIPP-relevant topics that are not an issue in the Anglo-Saxon/English-language sphere are an issue in countries where other languages are spoken (those documents were thus not included) or where certain societal issues are more (or less) prevalent.
Table of contents
1. (Computer) ethics, moral responsibility
2. Modelling, design
3. Big data
4. Some IT applications with issues
5. Privacy
6. Open Source Software, Free Software etc.
7. ICT for Development and ICT for Peace
8. Other
1. (Computer) ethics, moral responsibility
Gotterbarn, D. Computer Ethics: Responsibility Regained. National Forum: The Phi Beta Kappa Journal, 1991, 71:26–31.
- More generic and really about ethics in computing as a profession, rather than conflating it with general ethics w.r.t. crimes where computers are just tangentially involved (as in Moor's paper; see below). Recommended to read.
Moor, J.H. What is Computer Ethics? Metaphilosophy, 1985, 16(4):266-275.
- Widely cited as a defining paper in computer ethics, despite some pitfalls that are discussed (and solved) in Gotterbarn's paper.
Metz, T. Ubuntu as a moral theory and human rights in South Africa. African Human Rights Law Journal, 2011, 11:532-559.
- On a 'public' or 'group' morality. It's a bit long-ish and not a core part of computer ethics, yet included here because it provides ample arguments that the concept of ubuntu can be used in moral deliberations. Further below, in the 'Big Data' section, someone argues that the notion of individual morality and moral agency is too limited. Could this perhaps be used to develop a morality for Big Data?
Noorman, M. Computing and Moral Responsibility. The Stanford Encyclopedia of Philosophy (Summer 2014 Edition), Edward N. Zalta (ed.).
- Prioritise the introduction and sections 1 and 3, then optionally section 2. Section 1.1 describes in more detail the problem of many hands.
Bynum, T. Computer and Information Ethics. The Stanford Encyclopedia of Philosophy (Winter 2015 Edition), Edward N. Zalta (ed.).
- There are many good entries in the SEP; this is not one of them, and it can be safely skipped for this course.
Informal:
Slashdot: https://slashdot.org/story/99/09/02/2038236/review-code-of-ethics-for-programmers
Wikipedia: https://en.wikipedia.org/wiki/Computer_ethics
2. Modelling, design
Derman, E. Apologia Pro Vita Sua. The Journal of Derivatives, 2012, 20(1):35-37.
- Page 2 has five items for a "Modelers' Hippocratic Oath", in analogy to the one for medical doctors. Should computing have one, too? If so, do those five statements in the paper apply to software design as well, or should something be added, removed, or reworded? If there should not be such an oath for software analysts and developers, why not?
Jerven, M. Studying Africa by the numbers can be misleading. What can be done about it? The Conversation, 20 July 2016.
- Related to the next reference, but about shortcomings in general data collection, storage, and (computational) analysis in Africa. Is this an ethical problem for an IT specialist, as s/he is giving limited raw data a veneer of credibility once it has been processed in the computer? In the case of misinterpretation of said processed data, and consequent flawed policy, is the IT specialist to blame, or those who used the information to make decisions and policies, someone else, or no one?
Keet, C.M. Dirty wars, databases, and indices. Peace & Conflict Review, 2009, 4(1):75-78.
- One can easily build political bias into data collection, storage, and analysis, be this intentionally or unintentionally. This paper gives some examples, and criticises the "Dirty War Index" tool, which is built upon such databases and aims at informing which side of a conflict deserves more international assistance. That is, one can rig the software upfront toward favouring one's preferred ally. What is unethical about politically motivated modelling, if anything?
The data analysis stage in software development (like creating UML diagrams) is based on, or at least informed by, the customer wanting such an application. Is the customer king, and will you just do as the customer demands? Or maybe you would build the software according to your political inclination anyway and make a separate user interface for the customer with a different political agenda (the customer wouldn't know anyway), risking a charge of insubordination along the way? In the latter case, would it really be insubordination, as, after all, the software meets the requirements and you were not explicitly instructed not to model it your way?
Tufekci, Z. The real bias built in at Facebook. New York Times, 19 May 2016.
- The bias is in the algorithms: "But 'surfaced by an algorithm' is not a defense of neutrality, because algorithms aren't neutral. Algorithms are often presented as an extension of natural sciences like physics or biology. While these algorithms also use data, math and computation, they are a fountain of bias and slants — of a new kind." "Without laws of nature to anchor them, algorithms used in such subjective decision making can never be truly neutral, objective or scientific." "What we are shown is shaped by these algorithms, which are shaped by what the companies want from us, and there is nothing neutral about that." Do you agree with the author that algorithms used in decision-making (be it at Facebook, on Bing, at the census bureau for government policy, etc.) are never truly neutral, or are the claims in the article overstated and can algorithms be neutral if you want them to be? If so, how, and is there a way to guarantee it so as to gain a user's trust? If not, and if one wants to be honest, in what way, if at all, can the non-neutrality be made clear to its users? Can you find out whether it is legally, ethically, or morally acceptable to have non-neutral algorithms, even if commissioned by a non-aligned, 'independent' organisation?
3. Big data
Note: while this section is also split into scientific references and other sources, some of the other sources are actually gentle introductions to a scientific paper that is referenced at the end of that article.
Crișan, C., Zbuchea, A., Moraru, S. Big Data: The Beauty or the Beast. Strategica: Management, Finance, and Ethics, 2014, p. 829-849.
- "Our paper aims at looking at means and ways through which Big Data is being generated, to provide examples of Big Data ownership and consequences derived from this, and to illustrate the use of Big Data for improving the life of the society's members. We define the Big Data, how it is generated, processed and the degrees of responsibility in maneuvering such precious resource." and "we discuss potential implications from the perspective of redefining what personal and private still means when individual data becomes a commodity." Is it ethical to relegate individual data to a commodity? If not: why not? If so, is 'free' usage of a tool that uses your data, or the harder-to-identify 'benefit of society', enough of a payment-in-kind by that company, or should they also pay you for providing your individual data? If the latter, then how should the price be set (for instance: how much money should a company pay you for obtaining your home address)?
If you build a tool that requires a user's personal data, or someone else did and you're the analyst, would you have any moral dilemmas in using the data provided, be this for the purpose it was intended for or any possible purpose?
Sax, M. Finders keepers, losers weepers. Ethics and Information Technology, 2016, 18:25-31.
- "the business model of big data companies is essentially founded on a libertarian-inspired 'finders, keepers' ethic. The article argues, next, that this presupposed 'finder, keepers' ethic is far from unproblematic and relies itself on multiple unconvincing assumptions. This leads to the conclusion that the conduct of companies working with big data might lack ethical justification." Is libertarianism or capitalism to blame for Big Data's dark side, and is it therefore unethical or immoral? If so, what should be done about it?
Zwitter, A. Big data ethics. Big Data & Society, 2014, 1-6.
- Has a clear, short description of moral agency based on the individual, which, Zwitter asserts, does not really work in the Big Data era, where one doesn't know what the group effects of one's individual clicks are. Could ubuntu morality, which is geared toward communities, fill this gap?
Staff. Big data for development. African Seer, 25 April 2014.
- Fine layperson introduction to big data. Although the title mentions 'development', that is not covered in this article.
Richards, N.M., King, J. Gigabytes gone wild. Al Jazeera, 2 March 2014.
- This is the 'lite' version of their (longish) journal article on Big Data Ethics in the Wake Forest Law Review journal. Some quotes: "Big data allows us to know more, to predict and to influence others. This is its power, but it's also its danger." "The values we build or fail to build into our new digital structures will define us." "Big data has allowed the impossible to become possible, and it has outpaced our legal system's ability to control it." "It's outrageous that while big data has allegedly eliminated privacy, many of the ways it's used are themselves shrouded in secrecy. This has things entirely the wrong way around." (cf. companies being transparent and the users keeping their privacy). And the need for "in-house panels that ensure that scientific tools are deployed ethically and for the benefit of human beings."
O'Neil, C. The ethical Data Scientist. Slate.com, 4 February 2016.
- Some quotes: "People have too much trust in numbers to be intrinsically objective, even though it is in fact only as good as the human processes that collected it." "And since an algorithm cannot see the difference between patterns that are based on injustice and patterns that are based on [network/usage] traffic, choosing race as a characteristic in our model would have been unethical." "But what about using neighborhoods or ZIP codes? Given the level of segregation we still see in New York City neighborhoods, that's almost tantamount to using race after all. In fact most data we collect has some proxy power, and we are often unaware of it." "What typically happens, especially in a 'big data' situation, is that there's no careful curating of inputs. Instead, the whole kit and caboodle is thrown into an algorithm and it's trusted to come up with an accurate, albeit inexplicable, prediction." The paper has some useful points for discussion, but then it ends with: "A data scientist doesn't have to be an expert on the social impact of algorithms; instead, she should see herself as a facilitator of ethical conversations and a translator of the resulting ethical decisions into formal code. In other words, she wouldn't make all the ethical choices herself, but rather raise the questions with a larger and hopefully receptive group." Is that really the right approach to the matter, relegating responsibilities to some amorphous, vague 'larger and hopefully receptive group'? Is it right to absolve the data scientist from any responsibility for her (in)actions, never virtuous and never to blame for unethical behaviour, no matter how bad the consequences of some data crunching may be? Doesn't a data scientist have moral agency, so that s/he can be culpable or be exculpated?
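To make the 'proxy power' point concrete, here is a minimal, hypothetical Python sketch (not from the article; all data is synthetic and the feature names are invented): a scoring model that is never given the protected attribute can still reproduce a group disparity when it relies on a correlated feature such as a neighbourhood or district.

import random

random.seed(0)

# Synthetic population: 'group' is the protected attribute the model never sees;
# 'district' correlates with group; historical outcomes differ by district.
people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    district = "d1" if (group == "A") == (random.random() < 0.8) else "d2"
    outcome = random.random() < (0.7 if district == "d1" else 0.4)
    people.append((group, district, outcome))

# "Model": predict each person's score as the historical positive rate of their district.
rate = {d: sum(o for _, dd, o in people if dd == d) / sum(1 for _, dd, _ in people if dd == d)
        for d in ("d1", "d2")}

for g in ("A", "B"):
    scores = [rate[d] for grp, d, _ in people if grp == g]
    print(f"group {g}: mean predicted score = {sum(scores)/len(scores):.2f}")
# The gap persists even though 'group' was never an input: the district acts as a proxy for it.

Under these invented numbers the two groups end up with mean scores of roughly 0.64 versus 0.46, purely through the proxy, which is exactly the kind of unnoticed proxy power the article warns about.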
4. Some IT applications with issues
Note: this list is ordered alphabetically. I tried to limit the short articles to those published in the past year, while still including a range of different current issues.
Bejoy, R. Apartheid victims lose 14-year legal battle against Ford and IBM. GroundUp, 23 June 2016.
- "IBM, they say, provided database and information storage services that allowed the apartheid government to implement the race-based classification system." Do you think IBM is guilty, or else behaved unethically and therefore ought to have been convicted of a crime (regardless of what the law says)? Why? What would you do if you were working for a company that is actively involved in applying IT in this way in a country now (say, in Markovia, where databases store all information about citizens, so that the new dictator can easily find all Jews, gays, and communists and send them to forced labour camps)?
Boninger, F., Molnar, A. How companies use school educational software to sell to children. TimesLive, 18 August 2016.
- The dark side of eLearning in the classroom, with many links to sources to back up the authors' claims. Note: this relates to Big Data, the lack of privacy online, and the so-called 'policy vacuum'. Closer to home, CILT@UCT—in charge of the lecture recordings—did look at the incidence of viewing lecture recordings, but only aggregated by recording and course. We (the CS Department) don't do anything with the data stored by the automarker, other than the queries to calculate the points in the 1015 and 1016 challenges, and to get your highest mark to put in the gradebook. Would you object to us using that automarker data in other ways? For instance, to investigate how your final grade relates to your assignment submission history, or to find correlations between your assignment submissions and prac tests (see the hypothetical sketch after this item)? What if the automarker were not an in-house tool, but sourced from a for-profit company that would also get access to all this data, including the source code you submitted? If you're OK with the former but not the latter, what are your means to object to the latter, if any?
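As a purely hypothetical illustration of how little technical effort such a secondary use would take (the records below are invented; this does not describe any actual analysis of UCT data), correlating submission counts with final grades is only a few lines of Python:

from statistics import correlation  # Python 3.10+

# Invented records: (number of automarker submissions, final course grade in %)
records = [(3, 48), (7, 62), (12, 71), (5, 55), (15, 80), (9, 66), (20, 78), (4, 51)]

submissions = [s for s, _ in records]
grades = [g for _, g in records]

r = correlation(submissions, grades)  # Pearson's r
print(f"Correlation between submission count and final grade: r = {r:.2f}")
# The technical step is trivial once the data exists; the ethical question is
# whether the data may be used for this purpose at all, and with whose consent.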
Epstein, R. How Google could rig the 2016 election. Politico, 20 August 2015.
Gershgorn, D. Police used bomb disposal robot to kill a Dallas shooting suspect. Popular Science, 8 July 2016.
- A bomb disposal robot, introduced to save the lives of bomb disposal experts, is 'repurposed' and used to get close to someone violating the law and to kill that person. Is repurposing—where x was designed and approved for purpose y and is then used for a not explicitly approved purpose z—acceptable in principle? Is this particular repurposing ethically fine, i.e., where a machine introduced to save lives is deployed to kill people? How does that square with Asimov's three laws of robotics (see also the article further below)?
Kwet, M. The dangers of paperless classrooms. Mail & Guardian, 9 October 2015.
- "the software chosen for schools has deep economic, political and social implications"… as well as the pervasive surveillance and 24/7 tracking of learners and teachers. One can debate whether there are educational benefits to IT-assisted learning, but for the sake of argument, let's assume it is neutral. How, then, should it be implemented, if at all? For instance, is there a problem with 24/7 tracking of all learning activities, where each keystroke/tap/swipe is recorded for further analysis, or should such event recording be disabled so as to give the learners a protected learning environment? Is it acceptable to have cameras installed in the classroom for it, as happens now in the pilot schools in Gauteng [http://ewn.co.za/2015/07/21/MEC-to-implement-Gauteng-paperless-classroom-today]? Should the software on the tablets be Android, so that Google can get its hands on the learners' data, and, by extension, the US government; if not, then what, Microsoft's OS? Is it ethical to run such an experiment in Soweto (where it is being rolled out), whereas learners in model-C schools in Sandton or Claremont are not subjected to this type of pervasive surveillance?
Murthy, M. Facebook is misleading Indians with its full-page ads about free basics. The Wire, 26 December 2015.
- The article raises a range of issues with "free basics", which Indians stood up against; among others, the issue of net neutrality, "What Facebook wants is our less fortunate brothers and sisters should be able to poke each other and play Candy Crush, but not be able to look up a fact on Google, or learn something on Khan Academy or sell their produce on a commodity market or even search for a job on Naukri.", and the recurring topic [see also Big Data] of "If a product is free then the user is the item being sold". It does exist in several countries in Africa without much discussion, including Cell C's free basics in South Africa. More arguments are available elsewhere, e.g., focusing on retarding socio-economic development rather than a non-informed citizenry as the first-and-foremost problem. What is your opinion about this? Are the arguments raised just as valid for South Africa (see also the references to Africa halfway through the article)? Clearly, at the time of writing it is legal to do this in South Africa (that is: there is no law preventing it), but is it also ethical? If so, by which of the moral theories can it be ethical to do 'free basics'; if not, then why not, and how can we go about changing the current status quo of laissez faire on this issue?
Pileggi, T. US terror victim seeks $1 billion from Facebook for Hamas posts. The Times of Israel, 11 July 2016.
- Their argument: "'Facebook has knowingly provided material support and resources to Hamas in the form of Facebook's online social media network platform and communication services,' a press release issued by the plaintiffs said. 'Hamas has used and relied on Facebook's online social network platform and communications services as among its most important tools to facilitate and carry out its terrorist activity.'" Was this unacceptable practice by Facebook? Is it the [legal or moral] responsibility of the owner of the social network software to police what is, and is not, allowed to be communicated through its software? If not, who is responsible, if anyone? And if you deem Facebook complicit and culpable, could then not also, say, Egypt sue Facebook, for it was used to organise demonstrations during the Arab Spring? Isn't their claim analogous to suing a telephone company for providing the services, had they communicated over that network, and thus could telephone companies be sued for such matters as well? Telephone companies are not held responsible for what their customers say during telephone conversations, so can one draw an analogy and conclude that Facebook is not to blame, or is it different because software is involved?
Anonymous. Do we need Asimov's laws? MIT Technology Review, 16 May 2014.
- A short overview of the arXiv paper with the same name (at http://arxiv.org/abs/1405.0961). The authors' answer is "no", but there are issues to be resolved on the legality and morality of the growing number of robots designed to kill humans (e.g., drones). Do you agree with the authors that the fear of robots eradicating humankind in the future is irrational, and hence that these laws—don't harm humans, obey orders, and protect yourself—do not have to be worked into the law of a country, the AU, or the UN? Or, perhaps, should Asimov's laws be made law in South Africa and around the world, so that no robot will ever kill a human again? Would the three laws be enough, or is a 'zeroth law' on humanity an essential addition? Are they 'good' laws, or do they have some ethical problems (see also, e.g., [http://www.cs.bham.ac.uk/research/projects/cogaff/misc/asimov-three-laws.html])?
5. Privacy
This section lists only a few links that focus specifically on privacy on its own. Some of the previous topics intersect with specific privacy issues in a particular context, notably Big Data.
DeCew, J. Privacy. The Stanford Encyclopedia of Philosophy (Spring 2015 Edition), Edward N. Zalta (ed.).
- If you manage to read only part of it, then read the introduction, section 1.1, and section 4. Although there is no clear agreement on the definition, privacy has to do with control over information about oneself. By that account, all current social media sites and apps, and all websites not asking you to accept cookies, are violating your privacy, no? Is privacy a moral right? Is it a legal right in South Africa?
Isaacs, R., Deosaran, N., Crawford, K. Data protection in South Africa: overview. Practical Law.
- This is a 4-page digest of South Africa's data protection law that was accepted in November 2013. Recall pop quiz question 5 on health information of the Groote Schuur Hospital and its use in research: according to this law, what should the answer be? Do you know of any current violations? Is this law too lenient or too draconian, or does it strike the right balance, and why? Is it practically feasible to implement?
Government Gazette of the RSA: Act no. 4 of 2013: Protection of Personal Information Act 2013.
- This is the published Act in full.
The right to be forgotten. https://en.wikipedia.org/wiki/Right_to_be_forgotten
- This is a fairly new concept in the context of digital information. While originating in EU legislation, it has been adopted in South Africa as well, in 2013. It means that in certain cases some web pages should be removed, at least from search results: for instance, pages about allegations that were not substantiated, or, as in the Costeja case against Google, an article about bad debt that was subsequently paid, so Mr. Costeja wanted the article about his debts removed (Costeja won). Another example is where one has served a jail sentence: one has paid the price and should be allowed to move on, rather than being haunted and defined by one's past that is stored on the Web. The USA is not in favour of the right to be forgotten. What do you think about this right, and why? If you think it is a good right, then can you specify (or else find in the SA legislation) in which cases it should (or does) apply? Related to this, but for less damaging cases: would you make use of a 'digital footprint clean-up' service, or don't you mind that there may be 'silly' online comments you made 5 years ago? Should any of your foolishness be stored for eternity, for all and sundry to see on the Web?
Spyware, or its euphemism 'non-invasive computing'. URL: https://en.wikipedia.org/wiki/Spyware
- The most common ones are system monitors, Trojans, adware (advertising-supported software), and cookies. These can be used both for monitoring machine and internet usage and for collecting other personal information, such as login details.
More To Be Added
6. Open Source Software, Free Software etc.
RT Spotlight interview with Richard Stallman, the free software 'evangelist': https://www.youtube.com/watch?v=uFMMXRoSxnA
Free Software Foundation: http://www.fsf.org/
GNU General Public License (GPL) and information, including the ideas behind GNU: https://www.gnu.org/philosophy/philosophy.html
- "Specifically, free software means users have the four essential freedoms: (0) to run the program, (1) to study and change the program in source code form, (2) to redistribute exact copies, and (3) to distribute modified versions."
Free Software, definition: https://www.gnu.org/philosophy/free-sw.html
See also the list of references in the SIPP05-Property slides.
7. ICT for Development and ICT for Peace
Anon. Kentaro Toyama: ten myths about technology and development. FSI News, 25 February 2010.
- This is a useful short list of common wrong assumptions about the use of ICT to solve actual or perceived problems in society, and for development in particular; e.g., "if you build it, they will come" (they won't necessarily) and "poor people have no alternatives" (there very well may be non-tech routes to achieve the same, and for free).
Stauffacher, D., Weekes, B., Gasser, U., Maclay, C., Best, M. (Eds.). Peacebuilding in the Information Age – sifting hype from reality. ICT 4 Peace Foundation, January 2011.
- The report has 2-4 page experience reports, pointing out some challenges in disaster management and in peacekeeping and peacebuilding operations. If you don't read them all, then prioritise as follows: read the sections on "ICT support to peace keeping" (pp20-22) and on "Intelligence of the masses or stupidity of the herd" (pp23-25); the "cross-fertilisation of UN common operational datasets and crisis mapping" (pp26-30) relates to the 'dirty war index' paper listed above. A major issue used to be getting information at all, but now more of an issue is trying to find the right information. If an app were built to summarise notifications, and it has bugs (it doesn't summarise well, but, say, gets it right only about 50% of the time), is it then ethical to use that app nevertheless? Can any moral responsibility be assigned for those cases where it went wrong? If so, would the app builder be responsible, or only the one who wrote the summarisation algorithm, or the decision makers who decided to rely on the app nonetheless?
More To Be Added
8. Other
Harvey, D. Technology, work and human disposability. In: Seventeen contradictions and the end of capitalism. London: Profile Books. pp111-121. (file: david harvey 17 contradictions)
- Highly recommended to read. The first 10 pages describe how technology, and IT in particular, contributes to capital, and the other 10 pages are about its contradictions. In short: if all the automation in industry continues, it actually will destroy capitalism, for then it cannot generate the surplus it gets from underpaying people (one cannot 'underpay' a robot, as it doesn't have a salary). It does not offer a solution. The topics covered in the chapter also link to one of the quiz questions: what to do with the people who will lose their jobs when more and more tasks are automated? Is it ethical to make people redundant due to the software you developed, and are you morally obliged to find alternative gainful employment for the people affected?
Richardson, K. Sex robot matters – Slavery, the prostituted, and the rights of machines. IEEE Technology & Society Magazine, June 2016.
- The article discusses whether anthropomorphic robots should have rights. "Extending rights to machines has the potential to reduce the idea of what it means to be human, and to begin to redefine the human as an object, as Aristotle thought of his slaves." "Only when confronted with another human can we experience our humanity, our identity, and our mutuality as enshrined by the U.N. Declaration of Human Rights". The article advocates that robots should not have rights. What are the author's arguments? Do you agree, or are the counter-arguments more convincing?
Electronic Frontier Foundation—'defending your rights in the digital world'.
- There are many articles and press releases on the site, ranging from student privacy, to coders' rights, to patents, to https (cf. plain http), and more.
Toyama, K. Bursting the 9 Myths of Computing Technology in Education. ICTworks, 28 January 2011.
- ICT in education can work out well, but in many cases it doesn't. This list of "pro-technology rhetoric" with explanations highlights flawed arguments in the debate about the usefulness of ICT in education. You may want to apply critical reasoning to the "reality:" paragraphs in the article: are they indeed valid arguments?
IITPSA. Codes of behaviour.
- This page lists both the Code of Conduct and the Code of Practice.
The Moral Machine crowdsourcing app from MIT.
- This relates back to the pop quiz question on what the decision module in the driverless car should be programmed to do, but then for many scenarios, like choosing to kill either the cats or the dogs, either the doctor+dog or the mother+baby, and so on. (A naive, hypothetical sketch of such a decision rule follows below.)
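For illustration only, here is a deliberately naive, hypothetical Python sketch of such a 'decision module' (none of this comes from the Moral Machine project itself): it picks the manoeuvre with the lowest weighted casualty count. Every number in the weights table is an invented moral judgement, not a fact.

# Hypothetical, simplified "decision module": choose the manoeuvre that minimises
# a weighted casualty count. The weights encode a moral stance; they are invented here.
CASUALTY_WEIGHTS = {"child": 1.2, "adult": 1.0, "elderly": 0.9, "dog": 0.1, "cat": 0.1}

def harm(casualties: list[str]) -> float:
    """Weighted 'cost' of the casualties a given manoeuvre would cause."""
    return sum(CASUALTY_WEIGHTS[being] for being in casualties)

def decide(options: dict[str, list[str]]) -> str:
    """Return the manoeuvre with the lowest weighted harm."""
    return min(options, key=lambda name: harm(options[name]))

scenario = {
    "stay in lane": ["adult", "adult"],
    "swerve left": ["child"],
    "swerve right": ["dog", "cat"],
}
print(decide(scenario))  # -> "swerve right" under these particular weights

Changing a single weight changes whom the car 'chooses' to kill, which is why the question of who sets those weights, and on what grounds, is an ethical rather than a purely technical one.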