MSc-IT Study Material
June 2010 Edition

Computer Science Department, University of Cape Town

Are Computer Ethical Issues Unique?

Many arguments have been put forward to answer the question of whether or not computer ethical issues are unique. The answer implies different ways in which these issues can be dealt with. If they are not unique, an effective solution can be derived or adapted from existing guidelines. If they are unique, then a completely new way of dealing with them may have to be devised. Of course, there are also suggestions that the answer to this question is not so clear-cut.

While there are many answers to the question, it is clear that when an ethical issue arises, part of it may be analogous to an existing framework, while part of it may be entirely new. It is the role of policymakers to consider this question thoroughly before deciding on a solution. If the issue in question has an appropriate analogue, that analogue can be employed as a starting point.

What Makes Computer Ethics Different?

Moor (1985) claims that computer ethics is unlike any other field; he describes it as a new and unique area of ethics. His arguments are based on the logical malleability of computers, the computer's impact on society and the invisibility factor.

The logical malleability of computers

Moor (1985) argues that what is revolutionary about computers is logical malleability. Computers are viewed as being logically malleable in that they can be shaped and molded to do any activity that can be characterised in terms of inputs, outputs and connecting logical operations. The logic of computers can be shaped in infinite ways through changes in hardware and software.

'Just as the power of a steam engine was the raw resource of the Industrial Revolution, so the logic of a computer is a raw resource of the Information Revolution. Because the logic applies everywhere, the potential applications of computer technology appear limitless. The computer is the nearest thing we have to a universal tool. Indeed, the limits of computers are largely the limits of our own creativity.'

Moor defines the driving question of the Information Revolution as 'How can we mould the logic of computers to better serve our purposes?'

The computer's impact on society

As computer technology encompasses more and more of our society, Moor sees more and more of the transforming effect of computers on our basic institutions and practices. Although nobody can know for sure how our computerised society will look fifty years from now, Moor argues that it is reasonable to think that various aspects of our daily work will be transformed.

'Computers have been used for years by businesses to expedite routine work, such as calculating payrolls. However, as personal computers become widespread and allow executives to work at home, and as robots do more and more factory work, the emerging question will not be merely "How well do computers help us work?" but "What is the nature of this work?"'

The invisibility factor

An important fact about computers is that most of the time and under most conditions computer operations are invisible. Moor (1985) mentions three kinds of invisibility that can have ethical significance. The first variety of the invisibility factor is invisible abuse. James Moor defines invisible abuse as "the intentional use of the invisible operations of a computer to engage in unethical conduct." Moor suggests that a classic example of this is the case of a programmer who realised that he could steal excess interest from a bank.

'When interest on a bank account is calculated, there is often a fraction of a cent left over after rounding off. This programmer instructed a computer to deposit these fractions of a cent to his own account.'

Although Moor views this as an ordinary case of stealing, he sees it as pertaining to computer ethics because computer technology has provided the opportunity for such activities to go unnoticed more often than not. Another possibility for invisible abuse is invasion of the property and privacy of others. For example, Moor identifies how a computer can be programmed to contact another computer over phone lines and 'surreptitiously remove or alter confidential information'.
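Moor's rounding example can be sketched as a short program. This is a hypothetical illustration only: the account balances, interest rate and round-down policy are all invented for the sake of showing how fractions of a cent accumulate.

```python
from decimal import Decimal, ROUND_DOWN

def pay_interest(balances, rate):
    """Credit interest to each account, rounding down to whole cents.

    Returns the credited amounts and the total rounding residue --
    the 'fractions of a cent' that the programmer in Moor's example
    silently diverted to his own account.
    """
    credited = {}
    residue = Decimal("0")
    for account, balance in balances.items():
        exact = balance * rate  # exact interest owed
        rounded = exact.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
        credited[account] = rounded
        residue += exact - rounded  # leftover fraction of a cent
    return credited, residue

# Invented sample data: three accounts at a 3.75% rate.
balances = {"A": Decimal("1234.56"),
            "B": Decimal("789.01"),
            "C": Decimal("5000.99")}
credited, residue = pay_interest(balances, Decimal("0.0375"))
```

No individual customer loses more than a cent, so the abuse is invisible at the level of any single account; across millions of accounts, however, the residue becomes substantial.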

Another example of invisible abuse is the use of computers for surveillance. Classic examples of such use are computerised employee monitoring and Closed Circuit Television (CCTV) technologies.

The second variety of the invisibility factor is the presence of invisible programming values, those values that are embedded into a computer program. Moor draws an analogy between writing a computer program and building a house.

'No matter how detailed the specifications may be, a builder must make numerous decisions about matters not specified in order to construct the house. Different houses are compatible with a given set of specifications. Similarly, a request for a computer program is made at a level of abstraction usually far removed from the details of the actual programming language. In order to implement a program that satisfies the specification, a programmer makes some value judgements about what is important and what is not. These values become embedded in the final product and may be invisible to someone who runs the program.'

The third and final variety of the invisibility factor is the invisible complex calculation. Moor argues that "Computers today are capable of enormous calculations beyond human comprehension. Even if a program is understood, it does not follow that the respective calculations are understood. Computers today perform, and certainly supercomputers in the future will perform, calculations which are too complex for human inspection and understanding."

Moor argues that the issue is how much we should trust a computer's invisible calculation. This becomes a significant issue as the consequences grow in importance. He illustrates this with an example:

'Computers are used by the military in making decisions about launching nuclear weapons. On the one hand, computers are fallible and there may not be time to confirm their assessment of the situation. On the other hand, making decisions about launching nuclear weapons without using computers may be even more fallible and more dangerous. What should be our policy about trusting invisible calculations?'

Similarities of Computer Ethics to Other Ethics

The computer ethicist Gotterbarn (in Johnson, 1995) argues that the issues invoked by computers are not new or unique. Historically, many devices have had a significant impact on society. Gotterbarn cites the example of the printing press, arguing that 'we did not develop a new or unique ethics called printing press ethics'.

He further argues that the flexibility of the computer is due to the 'underlying strengths of the logical and mathematical capabilities implemented in the computer. The underlying flexibility of math and logic is greater than that of the computer, but we did not develop logic ethics and mathematics ethics.'

The newness claim, argues Gotterbarn, "leads people to think that computer ethics has not yet found its primary ethical standard; so the discussion of computer ethics is not yet directed by any 'guiding principle' from which we can reason. This is different from our understanding of the older, more established professions. Medicine, for example, is viewed as having a primary ethical principle - prevent death - which physicians can use to guide their reasoning."

Journalism is viewed as having a primary ethical principle - report the truth - which journalists can use to guide their reasoning. The inference from the newness claim is 'that we cannot make ethical decisions in computer ethics because we have not yet found a primary ethical principle.'

Gotterbarn argues that the uniqueness claim is even more dangerous: 'It leads one to think that not only are the ethical standards undiscovered, but the model of ethical reasoning itself is yet to be discovered; that is, even if we find a primary principle, we will not know how to reason from it.'

Gotterbarn concludes that 'we have mistakenly understood computer ethics as different from other professional ethics. When we look at medical ethics, legal ethics, journalistic ethics, we have to distinguish the practitioners of those ethics from the ethical principles they affirm. The three professionals work in different contexts: medicine, law and journalism. However, when we talk of their professional ethics we do not consider them three different kinds. The distinguishing characteristic among professional ethics is the context in which they are applied. Because there are three contexts, it does not follow that there are three distinct sets of ethical rules or three different kinds of moral reasoning. Nor does it follow that computer ethics is another unique set of ethical principles which is yet to be discovered.'

Spinello (1995) is another computer ethicist who argues that the issues invoked by computers are not new or unique. He states that it would be a mistake 'to consider the ethics of computer technology as unique, separate from general business and social ethics.' His premise is that these 'revolutionary problems can be confronted with the same analytical tools and ethical categories used for more traditional concerns. It will be illuminating to regard these new dilemmas from the perspective of rights or duties or maximisation of consequences.' He argues that our 'ethical tradition' is rich enough to provide ample background for the thoughtful and comprehensive treatment of these new problem areas. However, it may be necessary to 'revise our definition of certain rights such as privacy in light of the new realities created by the phenomenon of digital disclosure. Although we need to reinterpret what the right to privacy means on the frontiers of cyberspace, it is important to underline that the notion of a right to privacy, a right to control of information about oneself, has not lost its intelligibility.'