Computing Ethics: Understanding Moral Issues in Technology

An overview of ethical theories, specifically those relevant to computing, including deontological theories, utilitarianism, natural rights, situational ethics, and the relationship between laws and ethics. It also discusses the licensing of software engineers and the concept of ubiquitous computing, raising questions about accountability for actions taken by computers or computer systems.


Computing Ethics

November 29, 2006

Ethics

  • Ethics is the study of what it means to “do the right thing.” It is often equated with moral philosophy because it concerns how one arrives at specific moral choices.
  • Ethical theory posits that people are rational, independent moral agents, and that they make free choices.
  • Computer ethics is a branch of ethics that specifically deals with moral issues in computing.

Normative vs. Descriptive Ethics

  • Normative ethics: tells us what we ought to do; it prescribes practical moral standards.
  • Descriptive ethics: focuses on what people actually believe to be right or wrong, the moral values (or ideals) they hold, how they behave, and what ethical rules guide their moral reasoning.

Deontological Theories

  • Deon means “duty” or “obligation” in Greek.
  • Ethical decisions should be made solely by considering one’s duties and the absolute rights of others.
  • The principal philosopher in this tradition is Immanuel Kant (1724-1804).
  • A key concept is the “categorical imperative” -- an absolute, unconditional requirement that guides human action in all circumstances: Act only according to that maxim by which you can at the same time will that it should become a universal law.

Deontological Views: Key Principles

  • The principle of universality: rules of behavior should be applied to everyone, with no exceptions.
  • Logic or reason determines rules of ethical behavior.
  • Treat people as ends in themselves, never merely as means to ends.
  • Absolutism of ethical rules.
    • E.g., it is wrong to lie (no matter what!)

Utilitarianism
  • A founding figure is John Stuart Mill (1806-1873).
  • An ethical act is one that maximizes the good for the greatest number of people.
  • The guiding principle is to increase happiness or “utility” (i.e., what satisfies one’s needs and values).
  • Consequences are quantifiable, and are the main basis of moral decisions.
  • An act is “right” if it tends to increase the aggregate utility of all affected people.
  • How can we determine or measure possible consequences before an act is committed?

Two Types of Utilitarianism

  • Rule-utilitarianism: applies the utility principle to general ethical rules rather than to individual acts.
    • The rule that would yield the most happiness for the greatest number of people should be followed.
  • Act-utilitarianism: applies utilitarianism to individual acts. We must consider the possible consequences of all our possible actions, and then select the one that maximizes happiness for all people involved (a toy calculation is sketched below).
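The act-utilitarian criterion can be read as a simple maximization: choose the act whose total utility, summed over everyone affected, is largest. The sketch below only illustrates that calculation; it is not from the lecture. The acts, people, and utility numbers are invented, and in practice consequences are far harder to predict and quantify.

```python
# Hypothetical sketch of the act-utilitarian calculation: the "right" act is
# the one with the greatest aggregate utility over all affected people.
# The acts, people, and utility scores below are invented for illustration.
from typing import Dict

# utilities[act][person] = estimated change in that person's happiness/utility
utilities: Dict[str, Dict[str, int]] = {
    "keep promise":  {"alice": +5, "bob": +1, "carol":  0},
    "break promise": {"alice": -8, "bob": +6, "carol": +1},
}

def aggregate_utility(act: str) -> int:
    """Sum an act's utility across every affected person."""
    return sum(utilities[act].values())

# Act-utilitarianism: evaluate each individual act and choose the best one.
best_act = max(utilities, key=aggregate_utility)
print(best_act, aggregate_utility(best_act))  # -> keep promise 6
```

Rule-utilitarianism would run the same kind of calculation over candidate general rules (e.g., “always keep promises”) rather than over individual acts, and the open question from the slide remains: the utility estimates have to be made before the act is committed.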
Natural Rights

  • Natural rights are universal rights derived from the law of nature (e.g., inherent rights that people are born with).
  • Ethical behavior must respect a set of fundamental rights of others. These include the rights to life, liberty, and property.
  • One of the founding fathers of this tradition is John Locke.

Licensing vs. Certifying

  • There is a difference between certifying and licensing.
  • Licensing is typically practiced by the government, and is a legal precondition to entering a field. Licensing is mandatory.
  • Certification is done by the profession itself, demonstrating that you have mastered a certain level of skill in a particular field. This is a voluntary process.

Ubiquitous Computing (1)

  • Invisibility, integratedness, and embeddedness in a variety of real-life situations
  • High degree of connectivity
  • Cheap and miniaturized
  • Applied to everything

Ubiquitous Computing (2)
  • Interplanetary networks?
  • Optical computing?
  • DNA computing?
  • Quantum transistors?
  • Wearable computing
  • As a result, the impact of computing on society is becoming particularly important.
  • A related question is: Who should be (legally and/or morally) accountable for an action taken by a computer or a computer system?

Autonomous Moral Agents
  • The article by Stahl (2004).
  • Some computer systems are autonomous in the sense that they reach independent decisions under specific circumstances.
  • Computers play a role in social interaction, which often displays a moral dimension. But are they autonomous moral agents?
  • Stahl said no, for the following reasons:
    • Computers process data (processed facts)
    • Human beings process information (data to which meaning is attached)

The Moral Turing Test

  • Does a computer have intelligence (or consciousness)? Does it have a mind of its own?
  • Turing Test: Let a human judge interact with two parties in natural language, one being a human and the other being a machine. If the judge cannot reliably tell which is which, then the machine is said to pass the test.
  • Moral Turing Test: Let human interrogators engage in conversations with a computer system, and ask the system to make moral decisions. If the system passes the test, then we can assign the status of an autonomous moral agent to the system (a toy sketch of the pass criterion follows below).
  • The problem is that most moral decisions are not clear-cut yes/no choices, so whose criteria should we adopt?
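To make the pass criterion concrete, here is a toy simulation of the set-up described above. Nothing in it comes from Turing or from Stahl (2004): the questions, the canned answers, the judge’s strategy, and the “near chance” threshold are all invented assumptions.

```python
# Toy sketch of the (Moral) Turing Test pass criterion. The questions,
# respondent behaviour, and the pass threshold are invented assumptions
# for illustration only.
import random

moral_questions = [
    "Is it acceptable to lie to protect a friend?",
    "Should you report a colleague who falsifies data?",
]

def human_answer(question: str) -> str:
    return "It depends on the circumstances and on who would be harmed."

def machine_answer(question: str) -> str:
    # A machine whose moral answers are indistinguishable from the human's.
    return "It depends on the circumstances and on who would be harmed."

def judge_guess(answer_a: str, answer_b: str) -> str:
    """The judge tries to name the machine ('A' or 'B'); with identical
    answers the best available strategy is a coin flip."""
    return random.choice(["A", "B"])

def judge_accuracy(trials: int = 1000) -> float:
    """Fraction of rounds in which the judge correctly identifies the machine."""
    correct = 0
    for _ in range(trials):
        question = random.choice(moral_questions)
        machine_slot = random.choice(["A", "B"])  # hide the machine in slot A or B
        answers = {
            machine_slot: machine_answer(question),
            ("B" if machine_slot == "A" else "A"): human_answer(question),
        }
        correct += judge_guess(answers["A"], answers["B"]) == machine_slot
    return correct / trials

acc = judge_accuracy()
# The machine "passes" if the judge cannot reliably tell which is which,
# i.e. accuracy stays near chance (0.5) instead of well above it.
print(f"judge accuracy: {acc:.2f} ->", "passes" if acc < 0.6 else "fails")
```

Even in this toy form, the slide’s closing question stands: the test only shows whether the system’s answers are indistinguishable from some human’s, not whose moral criteria those answers should be judged against.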