Ethical approaches and autonomous systems.

AI and Ethics.

Autonomous machines are already a significant feature of modern society and are becoming ever more pervasive. Although they bring undoubted benefits, those benefits are accompanied by concerns.

It is important that such machines function as a benign part of our society, and this means that they should behave in an ethical manner. Conventional machines are required to meet safety standards, but autonomous machines additionally need to come with some assurance that their decisions are not harmful.

In this article we will briefly describe the main current ethical theories. We will consider three types of ethical theory: consequentialist approaches, deontological approaches, and virtue ethics approaches. The intention is certainly not to argue that one approach is better than another as an account of human ethics, but rather to explore the effects of adopting the various approaches when implementing ethical agents.

Defining an ethical agent

First, let us define what an ethical agent is. Our notion of an ethical agent is an agent that behaves in a manner which would be considered ethical in a human being. This may be considered to be in the spirit of weak AI, and adapts Minsky's definition of AI, which relates to activity produced by a machine that would have been considered intelligent if produced by a human being [1]. The agents in our discussion will not be expected to model or perform ethical reasoning for themselves, but merely to behave ethically. The approaches to ethics we discuss will belong to the system designers, not the agents, and we will be exploring the consequences of the designers adopting a particular approach to ethics on the agents they implement. This may result in the implemented agents representing a rather thin version of the approach they embody.

By adopting this relatively weak notion of what it means for an agent to be ethical, we are able to exclude from consideration anything dependent on mental states, or motivation, and questions such as whether the ethics developed by an agent, whose needs and form of life greatly differ from those of humans, would be acceptable to humans. We believe that using this notion is not a problem: on the contrary, ethical agents are needed now, and while weakly ethical agents are currently feasible, strongly ethical agents currently lie in the, perhaps distant, future.

Consequentialism

This approach holds that the normative properties of an act depend only on the consequences of that act. Thus whether an act is considered morally right can be determined by examining the consequences of that act: either of the act itself (act utilitarianism, associated with Jeremy Bentham [2]) or of the existence of a general rule requiring acts of that kind (rule utilitarianism, often associated with John Stuart Mill [3]). This gives rise to the question of how the consequences are assessed. Both Bentham and Mill said it should be in terms of the greatest happiness of the greatest number, although Mill took a more refined view of what should count as happiness.** However, there are a number of problems associated with this notion, and many varieties of pluralistic consequentialism have been suggested as alternatives to hedonistic utilitarianism.

Equally, there are problems associated with which consequences need to be considered: the consequences of an action are often not determinate, and may ramify far into an unforeseeable future. However, criticisms based on the impossible requirement to calculate all consequences of each act for every person are based on a misunderstanding. The principle is not intended as a decision procedure but as a criterion for judging actions: Bentham wrote, "It is not to be expected that this process [the hedonic calculus] should be strictly pursued previously to every moral judgment" [4]. Despite the difficulty of determining all the actual consequences of an act (let alone a rule), there are usually good reasons to believe that an action (or rule) will increase or decrease general utility, and such reasons should guide the choices of a consequentialist agent. This has led some to distinguish between actualism and probabilism. On the latter view, actions are judged not against actual consequences, but against the expected consequences, given the probability of the various possible futures. Given our notion of a weakly ethical agent, the agent itself will be supplied with a utility function and will choose actions that attempt to maximise it. The choice of the utility function, and the manner in which consequences are calculated, will be the responsibility of the designer.
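
The probabilist view described above can be sketched as a simple expected-utility maximiser. The scenario, action names, outcome probabilities and utility values below are all hypothetical, standing in for whatever the designer would supply:

```python
def expected_utility(action, outcomes, utility):
    """Sum the utility of an action's possible outcomes, weighted by probability."""
    return sum(p * utility(o) for o, p in outcomes[action].items())

def choose_action(actions, outcomes, utility):
    """Probabilism: pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Hypothetical designer-supplied model: each action maps outcomes to probabilities.
outcomes = {
    "swerve":   {"minor_damage": 0.9, "serious_harm": 0.1},
    "continue": {"no_damage": 0.5, "serious_harm": 0.5},
}
# Hypothetical designer-supplied utility function over outcomes.
utility = {"no_damage": 10, "minor_damage": -1, "serious_harm": -100}.get

print(choose_action(["swerve", "continue"], outcomes, utility))  # swerve
```

Note that the agent itself is a mere maximiser; all the ethical content lives in the designer's choice of outcomes, probabilities and utilities.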

Deontological ethics

The key element of deontological ethics is that the moral worth of an action is judged by its conformity to a set of rules, irrespective of its consequences. One example is the ethical philosophy of Kant [5]; a more contemporary example is the work of Korsgaard [6]. The approach requires that it is possible to find a suitable, objective, set of moral principles. At the heart of Kant's work is the categorical imperative, the concept that one must act only according to that precept which he or she would will to become a universal law, so that the rules themselves are grounded in reason alone. Another way of generating the norms is offered by Rawls's Theory of Justice [7], in which the norms correspond to principles acceptable under a suitably described social contract. The principles advocated by Scanlon in [8] are those which no one could reasonably reject. Divine commands can offer another source of norms to believers. Given our weak notion of an ethical agent which requires only ethical behaviour, the rules to be followed will be chosen by the designer, and the agent itself will be a mere rule follower. Thus the agent itself will embody only a very unsophisticated part of deontology: any sophistication resides in the designer who develops the set of rules which the agent will follow.
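
A minimal sketch of such a "mere rule follower" might look as follows. The rules and action attributes are hypothetical stand-ins for whatever set the designer chooses; an action is permitted only if no rule forbids it, regardless of its consequences:

```python
# Hypothetical designer-supplied rules: each is a predicate that returns True
# when a proposed action violates it.
def forbids_deception(action):
    return action.get("deceives_user", False)

def forbids_harm(action):
    return action.get("harms_human", False)

RULES = [forbids_deception, forbids_harm]

def permitted(action, rules=RULES):
    """Deontological check: conformity to the rules, irrespective of consequences."""
    return not any(rule(action) for rule in rules)

print(permitted({"name": "warn_user", "harms_human": False}))      # True
print(permitted({"name": "mislead_user", "deceives_user": True}))  # False
```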

Problems of deontological ethics include the possibility of normative conflicts (a problem much addressed in AI and Law, e.g. [9]) and the fact that obeying a rule can have clearly undesirable consequences. There are many situations in which it can be considered wrong to obey a law of the land, and it is not hard to envisage situations where there are arguments that it would be wrong to obey a moral law also. Some of this may be handled by exceptions to the rules, which may be seen as modifications that legitimize violation of the general rule in certain prescribed circumstances. Exceptions abound in law and their representation has been much discussed in AI and Law: for example, exceptions to the US 4th Amendment in, e.g., [10] and [11]. Envisaging all exceptions, however, is as impossible as foreseeing all the consequences of an action. For this reason laws are often couched in vague terms (reasonable cause and the like) so that particular cases can be decided by the courts in the light of their particular facts. This will mean that whether the rule has been followed or not may require interpretation.

Virtue Ethics

Virtue ethics is the oldest of the three approaches and can be traced back to Plato, Aristotle and Confucius. Its modern re-emergence can be found in [12]. The basic idea here is that morally good actions will exemplify virtues and morally bad actions will exemplify vices. Traditional virtue ethics is based on the notion of Eudaimonia [13], usually translated as happiness or flourishing. The idea here is that virtues are traits which contribute to, or are a constituent of, Eudaimonia. Alternatives include: agent-based virtue ethics, which understands rightness in terms of good motivations and wrongness in terms of having bad (or insufficiently good) motives [14]; target-centered virtue ethics [15], which holds that we already have a passable idea of which traits are virtues and what they involve; and Platonist virtue ethics, inspired by the discussion of virtues in Plato's dialogues [16]. There is thus a wide variety of flavors of virtue ethics, but all of them have in common the idea that virtue ethics recognizes diverse kinds of moral reasons for action, and has some method (corresponding to phronesis (practical wisdom) in ancient Greek philosophy) for weighing these reasons when deciding how to act. Because there are few exemplars of implementations using the virtue ethics approach in agent systems, we will provide our own way of implementing a version of virtue-based ethics in an agent system, based on value-based practical reasoning [17], which shows how an agent can choose an action in the face of competing concerns. The various varieties of virtue ethics do, of course, have a lot more to them, but again, given that we are considering weakly ethical agents, these considerations and the particular conception of virtue will belong to the designers, who will implement their agents to behave in accordance with their notions of virtuous behavior, through the provision of a procedure for evaluating competing values.
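
A very thin sketch of value-based action selection might look as follows: each action promotes or demotes certain values, and the designer supplies an ordering over values as a crude stand-in for phronesis. All value and action names here are hypothetical, and this simplifies the value-based practical reasoning of [17] considerably:

```python
# Hypothetical designer-supplied value ordering, most to least important.
VALUE_ORDER = ["life", "honesty", "comfort"]

# Hypothetical actions annotated with the values they promote or demote.
ACTIONS = {
    "tell_truth": {"promotes": ["honesty"], "demotes": ["comfort"]},
    "white_lie":  {"promotes": ["comfort"], "demotes": ["honesty"]},
}

def rank(value):
    """Lower index means more important; unlisted values rank last."""
    return VALUE_ORDER.index(value) if value in VALUE_ORDER else len(VALUE_ORDER)

def best_action(actions):
    """Prefer the action whose best promoted value outranks the others'."""
    return min(actions, key=lambda a: min((rank(v) for v in actions[a]["promotes"]),
                                          default=len(VALUE_ORDER)))

print(best_action(ACTIONS))  # tell_truth
```

Here the competing concerns (honesty versus comfort) are resolved by the designer's value ordering, not by any reasoning in the agent itself, in keeping with our weak notion of an ethical agent.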

** Mill wrote in [3]: "It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied." Bentham, in contrast, stated: "If the quantity of pleasure be the same, pushpin is as good as poetry."

1. M. Minsky, Semantic Information Processing, MIT Press, 1968.

2. J. Bentham, The Rationale of Reward, John and HL Hunt, 1825.

3. J.S. Mill, Utilitarianism, Longmans, Green and Company, 1895.

4. J. Bentham, The Rationale of Reward, John and HL Hunt, 1825.

5. I. Kant, The Moral Law: Groundwork of the Metaphysics of Morals, first published 1785, Routledge, 2013.

6. C.M. Korsgaard, The Sources of Normativity, Cambridge University Press, 1996.

7. J. Rawls, A Theory of Justice, Harvard University Press, 1971.

8. T. Scanlon, What We Owe to Each Other, Harvard University Press, 1998.

9. H. Prakken, Logical Tools for Modelling Legal Argument, Kluwer Law and Philosophy Library, Dordrecht, 1997.

10. T. Bench-Capon, Relating values in a series of Supreme Court decisions, in: Proceedings of JURIX 2011, 2011, pp. 13–22.

11. T. Bench-Capon, S. Modgil, Norms and extended argumentation frameworks, in: Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law, ACM, 2019, pp. 174–178.

12. G.E.M. Anscombe, Modern moral philosophy, Philos. 33 (124) (1958) 1–19.

13. H. Rackham, Aristotle. The Eudemian Ethics, Harvard University Press, Cambridge, 1952.

14. M. Slote, The Impossibility of Perfection: Aristotle, Feminism, and the Complexities of Ethics, OUP USA, 2011.

15. C. Swanton, Virtue Ethics: A Pluralistic View, Clarendon Press, 2003.

16. J.M. Cooper, D.S. Hutchinson, et al., Plato: Complete Works, Hackett Publishing, 1997.

17. K. Atkinson, T. Bench-Capon, Practical reasoning as presumptive argumentation using action based alternating transition systems, Artificial Intelligence, 171 (10–15) (2007) 855–874.

If you liked this article, please feel free to clap for it. You can contact me through LinkedIn; I am always ready to have a meaningful conversation. If you want to keep reading the articles I write, please hit the follow button. You will be notified when I publish an article.

Ethical approaches and autonomous systems. was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.
