The appeal to an objective grounded in contingent human nature (perhaps expressed, for mathematical purposes, as a utility function or other decision-theoretic formalism), as providing the ultimate criterion of "Friendliness", is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended ...
With respect to moral reasoning, some consider humans to be suboptimal information processors, moral judges, and moral agents. [1] Due to stress or time constraints, people often fail to consider all the relevant factors and information necessary to make well-reasoned moral judgments; they also lack consistency and are prone to biases.
Two main approaches have been proposed to enable smart machines to render moral decisions: the bottom-up approach, in which machines learn ethical decisions by observing human behavior, without the need for formal rules or moral philosophies; and the top-down approach, which involves programming specific ethical principles into ...
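The contrast between the two approaches can be sketched in code. The following toy example is entirely hypothetical (the action names, the observed judgments, and both functions are illustrative assumptions, not any real system): the bottom-up function infers a verdict from observed human judgments, while the top-down function applies a hand-programmed rule.

```python
from collections import Counter

# Hypothetical observations of human judgments: (action, verdict) pairs.
OBSERVED_JUDGMENTS = [
    ("lie_to_protect", "permissible"),
    ("lie_to_protect", "permissible"),
    ("lie_for_gain", "impermissible"),
    ("lie_for_gain", "impermissible"),
    ("lie_for_gain", "permissible"),
]

def bottom_up_verdict(action):
    """Bottom-up: adopt the majority verdict humans gave for this action."""
    votes = Counter(v for a, v in OBSERVED_JUDGMENTS if a == action)
    return votes.most_common(1)[0][0] if votes else "unknown"

def top_down_verdict(action):
    """Top-down: a programmed (simplified, deontological) rule -
    any lying is impermissible, regardless of observed behavior."""
    return "impermissible" if action.startswith("lie") else "permissible"

print(bottom_up_verdict("lie_to_protect"))  # follows observed human judgments
print(top_down_verdict("lie_to_protect"))   # follows the fixed rule
```

Note how the two can disagree on the same action: the bottom-up learner inherits whatever pattern the observed humans exhibit, while the top-down rule enforces its principle even against majority human practice.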
Utilitarian Jeremy Bentham discussed some of the ways in which moral investigations are a science. [9] He criticized deontological ethics for failing to recognize that it needed to make the same presumptions as his science of morality in order to work, even while pursuing rules that were to be obeyed in every situation (something that worried Bentham).
Moral enhancement [1] (abbreviated ME [2]), also called moral bioenhancement (abbreviated MBE [3]), is the use of biomedical technology to morally improve individuals. MBE is a growing topic in neuroethics, a field developing the ethics of neuroscience as well as the neuroscience of ethics.
Technoethics (TE) is an interdisciplinary research area that draws on theories and methods from multiple knowledge domains (such as communications, social sciences, information studies, technology studies, applied ethics, and philosophy) to provide insights on ethical dimensions of technological systems and practices for advancing a technological society.
James H. Moor, one of the pioneering theoreticians in the field of computer ethics, defines four kinds of ethical robots. An extensive researcher in the philosophy of artificial intelligence, philosophy of mind, philosophy of science, and logic, Moor classifies machines as ethical impact agents, implicit ethical agents, explicit ethical agents, or full ethical agents.
The idea has philosophical roots in Hobbes (who claimed reasoning was "nothing more than reckoning"), Leibniz (who attempted to create a logical calculus of all human ideas), Hume (who thought perception could be reduced to "atomic impressions") and even Kant (who analyzed all experience as controlled by formal rules). [72]