ETHICS PART 3: VALUES
The concept of "values" conjures up an often confusing mixture of different and even conflicting ideas. We've heard of "family values", the value of a dollar, of a house, of love, and so on. Maybe we know a strange friend who places tremendous value on something like a stamp collection, or on what we view as a heap of junk in their basement. Can one person's junk really be another person's treasure? Are all values subjective, or can we identify some of them objectively? Values are the building blocks of ethics, as morality itself involves how to identify and achieve our values. Just as concepts were the base of epistemology, values are the base of ethics.
VALUES
Values are only possible to entities that are goal-oriented, and as we learned in Ethics Part 1, all life is goal-oriented. The concept of a value is not a primary, nor an axiom like those we used in metaphysics. It requires examination, for it presupposes the question, "of value to whom and for what?"
A value is something that a person or entity acts to gain and keep. In other words, you want to obtain values. But why do you want them? The answer is that your life, like the life of every living thing, faces a fundamental alternative: existence or non-existence. Life or death. All values ultimately originate because we have the possibility of dying. In order to live we need certain things, and those things are not guaranteed to us; we have to act in order to gain them, and act in order to keep them. We must seek out things like food and water, or we cease to exist. Food is not automatically given to us; if it were, it wouldn't be a value. Things that are automatically given are not values, but realities of existence. The element of an alternative must exist in order for there to be a value. Gravity, for instance, is not a value; it's a metaphysical reality. Gravity may be useful, or a requirement for certain things we value to exist, but it's not a value in itself, as there is no way to act to gain or keep it.
As babies, we first become aware of this reality through the pain/pleasure mechanisms built into our bodies. We don't like being hungry; it feels bad, and thus a baby will cry when it gets hungry. It doesn't yet understand what food or water is; it just knows that it wants them because of the pain/pleasure mechanism. Similarly, a baby doesn't like to be cold, hot, or stuck with a needle. It doesn't like a loud siren, nor the smell of rotten eggs. Later on, when the baby starts forming concepts, it will have a whole new set of values to explore (happiness, friendship, etc.), but they originate here, in the body's built-in drive to live. It's important to understand that no matter how complex our values get, they are only possible because alternatives exist in reality and because we face the possibility of living or dying. To clarify this point, let's consider a thought experiment: the "immortal robot".

THE IMMORTAL ROBOT
While a truly immortal robot isn't possible in our world, because even robots can break down and need things like electricity to function, we'll have to think about it abstractly, as though it could exist. Imagine a robot that could never be damaged or destroyed, and would live forever with an infinite power source. It has the ability to go anywhere and do anything, or go nowhere and do nothing. Neither would affect its "mortality". Now let's think about this robot's code of ethics. How would you recommend it live its life? What should it do? Should it go fishing, or go to a restaurant to enjoy a good meal? It has no need of nutrition, and thus no feeling of hunger or fullness, and no pleasure/pain mechanism to allow it to enjoy food. The robot has no concern about food one way or the other, so food is of no value to it. Should it go to the doctor for a checkup? Illness or injury isn't an issue, so something like a doctor wouldn't be needed or valued. Should it build itself a house? It needn't be concerned about the elements, so it has no need of shelter, nor of comforts like heat or air conditioning. As you can see, without the pleasure/pain mechanism, we have removed the possibility of satisfaction or dissatisfaction on the physical level. Things like food, health care, houses, a massage, etc. are of no value to the robot.
How about on the psychological level? Let's assume this robot is very advanced and can even form concepts like a human. Is gaining knowledge a value for the robot? Why, and for what purpose? The robot has no need to gain knowledge, as knowledge has no impact on achieving anything it needs. It might have a choice to learn about economics or not, but would doing so really impact its life? What would learning about economics accomplish? Would the robot value gold or a $100 bill? What should it buy? A Porsche? It has nowhere to go, and no means of feeling "excitement", since its life is never in danger. Go on a Hawaiian vacation to relax? Relax from what? It doesn't need to work. Besides, why would it enjoy sitting in the sun, when hot and cold are irrelevant to it? Maybe it could enjoy art or the beauty of a sunset? Art and esthetics are a result of holding certain values and the emotions that come with them, and the robot holds none.

Would it value having a friend, maybe another robot around to enjoy life with? To answer that, we need to analyze what friendship means (which was done in Epistemology Part 6). A friend is someone whom you esteem because they share at least some of your values. You can't be friends with a rock (unless you're delusional). In order to have a friend, you must first hold at least some values, and seek others who hold values. The robot has no values yet, so how could it have a friend? Which values would it look for in another robot, and how could another robot enhance its life? Could a robot be like Wall-E and fall in love with an "Eve"? As for pursuing a "lady" robot, without any pleasure to be had in that area, it also becomes pointless! How about happiness: can the robot pursue happiness? Happiness results from achieving one's values, so no happiness is possible or relevant for the robot. As you can see, because our robot lacks the fundamental alternative of life and death, no subsequent values are possible to it. The robot is doomed to an eternity of pointless activities and a life without meaning. To something that is indestructible, no values are possible. Values only come about because an entity is capable of being destroyed, and has the power to prevent it. This power to prevent destruction gives it a reason to act: to prevent destruction, to further its life. This is the ultimate goal, the ultimate value, and all subsequent values are derivatives of it. The concept of values can only come about because of the concept of life, and the threat of death.
GOOD AND EVIL
If life is the ultimate value, then one's own life is the standard of value by which everything else must be judged. That which furthers one's life is the good; that which threatens it is evil. Simpler organisms have no need of ethics because their manner of survival is already chosen for them. "Lower" animals don't have the ability to understand good and evil, nor does their survival require it. Animals that act on the level of percepts or sensations are programmed to respond to their environment while upholding their ultimate value (life), within their respective abilities. A fish will seek cover when it senses a threat, and a rattlesnake will instinctively coil up and prepare to strike at a perceived intruder. Even a squirrel, which seems to "think ahead" and gather nuts for the winter, is programmed to do that task. You don't see squirrels lounging about saying, "I think I'll just enjoy this summer, gathering nuts is too much work!" There may be instances, like a flood or a volcano, that are beyond an animal's capability to deal with, but then it has no choice about living or dying. An animal can be destroyed, but it cannot pursue its own destruction; it simply operates within its biological abilities. There is no such thing as a good or evil animal, even if we might consider puppies good and sharks evil! To be capable of good or evil requires that an entity have a choice about the matter.
Man is different. We are conceptual beings with free will, and thus are able to learn and project alternatives in our minds. We have the ability to choose among several different courses of action in any given situation. We do not always automatically pursue the path that is best for preserving our life, and we can often pursue actions that threaten it, or that harm us both physically and psychologically. Man can overdose on drugs or jump off a ledge, but he can also go to rehab and talk a friend down from a ledge. Man can create and support a totalitarian government that slaughters its own citizens, or write the US Constitution. In short, man has the ability to be good or evil.

MORALITY
While other animals needn't concern themselves with ethics (nor could they if they wanted to), we must, because of our conceptual nature. Since the proper path isn't given to us, we have to choose which values to accept and live by. This is the purpose of morality, which is a code of values accepted by choice. Morality is where we identify our values and how to achieve them. Every person follows some type of morality, whether they consciously identify it or not, as one is essential to even basic survival. Whether one's morality is consistent in furthering one's life is certainly not a given, and it must be studied and given much thought. Many people choose their morality by a "seat of the pants" approach, or just follow what is popular or dominant in their society. The results of these approaches are as varied as the number of people who try them, although few make much sense when examined closely.
The basis of one's morality must identify the fundamental value, which is the preservation and furtherance of one's life. That is the starting point from which the study of morality should begin. This doesn't mean that we should act on any impulse or thought that might seem to further our lives. It doesn't mean we should steal our neighbor's food so we can eat, use a gun to get what we want, or ignore our fellow humans. Our goal is not to further our existence as hyenas or sharks, but as humans. While we have the same goal of preserving and enhancing life, our means of achieving that goal are dramatically different. Because of our ability to reason, our goals are far more complex than simply getting our next meal or satisfying our immediate desires. Our values also include things like friendship, love, and happiness, which means a proper code of ethics must take those values into account. Thus, the study of human morality begins.
The next part of this series will expand upon the idea of values and dive further into ethics by examining virtues and principles.