Morality

<p>How we make moral choices concerning what is right and wrong is absolutely subjective, since there is no universal moral code. Everyone's view on morality is different, shaped by the aspects of their life that make up their identity. The moral code is therefore specific to the individual, which is why conflicts arise so frequently. The conscience that informs you whether an action is 'right' or 'wrong' differs from person to person.</p>

<p>What is your view on the subject of morality? Do you believe that somehow everyone's conscience is the same, or that it is diverse?</p>

<p>My primary ethical prerogatives mirror those of the existentialist and the egoist, although I also strongly support private property "rights" (and I agree with relativism -> subjectivism as well).</p>

<p>I ultimately reject utilitarianism, and so, in all the many thought experiments, I choose not to press the button / save the majority, etc.</p>

<p>I believe that morality isn't subjective, since most people adhere to the same basic morals: killing is wrong, stealing is wrong, etc.</p>

<p>The degree to which each is wrong, and whether smaller things are right or wrong, as well as what is right and wrong in certain situations, may differ between people, but generally people have the same morals. For example, people differ on whether abortion is right or wrong, while still believing that killing, in general, is wrong.</p>

<p>Morality is the herd instinct in the individual.</p>

<p>Of course the concept of morality differs from person to person... Conscience is built up from individual experiences and perceptions. However, the question of what is right and wrong is usually clear cut. The few controversies that do arise (such as abortion and euthanasia) are debates over which option is "more" right, just humans fighting for their say according to their own conscience.</p>

<p>I believe in relative morality (there are no absolute rights or wrongs), but I believe in utilitarianism (greatest good for the greatest number) as a general guideline for my personal moral code and support it as a construct for a social code (i.e. the law.) :)</p>

<p>Morality does not exist; only the basic instinct to survive does. It is a man-made device used to keep people in line, kind of like government. However, deep down, and in the most raw situations, the innate, yet latent, instincts come out and triumph over "morality."</p>

<p>If one acknowledges that there's no "good" and "bad," how can one claim to subscribe to utilitarianism (the greatest good for the greatest number)? It seems like a pretty glaring inconsistency in thought.</p>

<p>Sorry if I sound ignorant, but could some of you guys post the definitions of your philosophies, or belief systems?</p>

<p>@ Melancholy: Good point; that's why utilitarianism is so complicated. "Good" is impossible to quantify. I think that utility is inherently subjective, and I have to resort to using my own judgment to define it.</p>

<p>^Under your definition, utilitarianism falls apart. If good is impossible to quantify, how can you determine what is the "greatest good"? And if good is subjective, how can you claim to achieve good for others? Your definition of "good" might be different from theirs.</p>

<p>...Which is why I use utilitarianism as my own personal moral code. Like, the one with which my beliefs most closely align, and the one I would apply to a "moral" question. </p>

<p>And yeah, that sounds dumb given the context of a subjective "good" or utility. But there are some cases where "greatest good" could be easily defined, e.g. as "greatest number of lives saved" in the stereotypical "kill one person to save millions of others?" scenario. (Of course, this would be complicated if we were to evaluate the overall effects of these millions of people on the world... but it would still be a pretty straightforward answer.)</p>

<p>^Kill one person to save millions of others? It's not always that simple. What if that one person is Michelangelo, and the other million are suicidal, psychotic, sadistic freaks of nature who've killed their brain cells with crack? (Extreme example, I know.)</p>

<p>Also, if your code is personal and subjective, how can you use it to decide the fate of millions of others? If I were to decide tomorrow, as a personal, subjective decision, that death is good, that wouldn't be a major problem - I'm only hurting myself. But if I were a utilitarian, my code would force me to kill as many people as possible. I can't blame you for living by a subjective code, but you can't force your personal values on others.</p>

<p>I'm not trying to force my personal values on others, which is why I keep clarifying that I generally live my own life, and make my own decisions, in a way that is loosely aligned with a certain school of thought.</p>

<p>But utilitarianism, by definition, involves others. (It is rare that the "greatest number" will only include you.) It forces you to spread that which you value as "good" to as many other people as possible. I live by a subjective morality, but the decisions I make only affect my own life. If I am wrong, I will be the only one to pay for it. Your code, however, requires decisions that will affect others (see the above question of killing one person to save millions). When you use your subjective code to make such a decision, your "personal" code affects the lives of those millions. Sorry, but I don't see how you could call a code "personal" when it is centered around others.</p>

<p>"I live by a subjective morality, but the decisions I make only affect my own life. If I am wrong, I will be the only one to pay for it."</p>

<p>So what happens if you're forced to make a decision that involves other people, using your subjective code? </p>

<p>I doubt that either of us will be faced with a situation that involves choosing between one person dying or a million people dying. But either way, our decision would affect others... so I don't understand the difference between your decision, based on your subjective code, and my decision, based on my subjective code that favors the greatest utility. </p>

<p>An example of how I might use utilitarianism: Say I'm having a party and it's my decision as to what food we order. All of my friends really want to eat pizza, but I don't like pizza. I order it anyway because it will make the greatest number of people happy, even though it's not what I'd really like, and I have the power to order whatever I want. Sure, this decision isn't anything out of the ordinary -- it's the decent thing to do -- but you can't argue that it isn't aligned with utilitarianism.</p>

<p>In that case, it's fairly easy to define "good" as whatever will make your friends happy. But what if your friends wanted beer? It would probably make them happy, at least in the short term, but maybe you're worried about the possible consequences. (I believe there was a whole thread in HSL on the allure of drinking/getting drunk, which discusses these points in greater detail.) Can underage drinking ever be a "good" thing? It's clearly a subjective question, depending on the person and their situation. I couldn't use my personal definition of "goodness" to make this decision for my friends. Since "good" is such a subjective concept, I can't claim to work for the good of others. How can you?</p>

<p>I believe in shades of gray with morality. I think when our lives or personal well-being are not in danger, we tend to choose along the lines of conventional morality (hurting others is bad, helping others is good). That being said, who is to say that doing so isn't an evolved trait? Doesn't benevolence usually benefit the person bestowing it, in the long run? If I help you now, isn't it more likely that you will help me later? </p>

<p>Also, we tend to have difficulty with morality on a grand scale, where the beneficiaries of our moral choices aren't close to us. We all see those ads and commercials where, for some amount of money, we can provide food/shelter/medication/education for a child in a far-off country. I doubt a large percentage of us make contributions to such charities, but does that make us immoral?</p>

<p>@ StellaNova: I would say that drinking would have the least 'utility' given its possible consequences. </p>

<p>But now I'm starting to question utilitarianism after discussing it with a friend. Starting with the vague "kill one person to save a greater number" scenario, he reminded me that letting someone die is not necessarily killing them, using the example that if he were to watch someone bleed to death and not save them, he wouldn't have done anything to CAUSE their death. I could instantly see other scenarios, like how, if seven people were dying and each needed a vital organ, the "utilitarian doctor" might kill an innocent person to harvest their organs... greatest good for the greatest number... :/ So now I don't know. I have to think about it.</p>