
Wednesday, May 28, 2008

Altruism

This post is intended to demolish the view that there is no such thing as “true” altruism, that however noble our actions are, our true motives are always selfish.

First, the two main versions of this faulty view: The first is that whatever we do for others, we do because we expect something from them in return. The second, more subtle version is that whatever we do, we do because it makes us feel good, and we don’t really do it for others, but for this good feeling.

The general refutation will, I hope, lay to rest both forms of the view, but I’ll make this one point regarding the first: Even if we fully expect reciprocal (material or social) reward for what we do for others, there is no reason to think that that is the only, or even the main motive for what we do. When we say someone has “mixed motives”, we mean just that – part of the person’s motive is benevolent, part is selfish. If so, then there are benevolent motives, even if most of the time one is only partially motivated by them.


Now the main argument: Some people get pleasure from doing for others, and some people get pleasure from doing for self. Similarly, one’s conscience can cause one pain at the plight of others and spur one to action, or one can become so dissociated from one’s conscience as not to detect its call and command. One can train oneself, even if one doesn’t have the natural inclination, to take pleasure in the flourishing of others and to be pained by their distress (except in certain pathological cases). And if one can, I think you’ll agree, one ought to.

Now both the person who takes pleasure in altruism and the person who takes pleasure in self-indulgence take pleasure in what they do. But they are not equivalent – they are not both selfish. Thus we can distinguish between selfish pleasure and altruistic pleasure. It is in the definition of the refined character that one takes pleasure in the good of others. The naturally altruistic person will have an easier job of it, though will still have to combat the temptation to selfishness from time to time. The naturally selfish person who wishes to refine his or her character must take pains in order to develop this attitude.

In the end it will be worth it for the person, as excessive concern for the self naturally leads to a terribly stifling way of life, and the pleasure that can be taken in altruism is genuine.

Our opponent will then respond: “I am confirmed. I told you that we do everything for pleasure, and therefore even when one acts for others, one really has only one’s self-interest in mind. It may be, as you say, that the pleasure one gets in acting for others is better and deeper than the pleasure one gets in acting for oneself. That just means that if someone is really self-interested, it is in his or her own best interest to be concerned for others.”

I have already partially dealt with this point, having argued that people may be willing to experience pain in order to develop a benevolent attitude. But it is still fair to press the case that one does this calculating that, overall, it will result in greater personal pleasure (notwithstanding the fact that many people who make the decision to develop morally don't think they're doing so for this reason).

The more profound point is this: Why do people take pleasure in being good to others in the first place? (The neurological explanation is informative, but trivial.) People take pleasure in being good. To feel good is not the same as to feel good about yourself. One feels good about oneself when one does the right thing. This has nothing to do with satisfying one’s libido or gaining social status, etc. Sometimes, in order to be good, one must forget about oneself and surrender to more important things. A good person can only be happy when he or she performs good actions. Paradoxically, then, the confirmation of self sometimes requires an absolution of self. This is hardly the same as mere indulgence in pleasure.

Even if the pleasure that comes from the recognition of the moral good doesn’t have a different metaphysical source than selfish pleasure (though I think it does), it certainly has a different psychological source.

People judge altruism to be a masked form of self-indulgence in order to advance a certain amoral philosophy. But what is often forgotten is that the correct ordering and control of one’s appetites is a crucial part of the development of a moral character. One’s observance of morality is what determines whether a person is good or bad. Therefore a morally sensitive person will take pleasure in being good, will feel pain when a moral flaw is recognized, and will work to regulate his or her attitudes in accordance with his or her principles. The amoral person will not regard the morality of actions as relevant in determining which pleasures are to be prioritized, but only how good they feel.

If one takes pleasure in altruism, then that person deserves our highest respect, and we shouldn't think that our respect is a necessary motivator for that person, or a condition of his or her happiness.

Sunday, May 18, 2008

Good = The Will of God?

This is an adaptation and extension of an argument by Plato, in the dialogue Euthyphro.

Let us say, as many do, that the definition of The Good is nothing more and nothing less than the Will of God. That would mean that whatever world God creates would be, by definition, perfectly good. Our world, then, is no better than any other world God could have created – any world He creates must, by definition, be perfectly good. That means that a world far more miserable than ours, e.g., one where suffering is not justified, one in which there is no prospect of salvation, would be just as good as ours, if God were to decide to create such a world. Equally, for those who believe that all morality requires God as moral arbiter, whatever morality God decides on is objectively moral, even if it were the opposite of justice and morality as we know them, even if it would promote dishonesty, selfishness, unjust privilege and liability, etc.

In such a case, though we may be beholden to God and subject to His reward and punishment, there is nothing “good” about the world or about moral behavior as dictated by the command of God. That is to say, though perhaps we would be grateful that God created us at all, we should not be grateful that He created a world for us that is good. Any world God would create is good by definition.

But the faithful believe that not only is God the creator, not only does He write the rules of the game, but that the rules are fair, just, and good.

Conclusion: There is a criterion for what counts as good independent of God and God’s will. Thus, if God is truly good, He is good according to a criterion He does not Himself create.

Friday, January 25, 2008

Voting: Meaningless or Downright Wrong

If the election is won by a landslide, then your one vote, and all fifty of your friends' votes, are meaningless. Every vote counts? Nice dream, but this is a game of numbers, and those kinds of numbers don't make a difference.

The only way your vote has any chance at all to count is if the race is neck and neck. And in that case the vote does NOT REPRESENT WHAT AMERICANS WANT; it means that ALMOST HALF of the nation is getting screwed. In a landslide, where your own vote doesn't count, at least the election as a whole vaguely represents what the people want. If the race is close, then perhaps your vote does count, but it means that the system is not accomplishing what it set out to do: represent the American people.
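To make the arithmetic concrete, here is a minimal sketch in Python (the vote totals are hypothetical, chosen purely for illustration): a single additional ballot can change the outcome only when everyone else's ballots are tied or within one vote of a tie.

def outcome(votes_a, votes_b):
    # Return the winner given everyone else's ballots, or "tie".
    if votes_a > votes_b:
        return "A"
    if votes_b > votes_a:
        return "B"
    return "tie"

def my_vote_is_pivotal(others_a, others_b, my_pick="A"):
    # True only if adding my single ballot changes the result.
    before = outcome(others_a, others_b)
    after = outcome(others_a + (my_pick == "A"), others_b + (my_pick == "B"))
    return before != after

# Landslide (hypothetical numbers): my ballot changes nothing.
print(my_vote_is_pivotal(60_000_000, 40_000_000))   # False

# Dead heat: only here does one ballot decide the election.
print(my_vote_is_pivotal(50_000_000, 50_000_000))   # True

Either the margin is large and your ballot sits idle, or the margin is within a vote and, as argued above, nearly half the country loses either way.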

So you have your choice.

"On Election Day, I stayed home. And I did essentially what you did. The only difference is that, when I got finished masturbating, I had a little something to show for it."
-George Carlin

Sunday, October 7, 2007

Stupidities of Science pt. 2: On positive and negative charge

It is extremely stupid that an excess of electrons should be called negative charge and a deficiency of them called positive charge in current scientific language. When they teach you about it in school they tell you, if they try to justify it at all, that because the mathematical and utilitarian results are the same whether you're dealing with positive or negative numbers, it doesn't make a difference which you call which. They tell you that it's arbitrary. I disagree. It does make a difference conceptually. It is not arbitrary that we associate the addition of something with positive numbers and the subtraction of something with negative numbers. That’s what’s happening, and the language in which it is described should reflect what’s happening. Science isn’t merely a tool for industry; it is supposed to be an accurate picture of what is going on, a picture that takes what’s happening in the world and represents it most accurately to the one kind of being to which pictures are significant: the mind.
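To put the complaint in concrete terms, here is a minimal sketch in Python (the function names are mine, purely illustrative, not any standard library) contrasting the two ways of keeping the books:

# Standard convention: charge, in units of e, is protons minus electrons,
# so ADDING electrons drives the number DOWN (a negative charge).
def charge_standard(protons, electrons):
    return protons - electrons

# The convention argued for here: count the added particles positively,
# so ADDING electrons drives the number UP.
def charge_proposed(protons, electrons):
    return electrons - protons

# A neutral oxygen atom (8 protons, 8 electrons) gains two electrons:
print(charge_standard(8, 10))   # -2: an addition recorded as a negative number
print(charge_proposed(8, 10))   # +2: an addition recorded as an addition

Both functions balance the same equations; the objection is only to which one mirrors the physical event.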

“But,” one may argue, “work with it for long enough and you conceptually get what’s going on anyway. Once you learn the language, even though it’s not intuitive, you get the picture. You see a positive charge as a deficiency of electrons even though the event that happens in nature is a subtraction.” True, once we’ve worked with it for a while we no longer have to do the extra step of reversing the signs in our heads in order to relate to the physical world with an accurate concept, but why should we have to do it while learning? And if we never did it while learning, and just studiously balanced the equations without regard for proper conceptualization, how well do we relate to the accurate picture even after working with the conventional model for years?

The same issue arises with the conventional representation of electrical current as flowing from positive to negative, rather than in the direction the electrons actually flow, from negative to positive. Now, in all the rest of physics energy does flow from point of excess to point of deficiency, as heat flows from high temperature to low temperature, as fluid flows from high pressure to low pressure, as water flows from high elevation (potential energy) to low. This is in fact the reason that conventional current is represented this way. The same early researchers who concluded that what we now call positive charge was an excess, rather than the deficiency it actually is, also concluded that it was the thing that flowed rather than the thing being flowed into. So there is a point in representing electricity as flowing the wrong way: a flow from positive to negative is intuitive. But in light of what has been said above, it should be clear why this is wrong. If we regard the electron as the electrically positive charge, then we can represent the actual flow of electrons from point of high concentration to point of low concentration and retain the intuitiveness of positive-to-negative in the numerical representation.
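The sign bookkeeping behind this fits in a few lines. A rough one-dimensional sketch in Python (the magnitudes are round figures for scale, not measurements): current density is carrier density times carrier charge times drift velocity, so when the electron is booked as a negative charge, the computed current points opposite to the way the electrons actually move.

# One-dimensional illustration of J = n * q * v (current density).
# Only the signs matter here; the magnitudes are round figures for scale.
n = 8.5e28          # carrier density in a metal, per cubic meter
e = 1.602e-19       # elementary charge, coulombs
v_drift = 1.0e-4    # electrons drifting in the +x direction, meters per second

J_conventional = n * (-e) * v_drift   # electron booked as a negative charge
J_flipped      = n * (+e) * v_drift   # electron booked as a positive charge

print(J_conventional)  # negative: the "current" points in -x, opposite the electrons
print(J_flipped)       # positive: the current points in +x, the way the electrons move

Flip the sign assigned to the electron, as suggested above, and the arrow on the circuit diagram and the direction of the moving particles coincide.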

To me, this is obvious. I do not see how it is not obvious to anyone who has ever given it any thought, viz. the scientists. I understand there are historical reasons why this convention has been used all these years, but there is no sufficient justification for its continued use given what we know today and have in fact known for quite some time.

When scientists decided that Pluto wasn’t a planet anymore, they said so, and within a week no one was fighting about it anymore. In the name of accuracy we change what we call things. I don’t think it will be hard to learn, even for people who have been working with the conventional model all their lives. In fact, I am confident that, once learned, the correct system will be easier to work with, for expert and beginner alike (though even if it were decidedly harder, it would still be proper to use the more accurate models; it is harder to learn to solve gravity equations using Einstein's methods than Newton's, but we do it). Most of the people in the fields that have to think about this are pretty smart. Smart enough to learn to change the way they write things subtly, and hopefully smart enough to see why it is sensible and correct to do so.

So here’s what I suggest. From now on, when you do chemistry homework or write chemistry papers and books and so on, state at the top or in the introduction that you’re using the more accurate conventions and leaving the old ones behind, stay consistent throughout your work, and let your teachers and students learn the new conventions on the fly, even become comfortable with them. Your answers and results are correct even if not stated in the conventional language, and your teachers and students will have to acknowledge that. And in electricity classes, from the basics onward (and if the basics have not been taught this way, it can be introduced at any level; again, we're dealing with smart people here), instead of teaching “conventional current flow” and incidentally showing “physical current flow” because that’s the way it really works, teach the correct “physical” way and incidentally show the “conventional” way, because your better students will have to deal with the historical conventions in historical texts.

Monday, June 25, 2007

The atom cannot be divided

The idea of atoms goes back to Greek times, to at least 400 BCE. The meaning of the word "atom" is, basically, "indivisible thing". The theory states that if you break something down into its fundamental constituents, you will reach a point where you can't break it down any more. There is a bottom level to the analysis of a thing into its parts.
In modern chemistry, born in the early 1800s, the term atom has been taken to mean a certain cluster of matter, one that during the early to mid 1800s was thought to be the bottom level of matter, not analyzable into smaller particles. But in fact, "atoms" in this sense are broken down into protons, neutrons, and electrons, and these into quarks, strings, who knows?
The use of the word "atom" in science today is inappropriate. We are still in search of the atom. For, by definition, "atom" means the lowest level; if something is not the lowest level, it can still be broken down further, and it is not the atom.
The improper understanding of this idea is what gets me pissed off in this scenario: a scientist says that the fact that the "atom" is made up of more fundamental particles shows that Democritus' ancient atomic theory was false, since atoms have been found to be divisible, and the ancient theory clearly states that the atom is not divisible. This is wrong: the atom of Dalton, Mendeleev, and Rutherford simply did not turn out to match the original concept of the atom. Dalton and the founders of modern chemistry chose the name of the particle prematurely. Perhaps what we now call the quark is the atom? Perhaps the superstring? Democritus would still be vindicated if reality turns out to consist of such elements. The atom cannot be divided.

Sunday, April 29, 2007

A Fallacy in Brain Science

Recently, while I was revisiting in a new context the classical discussion concerning the nature of altruism (that is, whether we humans actually do things for other people or whether our motives are always selfish), someone submitted a comment which I believe contains a fallacy common among contemporary students of human nature.

The framework for the classical discussion is usually this: On the one hand, it is patently obvious that people do things for other people all the time. This is taken to the extreme – but extremes are allowable in discussions of this sort – when one is willing to die for someone else, which, on the surface anyway, offers little or no benefit to oneself. On the other hand, one always has an internal motive for acting for another: to improve one's reputation, or to gain favors, or because it just plain makes the person feel good to act for another. Even this good feeling is thought of as a selfish thing on this view. Furthermore, in a case where one is willing to die for another, it is, on this view, a result of the person calculating selfishly that death is preferable to living with guilt or shame or without the other person, etc. There are powerful and interesting arguments on both sides, which may make interesting material for a future blog post.

Well, this time around, one of the discussants offered this argument from neuroscience/biopsychology. Altruism has been discovered in the brain, he claimed. There is a region in the brain that fires during altruistic acts, or that is better developed in altruistic people than in more selfish people, etc. This was submitted as evidence that true altruism does exist, since it has been physically discovered in the brain.

The fallacy is that the discovery of brain phenomena is no evidence one way or another for the existence of psychological or behavioral phenomena. Imagine that I get to observe a person for a week, with the limitation that I can only see that person's brain. I see section X of the brain fire in pattern Y. I would have absolutely no idea what the person is doing/thinking/feeling unless I have already linked that firing pattern with that non-neural activity (and probably even in such a case, but let's ignore that). Imagine then that I have only observed brains and never made those connections with non-neural states. It is clear that I would have no idea of the connection between the brain phenomena and "human" phenomena without also having observed the human activity. So in order to find altruism, or anything else of the sort, in the brain, I must already have in mind a certain class of human activity, must already have judged those things as altruistic, and only then can I make the link to brain activity. Once I have made these judgments, even before I find it in the brain, I already know the phenomenon exists, and the view from the brain merely offers me another perspective, indeed an informative one, on it. Conversely, if we judge that this activity is not altruistic, then the associated firing pattern in the brain must be judged not to be linked with altruism.

It is a common error in a common contemporary worldview to think that phenomena of human nature only exist if they can be found in the brain. If we start with the brain, as we have seen, we don't know what any of its phenomena mean unless we observe the associated body/mind phenomena. And if we start with the body/mind phenomena, then we don't need the brain phenomena to confirm their existence. Even if we have them, they prove nothing vis-à-vis the mind/body phenomena in question.