
On the evolution of moral beliefs


I’m a moral realist. That means that I think there really are some moral facts. It is wrong to do some things, and it is right to do some things, and this isn’t just a vent of emotion or an expression of my will, it’s really true. Stephen Law is also a moral realist, but if I’m reading him rightly in his debate with William Lane Craig on the existence of God or in his more recent discussion with me on the Unbelievable radio show where I discussed the moral argument for theism, he’d sooner give up moral realism than accept theism.

An argument I sketched in that discussion was that the best way to explain moral facts is by reference to God. Although he does currently believe in moral facts, he noted that they may not be there after all, so maybe there’s no reason to invoke God as an explanation. After all, he said, we can come up with an evolutionary explanation of why we would believe in moral facts whether they really existed or not. Law wants to be careful here. At the time I raised the concern that this may just be a case of the genetic fallacy, offering an explanation of where a belief came from as though this showed or suggested that the belief is false. But this isn’t what Law means to say, he replied. The point is not that the existence of an evolutionary account of why moral beliefs exist shows that those beliefs are false. That would indeed be the genetic fallacy at work. No, the point is that whether those beliefs are true or false, there exists the same evolutionary account for why we hold them – and that account is unaffected by their truth or falsehood. There is thus no particular reason to think that the evolutionary processes that brought them into being are likely to produce true-belief-forming processes.

While this line of argument does not purport to show that the moral beliefs we hold aren’t true, it’s meant to cast doubt on the probability that the process that gave rise to these beliefs (or at least the process that gave rise to the relevant belief forming processes) is likely to result in either true beliefs or reliable belief forming faculties. It’s best to think in terms of the latter, if only because it’s downright bizarre to think that evolution forms beliefs. It plainly doesn’t, but it does form mechanisms or processes that creatures use to form beliefs.

So what should we make of this? First, can we give an evolutionary account of why we would believe in moral facts, an account that is blind to the actual existence of those facts? Secondly, if we could give an account like this, would it undermine the probability that the processes that form those beliefs are reliable? I will give two answers: Yes, it is trivially true that we can give an account like this, and no, the fact that we can do so should not undermine our confidence in the belief-forming processes that form moral beliefs. In doing so I will be drawing on an argument by Alvin Plantinga, namely the “evolutionary argument against naturalism.” While I am inclined to think that argument is unsound, many of the insights that it draws attention to are true nonetheless.

It’s trivially true that we can put our imagination to work and come up with an evolutionary story about why it is we form beliefs about moral facts – a story that isn’t concerned with whether or not those beliefs are true. I say that it’s “trivially” true because it’s obviously true, and because we can actually engage in this sort of story-telling for a wide range of belief types, not just moral beliefs. In any evolutionary account of how a given creaturely function came into being, we’re explaining why there might have existed an adaptive advantage for that creature to have that function, and hence why that function (or change in function) might have been preserved. The emphasis falls on whether or not a function confers an adaptive advantage, since the only thing that evolutionary development “cares” about is producing creatures that are better at surviving and reproducing. Since this is true of all creaturely functions, systems, parts or processes for which we wish to provide an evolutionary account, it is also true of our belief forming apparatus and processes. This can be a fairly strange concept to those approaching the issue for the first time, but bluntly stated: Evolution just doesn’t care (in principle) whether or not our beliefs are true. (The caveat “in principle” is important, and it has to do with why I ultimately don’t agree with Plantinga – or Law, but I’ll comment on that soon.) Just as long as we can come up with a story about how holding a certain type of belief might confer an adaptive advantage – quite apart from whether or not the belief is true – we’ve met the challenge, and we’ve told an evolutionary story about why we would hold beliefs even if they weren’t true.

The key thing to note about Law’s suggested argument is that it involves a break between evolutionary development and the development of reliable belief forming processes. The two don’t go together – or at least it is suggested that how closely they go together may well be inscrutable. Can we tell this sort of story about moral beliefs? Well actually, Dr Law didn’t exactly tell us how we might do that – but he was pretty sure that we could. And he’s right – we can. Before I give some examples of how that story might go, let’s look at other kinds of stories that we might tell. The fact is that in any given scenario, there is a vast array of false beliefs that would very probably give rise to behaviour that would be good in terms of adaptation and survival. Alvin Plantinga asked us to consider the example of Paul, a member of a species very much like ours (although in a pre-civilised age), on a planet very much like ours. Paul is confronted by a tiger. The most adaptive behaviour, let’s agree, is for Paul to flee as quickly as possible. But of course there are far more false beliefs than true beliefs that would get Paul to run away.

Perhaps Paul very much likes the idea of being eaten, but when he sees a tiger, always runs off looking for a better prospect, because he thinks it unlikely that the tiger he sees will eat him. This will get his body parts in the right place so far as survival is concerned, without involving much by way of true belief. . . . . Or perhaps he thinks the tiger is a large, friendly, cuddly pussycat and wants to pet it; but he also believes that the best way to pet it is to run away from it. . . . or perhaps he thinks the tiger is a regularly recurring illusion, and, hoping to keep his weight down, has formed the resolution to run a mile at top speed whenever presented with such an illusion; or perhaps he thinks he is about to take part in a 1600 meter race, wants to win, and believes the appearance of the tiger is the starting signal; or perhaps . . . . Clearly there are any number of belief-cum-desire systems that equally fit a given bit of behavior.
[Alvin Plantinga, Warrant and Proper Function (New York: Oxford University Press, 1993), 225-226]

Once we realise what kind of examples will do the job, we see straight away that the same kind of exercise can be done with moral beliefs. Presumably what Law was getting at is that morality is good for our survival. It manifests in behaviour that benefits our species, and that behaviour could well have come about in such a way that the processes that form the belief aren’t reliable in terms of true belief formation. Take a plausible moral belief: It’s wrong to torture people to death. Acting on this belief may play some role in the survival strength of the species that comes to hold it (or on the other hand it plausibly might not, since only the strongest and fittest will be in the position to torture others). But the rise of this belief could go hand in hand with all kinds of behaviours. What sorts of beliefs might reinforce this behaviour (of not torturing people) over time, to the point where it became a sort of social taboo – the herd mentality that came to be enshrined as moral fact? (This is the type of scenario that Law is painting when he talks about accounts of false beliefs coming to be regarded as true over time.) As with Plantinga’s tiger scenario, the options are more or less endless. If we – or at least our ancestors – believed that when we torture people we lose the ability to breathe and die, then they would likely not torture people. Similarly, if they believed that their children would all be stillborn if they ever tortured anyone, they would likely not do it. True, if they themselves were afraid of being tortured and they reasoned that if torture became normal then they themselves might one day be tortured, then they would also be less likely to do it. The point is just that there is a veritable smorgasbord of possibilities when it comes to giving an account of how beliefs and behaviour might come together when the driving factor is survival and reproduction, rather than the acquisition of true beliefs.

So yes, what Stephen says is trivially true. We can come up with an evolutionary story about why we might hold all kinds of beliefs – including moral beliefs – even though those beliefs might be false. In this regard moral beliefs are not different from all sorts of other beliefs we hold about the world. That’s point one.

Point two: But surely just being able to tell a story about a given belief is one thing. Painting a broader picture in which the processes that we use to form beliefs are unreliable in general is quite another. If the wider picture of our developing epistemology is one that favours belief forming processes that are reliable in terms of the production of true beliefs, then the fact that we can tell funny stories about how isolated beliefs might have formed becomes a bit of a side-show. Giving an explanation of how something came to be is not the same as giving the most plausible explanation of how it came to be. Let’s go back to Plantinga’s tiger example. It’s true that if Paul thought that the appearance of a tiger signalled the start of a race and he really wanted to win that race, then the appearance of a tiger might get him running. But why on earth would he start putting one foot in front of the other, or exerting greater effort than before? What is he acting on when he does this? Not just the belief that a tiger signals the start of a race, or his desire to win the race. More is needed. What is needed are some fairly fundamental beliefs about what will happen when Paul interacts with the world in a certain way: beliefs about the general effects of gravity (even if Paul doesn’t know it by that name, or much in detail about what it does), beliefs about the actions and reactions between his body and the environment. Obviously Paul wouldn’t start running if he believed that putting his foot on the ground with considerable pressure would cause it to explode (to use an equally silly example). In addition to basic beliefs about how physical interaction takes place, Paul would also need to have some beliefs grounded in inductive reasoning. Sure, moving his legs in a certain way yesterday and a couple of weeks ago might have gotten him moving in the intended direction at high speed, but what does that have to do with what will happen today? For the example to even get off the ground, then, we’ve got to have belief forming processes about the way the physical world works and the way inductive reasoning is done that are, to a reasonable degree, reliable. And then we’ve also got the issue of how Paul knows the object before him is the same sort of thing as any of the other things that produce a similar visual image. I suppose there’s a bit of induction involved here again, in addition to arguably more abstract reasoning involving classification.


Perhaps the lesson that Plantinga could still teach us, however, is that while adaptability is better served by belief forming structures that are, in large part, reliable, more abstract theories (like metaphysical naturalism) might have little survival value or survival impediment, and so the probability of belief forming structures that form beliefs like that being reliable is inscrutable. Even that seems a bit hard to swallow, however, unless we suppose that we have parallel belief forming structures that do not depend on each other: one structure for all the necessary beliefs for survival, and another for the beliefs that are more of a kind of intellectual luxury, abstruse theories about the meaning of life and so on. It strikes me as more economical to think that the same set of belief forming equipment forms beliefs about a whole range of subjects, the mundane and the cosmic.

But if the above is true, and if morality (unlike, say, metaphysical naturalism) is a pervasive kind of belief held by human beings, then it is likely easier to tell a story in terms of evolutionary development where we came to hold moral beliefs because some of them are true than it is to tell a story about how we came to hold moral beliefs that were all false. What’s particularly interesting is that Law himself rejects Plantinga’s argument by arguing that actually the evolutionary development of our belief forming structures generally (although of course not flawlessly) favours structures that give rise to true beliefs. Perhaps the facts can change as needed in the unholy war against faith!

Glenn Peoples


32 Comments

  1. … Perhaps the facts can change as needed in the unholy war against faith!

    Of course they can; that’s the easiest change in the world.

  2. matt

    I recall Plantinga saying, in his exchange with Law, something to the effect that as far as the EAAN goes we shouldn’t invoke what the world is like, but what the world would be like on Naturalism, so that the Naturalist cannot beg the question by saying “oh, but that’s not what belief forming structures are like!” or “That’s not how evolution works!”

  3. But if the above is true, and if morality (unlike, say, metaphysical naturalism) is a pervasive kind of belief held by human beings, then it is likely easier to tell a story in terms of evolutionary development where we came to hold moral beliefs because some of them are true than it is to tell a story about how we came to hold moral beliefs that were all false

    Why? You lost me on this one. It seems we should expect what Darwin called our “lowly origin” in arguments like this one to provide us with very good tools for getting true answers when truth aligns with survival or reproduction and less so other times.

    If there were moral truths in the universe (I don’t think there are…) then I’m not sure why they’d line up with such advantages.

    (Oh, and if you’ll allow a little pedantry, you have to be careful with talk of “survival strength of the species”. Natural selection acts on individuals, not (really) species.)

  4. David, what I said there follows if my analysis of the relationship between true belief and adaptive behaviour holds. This is because if that relationship does hold, then the development of belief forming processes (and not individual beliefs) is likely to have proceeded in a way that favours reliability, and since moral beliefs are a product of those belief forming structures, an account of how moral beliefs came about is made more plausible when couched in a wider theory in which belief forming structures have a tendency to be reliable. That’s the answer to your “why?”

    I realise that “survival of the fittest” acts on individuals, but as given away by your caveat of “(really)”, we both see how it works on species too. A new group of individuals will emerge that has traits that are good for survival, and this new “group” (read: a new stage in the development of a species) will prevail over the older group. But as you said – pedantry. 🙂

  5. I should add: Nothing I’ve said here implies that there just couldn’t be any true accounts of how some beliefs came to be formed even though they aren’t true. That’s much bolder than anything I’ve said. But if what I’ve said here is true, anyone who maintains that some beliefs – beliefs that are fairly pervasively held by human beings – were formed by a belief-forming process brought about by evolution that is likely to be prima facie unreliable would have the task of explaining how a separate set of belief forming “equipment” might have come about, a set that basically does the job of forming untrue beliefs (or beliefs that are likely not true).

  6. Sure… but having read your post I can’t really see a good reason why true belief should be, for the most part, adaptive. The idea also seems to be at odds with the reality of the way our brains work. Our ‘belief forming structures’ are pretty terrible at forming true beliefs about probability (see what happens every time someone first runs into the Monty Hall Problem…) and any number of other ideas.

    And, even if it is really an aside, what you described in your comment above is not species-level selection or anything to do with “the survival advantage of the species”. There are a few folks that would like to return to group selection for these sorts of questions, but talking about the survival of the species is pretty much a red flag to an evolutionary biologist.
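    [An aside on the Monty Hall Problem David mentions above, since it is a nice concrete case of the claim that our intuitions handle probability badly: the counter-intuitive result is that switching doors wins about two thirds of the time, while staying wins only about one third. The Python sketch below is not part of the original discussion; it simply simulates the game to show those numbers.]

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round of the Monty Hall game; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that hides a goat and is not the player's pick.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # The player switches to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
stay = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"win rate if you stay:   {stay:.3f}")   # roughly 0.333
print(f"win rate if you switch: {swap:.3f}")   # roughly 0.667
```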

  7. “I can’t really see a good reason why true belief should be, for the most part, adaptive.”

    How unexpected! Here I am suggesting that Plantinga’s commendation of “Darwin’s doubt” (as he called it) is likely wrong, while the actual believer in naturalism and evolution is saying otherwise! So now it’s Winter vs Law on whether or not the naturalist should doubt the reliability of his belief forming structures – or at least on whether or not the evolutionary production of those structures should offer us any hope that they’re reliable.

    David, I’ve offered some comments on why I suppose that reliable belief forming structures are likely to be better in terms of promoting survival. You don’t really seem to have much to say about those comments, so I’ll just take this as you registering your disagreement and leaving it at that – which of course is totally fine (if we had to present our explanations of why we disagree with everything we disagree with, we’d never get anything else done).

  8. We crossed the beams on this one:

    they would have the task of explaining how a separate set of belief forming “equipment” might have come about, a set that basically does the job of forming untrue beliefs (or beliefs that are likely not true).

    I’m still scratching my head here. I don’t think Truth has anything to do with evolution. The universe is such that sometimes being right is adaptive (it’s good to recognise your wife, and to have a model of physics that lets you work out what will happen if you jump off that cliff), sometimes it makes no difference (should I stick or switch when I get on Monty Hall’s TV program?).

    On naturalism, surely we’d think our brains would develop in a way that they find truth when it matters, and miss it when it doesn’t? I don’t really see how anyone can be a naturalist and think there are moral truths, because our moral instincts certainly seem like something that “works” rather than something that exists outside of our brains.

  9. sam g

    the milgram experiment, and history (khmer rouge, nazis), show that everyday people like you and me are capable of torturing and killing people in the right conditions. when discussing the evolution of a sense of morality, i think all we’ve really evolved is social instinct, because if the social setting changes so can our beliefs of right and wrong. we seem to give priority to being accepted by our group, and are able to justify anything to satisfy this. at the most we’ve evolved a belief that there IS right and wrong because it ensures that we are more dedicated to the group if we are right and they are wrong. what that standard might actually be depends entirely on the situation and i have never heard a ‘story’ why we should have evolved an independent moral compass.

    you’ve based this entire piece on your moral realist position because you assume that there is a right and wrong ‘reason’, ie that a moral belief is either right or wrong, or that there is a right and wrong reason to run away from a tiger, which there isn’t. chili evolved its burning sensation so that it wouldn’t be eaten by animals, but when an animal came on the scene which both wanted to season its food and was capable of cultivation (us) it was this burning sensation which ensured the flourishing of the species. so is the right reason for chili evolving a burning sensation to be eaten or to not be eaten? there is no answer because the question is meaningless. if humans run away from tigers because they think it signals the start of a race, and this ensures the survival of the species, then that is a good enough reason. to ask if it is the ‘right’ reason is a ridiculous question with no answer, just like asking if our moral beliefs are true, or our beliefs about gravity or matter: in evolutionary theory any belief that works is ‘right’ enough.

  10. “I’m still scratching my head here. I don’t think Truth has anything to do with evolution.”

    Yeah, I gathered that. More precisely (to stay on topic), I assume you mean that you don’t believe that a tendency to form true beliefs has anything to do with the evolutionary fitness of belief forming structures (whereas a naturalist like Law rejects Plantinga’s argument because he thinks the contrary). Again, I’ve said a few words about why I think otherwise (in this blog entry), but you don’t comment on what I say there, so I’m not sure what to add.

  11. SpiritualKiss

    Hi Glenn,

    I had the pleasure of listening to you debate Dr Stephen Law on the Unbelievable show this week. To actually hear Dr Law pressed on a number of issues relating to the Evil God Challenge was of great help, and exposed (for me) something of what is behind Dr Law’s arguments and assumptions, which have not as yet had an airing in other discussions.

    One of the biggest frustrations for me has been Dr Law’s suggestion that most, if not all, of his interlocutors have simply misunderstood his argument – or underestimated it. This has worn a bit thin now, and I have come to think that the force of its implications exists principally in the mind of its creator. Indeed, what is more likely: that many philosophers such as yourself do not properly grasp the EGC, or worse, bear Dr Law ill will; or that they understand the EGC, yet disagree with its conclusions – something that totally flabbergasts Dr Law, as he demonstrated in your debate? For me, the evidence is starting to point away from his view of things. Let me say that I have great respect for Dr Law and I hope that you continue to have many fruitful discussions!

    I am looking forward to your written response to EGC which I understand you said you would be publishing soon.

    Regards, SK.

  12. Thanks SpiritualKiss.

    I wouldn’t want anyone to think I had everything my way in that discussion. There were a couple of things that I realised too late in the discussion that I had not adequately conveyed to Dr Law (and perhaps to listeners), largely concerning a discussion he and I have now had in my previous blog entry where I announced that this discussion was going to be happening. But yes, I share your view over the allegedly repetitive misunderstandings that people have about Stephen’s argument – misunderstandings that, he says, irritate him. It’s my view that rather than misunderstandings (alleged misunderstandings that Bill Craig and I reached quite independently, in spite of Stephen’s suggestion that it is spreading like a virus on the internet), these observations may well be observations of assumptions that the evil-god challenge makes without Stephen realising it.

  13. I was once watching an ant toodle around, as ants do. Then, suddenly, a small wasp, much smaller than the ant, started hovering over it — and the ant immediately ran away, in a straight line (in fact, on the edge of a board) rather than in the random loops in which she’d been wandering before, while the wasp pursued her.

    Did “evolution” give the ant “belief forming apparatus and processes”, which, moreover, generally produce true beliefs? Did the ant have any beliefs at all about the wasp and her designs upon the ant, much less true beliefs? For that matter, did the wasp even have designs upon the ant?

  14. Glenn, two points:

    1) I don’t think we can easily declare that either our cognitive faculties reliably produce true beliefs in all the belief-kinds they do produce, or don’t at all. The naturalist does have a plausible reason as to why moral beliefs in particular would not face the same kind of adaptive pressures as beliefs about the natural world. The natural world, with its tigers and floods and cliff-edges, would matter to a creature’s survival, but things of a more metaphysical bent would not (or at least there are a large number of metaphysical beliefs that wouldn’t).

    Indeed, if we limit the kind of moral things we are talking about to moral obligations, then, if these behavioural prescriptions contribute to the survival of the species (and most agree that, were we to follow them, they would), they will be advantageous, whether or not there REALLY are moral obligations. And this could quite easily occur during the evolution of cognitive faculties which nonetheless produce true beliefs on the whole about the natural world.

    2) Here’s a point in favour of Plantinga’s EAAN (or a point in favour of something in the same spirit). I agree with you that cognitive faculties that reliably produce true beliefs will be more advantageous than cognitive faculties that don’t. BUT that doesn’t at all answer the question of how probable it is, given naturalism, that reliable cognitive faculties will ever actually emerge. To illustrate, it might very well be true that were a certain species of bird to develop an eye on the back of its head then that eye would give it an enormous advantage over its competitors. But that says nothing about the probabilities of the eye actually coming about. It might NEVER arise, even though, were it to, it would be massively beneficial.

    So sure, reliable cognitive faculties would be nice. But why, on naturalism, think we ever evolved to have them?

  15. Ken

    It seems to me there are a couple of things here that are very poorly defined and probably used inappropriately. For example – what the hell is a “belief forming process” or “belief forming structures.”

    The latter could be seen as the mammalian brain – but probably more precisely as the self-aware human brain. After all “belief” is something dictionaries seem to allocate to humans.

    “Belief” is probably the wrong word to use regarding reaction to tigers (or even regarding moral reactions). After all, as Illion points out ants also react to their environment. In fact all organisms do. All organisms exhibit “biological value” and hence reactions of preservation and protection. The simplest mono-cellular organism does and it has no neuronal structures.

    I don’t think one has to have human self-awareness to react to environmental issues like tigers and gravity (and probably morality). More like instincts. Whether they are false in specific situations or not is beside the point. If we instinctively react to preserve ourselves when the bushes move, even if there is a tiger there only one percent of the time, we certainly have a desirable adaptation. It’s not a matter of “true” beliefs being adaptive – it’s a matter of appropriate instincts and reactions. And these don’t always require any neuronal structure.

    In fact these reactions are handled by the non-conscious parts of the brain in humans anyway. And that is adaptive as reliance on the self-aware, believing, brain would not ensure survival of the individuals. Our conscious reactions are just too slow.

    Regarding morals – I also believe these depend largely on intuitions. In fact this is what Hume was pointing out – that morals do not result from reasoning (at least directly).

    With Glenn I might sometimes call myself a “moral realist” – although I dislike terms which people choose to interpret in their own way. However, as David points out, moral instincts don’t occur outside our brains. The realism comes from seeing morality as objectively based – in human nature (we are a social empathetic species) and in the facts of the situation. This implies that in many cases there will be one or more “correct” answers to a moral question. They may not accord with our existing intuitions of “right” and “wrong”. But we are also a learning species and over time the drifting moral zeitgeist can allow for our intuitions to be more correct more often.

    I really had to laugh, though, at the reference to “evolutionary accounts” as to “come up with a story.” It’s a common criticism of some areas like psychology. But scientific rigour can make these accounts so much closer to reality than just inventing a story. After all, evolutionary science attempts to be a science with all that implies about the reliability and testing of the resulting knowledge.

    My laugh came from Glenn’s own little story when he said “the best way to explain moral facts is by reference to God.” Seems to me that is nothing more than his “belief” and that there are far better ways by far.

  16. “For example – what the hell is a “belief forming process” or “belief forming structures.” ”

    Ken, those terms are stated completely literally. If you know what the words mean, you know what the terms mean. We do in fact form beliefs. Doing so involves a process. That is a belief forming process. There are also bits of us – however complex those bits may be and however many overlapping bits they may involve – that do in fact form beliefs. Those are belief forming structures.

    Saying that these are poorly defined or inappropriately used is like saying that the term “red rubber ball” is ill defined or inappropriately used.

    There’s also no need to wax pejorative about “coming up with a story.” It only means providing an account, nothing more.

  17. “The natural world, with its tigers and floods and cliff-edges, would matter to a creature’s survival, but things of a more metaphysical bent would not (or at least there are a large number of metaphysical beliefs that wouldn’t).”

    Martin, it doesn’t seem to me that this really interacts with the line of argument that I used here. What I tried to explain is that if there is a good chunk of our beliefs whose truth really is likely to matter in evolutionary terms, then reliable belief forming structures per se have an evolutionary edge – and this obviously has relevance for moral beliefs produced by those structures.

  18. … What I tried to explain is that if there is a good chunk of our beliefs whose truth really is likely to matter in evolutionary terms, then reliable belief forming structures per se have an evolutionary edge – and this obviously has relevance for moral beliefs produced by those structures.

    Once again: did the ant I observed fleeing from the wasp have beliefs about the wasp, much less true beliefs about it? Did the wasp have beliefs (and intentions) about the ant?

  19. SpiritualKiss

    “I wouldn’t want anyone to think I had everything my way in that discussion.”

    Of course, however I think it was an imperative in your discussion to establish common terms or ‘givens’, something which Stephen Law has been more than a little reticent to do elsewhere. For this reason your debate was very important, because I think sacrifices had to be made (ie having it your own way) in order to ‘pin him down’ on some key issues.

    This should (hopefully!) make debating him in future more fruitful.

    Regards, SK.

  20. Ilion – obviously natural selection will only favour (or not favour) belief forming structures that reliably produce true beliefs if a being can actually be said to have said belief forming structures. Otherwise natural selection will just continue (as always) to favour survival conducive behaviour that is not brought about by a decision making process involving belief forming structures (as is probably the case for an ant).

  21. Ken

    Glenn – I obviously know what the words “belief” “forming” “process” and “structures” mean in themselves. But the combinations are by no means as clear as a “red rubber ball”, are they?

    As for providing an account, some precision is important. For example – is belief the right word when it comes to morality? After all, it implies something to do with reason and logic – which we humans just don’t have any time to apply in moral reactions. And we very rarely apply them elsewhere. Consciousness and self awareness are very much over-rated aspects of the human brain – and it is just as well.

    Perhaps attempts to relate morality to belief are led astray by the “providing of accounts” – the human willingness to tell stories. Our moral reactions are intuitive but when asked we will tell a story to “explain” them. We are rationalising rather than rational.

    I think there is more value in understanding the evolutionary origins of instincts and intuitions than mixing belief, let alone “true” belief, into the story. This can then help explain the reactions of our cousins the ants (and even bacteria) as well as humans.

  22. matt

    Ken, would the understanding of the evolutionary origins of instincts and intuitions provide us with explanations of our behavior that were true, or involved more truth than falsehood, by your estimation? If so, would you believe a good evolutionary account of the origins of instincts and intuitions?

  23. Ken

    Matt – not clear how your questions relate to this issue so my answers are simple and short.

    Evolutionary science – as a science – of course tries to produce descriptions approximating objective reality.

    I don’t think the word “belief” is appropriate as scientific knowledge is both closer to the truth and more relative, capable of being changed, than “belief.”

    But do you think talking about “belief” is appropriate when it comes to moral reactions? These are usually so intuitive and rapid that no conscious deliberation could be involved. Or do you see “belief” as an intuition?

  24. I don’t think the word “belief” is appropriate as scientific knowledge is both closer to the truth and more relative, capable of being changed, than “belief.”

    *eyeroll* (what more need be said?)

  25. Again, I’ve said a few words about why I think otherwise (in this blog entry), but you don’t comment on what I say there, so I’m not sure what to add.

    Well, it’s not a case of not engaging your arguments, so much as there not being arguments that I can find.


    What is needed are some fairly fundamental beliefs about what will happen when Paul interacts with the world in a certain way: beliefs about the general effects of gravity (even if Paul doesn’t know it by that name, or much in detail about what it does), beliefs about the actions and reactions between his body and the environment. Obviously Paul wouldn’t start running if he believed that putting his foot on the ground with considerable pressure would cause it to explode (to use an equally silly example)

    Why should we think that brains which are good at being right about physical things which directly relate to their survival are also good about morality or any number of other things that don’t? As I say, we know for a fact that our brains are very bad at finding the right answer to all sorts of things (cf. casinos).

  26. John

    Three immutable moral laws which have always been intrinsic in Reality.

    The negative exploitation and killing of human beings by human beings violates the heart of one and all.

    The negative exploitation and killing of non-human beings by human beings violates the heart in one and all.

    The negative exploitation, and progressive degradation, and potential destruction of the fundamental order of the natural environment on which all of Earth-life depends violates the heart and directly threatens the life of one and all.

    Also: The fiction of separateness, and the thus simultaneous denial of the universal characteristic of prior unity, is a mind-based illusion, a lie, a terribly deluding force, and a profoundly and darkly negative act.

    The individual and collective denial, and active refusal, of the Universal Condition and Intrinsic Law of prior unity is the root and substance of a perpetual universal crime against humanity, performed by every one and all of humankind itself.
    The active denial of the condition of prior unity is the worst cancer in the universe. It is the worst sickness. It is the most horrific disease. Its implications cover the entirety of everyone’s life. The world is filled with its symptoms and reeks with its torments and potentials, coming from all directions, most of which people cannot even see.

  27. John Quin

    I still have remaining questions over this issue. It seems to me that we can grant for the sake of argument that Evolution can produce reliable beliefs including “moral” ones (The reason for the scare quotes hopefully will be obvious). However how do we know if the ‘ought’ that is produced is a rational one or a moral one?

    So for instance it seems reasonable that Evolution would produce beliefs about the nurture requirements of other humans and that these facts would be true. True in so much as it is true that if you push a human off a cliff it will be detrimental to that person.

    So to return to Law’s argument Evolution will provide us with a ‘rational ought’ irrespective of whether there is a ‘moral ought’. It seems to boil down to how you can justify the value of people and, in so doing, change the rational ought into a moral one.

    As far as I can see, if with the moral argument you take it as a properly basic belief that moral oughts exist then it is the atheist who is begging the question by asserting naturalism in an effort to undermine the belief (either by EAAN style reasoning or by asserting that the ought is rational). If however you start from agnosticism regarding whether the ‘ought’ is moral or rational then it is the theist who is begging the question by asserting that God makes the ‘ought’ moral.

    So my question is, What is the likelihood that we can get people to accept properly basic beliefs as a starting point rather than agnosticism?

  28. “However how do we know if the ‘ought’ that is produced is a rational one or a moral one?”

    John, if we know the difference between the two, then we can figure out what kind we have on our hands. And showing that it’s detrimental to a person to push them off a cliff doesn’t show that there’s a rational “ought” involved in not doing it. In fact, if we want them dead (and we think we can get away with doing the honours ourselves) then rationally, we ought to kill them and pushing them off a cliff might be ideal.

    But if we think that although it is not a means to an end we necessarily want, we still have an overriding duty to do something, it’s going to start looking more and more like a moral ought. In fact, even if on reflection we find ourselves thinking “whether I wanted this outcome or not, this would still be what I should do,” then we’ve got what we could call a moral sense, and we’re not likely to be dealing with a mere rational ought.

  29. John Quin

    Thanks for taking the time to reply Glenn.
    I have some follow up questions (if you have the time). Now perhaps I’m missing something or articulating my ideas poorly.

    When I have been talking about rational oughts I’m talking about the idea that anytime we think we have a moral ought on our hands it is possible that the feeling that it is a moral ought is just evolutionary programming. In that case it would really just be a rational ought where the rational objective is to promote something evolution has coded into us.

    So in short evolution can provide an illusion of moral oughts when in reality (given metaphysical naturalism) it is just speciesism that is hardwired into us.

    So how do we know if they are real or illusions? It seems to fall back on the starting point of agnosticism or properly basic beliefs.

    Am I making any sense?

  30. When I have been talking about rational oughts I’m talking about the idea that anytime we think we have a moral ought on our hands it is possible that the feeling that it is a moral ought is just evolutionary programming. In that case it would really just be a rational ought where the rational objective is to promote something evolution has coded into us.

    OK, I think I know what you mean, John. A “rational ought” doesn’t come into play then. A rational ought is a means-to-an-end ought. It’s where you say “If you want X, then you ought to do Y.” Just believing something because you’re hardwired to do so would be a non-rational process.

    But in any event – it seems that what you’re asking is: How do we know that we’ve actually stumbled onto a moral fact, when in reality we might have just ended up with a belief because evolution hardwired us to hold it, and not because it reflects a fact.

    I would reply as follows: While it’s plausible that evolutionary development has left us with a tendency to reliably form beliefs (that’s part of what I say in this blog entry), that’s not at all like saying that evolutionary development left us with any particular beliefs (e.g. the belief that torturing people is objectively wrong).

    As an example, evolution left you with a belief forming process tied to your auditory senses, and the evolutionary development of those senses determined how they would work. But that shouldn’t imply for a moment that when you hear a sound, you should raise the sceptical question of “Now, can I REALLY hear that sound, or am I just wired to think so because evolutionary development left me with these senses that are good for my survival?” No, what I’ve been saying here is that reliable belief forming structures are themselves good for our survival, which means that if we have a belief because of the normal functioning of our belief forming processes, then there’s a good chance that it’s true – not necessarily that we hold that particular belief because we were wired to hold that particular belief. We weren’t wired to hold that particular belief at all, we were just wired to have belief forming structures that work in a certain way.

    Within the context of what this blog is about, does that go some way towards answering the question?

  31. John Quin

    Thanks Glenn

    I think I’ve monopolised enough of your time for the moment.
    At least the part of my question that relates to this post has been answered.
    🙂

  32. Cody

    “Perhaps the lesson that Plantinga could still teach us, however, is that while adaptability is better served by belief forming structures that are, in large part, reliable, more abstract theories (like metaphysical naturalism) might have little survival value or survival impediment, and so probability of belief forming structures that form beliefs like that being reliable is inscrutable. ”

    Pruss (incredibly briefly) mentions something similar here:
    http://alexanderpruss.blogspot.com/2013/08/a-variant-on-plantinga-evolutionary.html

    And this is, of course, similar to what Plantinga dubs the weaker form of the EAAN in “Where the Conflict Really Lies”. In any case, I think your worry that

    “Even that seems a bit hard to swallow, however, unless we suppose that we have parallel belief forming structures that do not depend on each other: One structure for all the necessary beliefs for survival, and another for the beliefs that are more of a kind of intellectual luxury, abstruse theories about the meaning of life and so on. It strikes me as more economical to think in terms of our belief forming equipment forming beliefs about a whole range of subjects.”

    isn’t damning to the weak EAAN. For it seems more plausible to suppose that the belief forming faculties of our standard beliefs wouldn’t necessarily (or even probably) entail reliable metaphysical beliefs. Since the faculties which E would be selecting for would only be interested in true standard beliefs (to put it roughly), there’s no reason to suppose that those faculties also happen to produce true metaphysical beliefs (granting, of course, your “economical” point). For the metaphysical beliefs aren’t being (can’t be?) selected for as they don’t have an effect on everyday life; it would be a happy coincidence if the faculty responsible for producing standard beliefs also produced reliable metaphysical beliefs.

    Finally, your troubles seem to be with the primitive form of the EAAN (the one advocated for from WPF-“Naturalism Defeated?”). So I’m curious if (1) you’re familiar with the latest formulation of the EAAN (as seen in, e.g. “Where the Conflict…” and “Content and Natural Selection”) and (2) if you still think it to be unsound?
