Qingu, please take note of what I was saying about mere replication not equaling consciousness. The reference was to the creation of consciousness in artificial intelligence, because as I understand the proposal put forth, it was suggested that replicating the way a brain works would produce intelligence. Since the nervous system of a starfish generally works the same way as that of a human being, wouldn't starfish also therefore have consciousness based on this way of thinking? You say they don't, so perhaps you don't think it is so easy to create consciousness by simply replicating the way humans process information. Or do you?
A human's brain is vastly more complex than a starfish's nervous system—in the same way that my current computer's processor is vastly more complex than the vacuum-tube-based processors of the 1950s.
But intelligence is just a matter of complexity, and I don't see why the physical substrate of the computing machine (neurons in chemical solution vs. semiconductors and logic gates) matters at all.
We have already invented machines with more intelligence than invertebrate animals. Certainly an autonomous robot with the capability to understand and react to speech commands has more intelligence than, for example, a starfish. Interacting with Google is often more productive than interacting with human reference sources—this is because of Google's massively complex and constantly evolving AI. The difference between starfish-level AI and human-level AI is not fundamental, it's simply a matter of complexity.
As to whether or not other animals have consciousness, I did say "I don't think a starfish's level of awareness meets any of these criteria [for having consciousness], nor to a whole lot of other animals." But I didn't say that none did. But consciousness is more than just being instinctively aware of what one needs to survive. Look at the discussion I provided from a number of different philosophers (none of them positing religious ideas, I might add) and note that they referred to the need to form concepts. Almost all animals have an awareness of their surroundings; they must, in order to react to them, and they must react in order to survive. But the ability to form concepts, to interpret one's experiences, seems to be limited to those that we classify as animals of higher orders. Indeed, I suspect that is one of the reasons we speak of them as being of higher order. Now exactly where does one draw the line? I don't know. Presumably somewhere between a starfish and a human being, but since you are the one ordering creatures, I'll let you draw the line. It doesn't change my basic point, which is that there is a line. I was only showing that the presence of intelligence does not in and of itself produce consciousness. And I suspect that artificial intelligence may one day be able to mimic much of human thought, but I sincerely doubt that it will include consciousness as the philosophers I mentioned discussed it.
I agree with much of what you say here. I see consciousness as subjective experience, and I am not sure that a starfish has that; a sponge certainly doesn't, let alone plants.
However, I disagree that the "line" is as hard-and-fast as you characterize it. Here is how I see the nature of consciousness. All living organisms react to stimuli. Plants grow towards sunlight—so even they have some inner mechanism that
categorizes stimuli into "grow/not-grow" categories. The point of a nervous system is to better categorize stimuli. Starfish react in more diverse ways than plants. Invertebrates like ants—whose nervous systems have nearly evolved into centralized brains—have even more "categories"—good food/bad food; friend/enemy; follow smell/stay away.
As nervous systems and brains evolve, it's reasonable to conclude that the brain categorizes internal states of the body as well as external stimuli—for example, hungry/not hungry; pain/pleasure.
So here is what I think: what we call consciousness—the subjective experience of existing that we humans and most other vertebrates, at least, have—is simply the brain's way of categorizing its own internal state. The effect is something like putting two mirrors facing each other—or, as Douglas Hofstadter argues in this book (which I would highly recommend!), a "strange loop."
As brains evolve and become more and more capable of such internal, self-referential categorization, consciousness also evolves, becoming more and more complex. But I'm not sure we can draw any hard-and-fast line about when a sufficient complexity of nervous-system categorization arises to produce consciousness (though, certainly, a nervous system seems essential at least).
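To make the idea of a "strange loop" slightly more concrete, here is a toy sketch in Python. It is entirely made up for illustration—the class name, the categories, and the thresholds are arbitrary, and it is not a claim about how real nervous systems (or any actual AI) work. It just contrasts a system that only categorizes external stimuli with one that also categorizes its own internal state, and then categorizes that very act of categorizing:

```python
# Toy illustration only: "categorization" here is just a lookup table and a few
# thresholds, not a model of any real nervous system.

def categorize_stimulus(stimulus):
    """Starfish-level: map an external stimulus to a reaction category."""
    categories = {"food": "approach", "predator": "flee", "light": "ignore"}
    return categories.get(stimulus, "ignore")

class SelfMonitoringAgent:
    """Adds the self-referential step: the agent also categorizes its own
    internal state, and can then categorize the fact that it is doing so --
    a loop that points back at itself."""

    def __init__(self):
        self.internal_state = {"hunger": 0.8, "damage": 0.1}

    def categorize_external(self, stimulus):
        # Same external categorization as before.
        return categorize_stimulus(stimulus)

    def categorize_internal(self):
        # First-order self-model: labels applied to the agent's own state.
        labels = []
        if self.internal_state["hunger"] > 0.5:
            labels.append("hungry")
        if self.internal_state["damage"] > 0.5:
            labels.append("in pain")
        return labels

    def categorize_self_categorizing(self):
        # Second-order step: a label about the labeling process itself.
        return f"I notice that I currently label myself as: {self.categorize_internal()}"

agent = SelfMonitoringAgent()
print(agent.categorize_external("food"))      # -> approach
print(agent.categorize_internal())            # -> ['hungry']
print(agent.categorize_self_categorizing())   # the loop pointing back at itself
```

The only point of the sketch is the shape of the loop: each additional level is the system applying its categories to its own act of categorizing.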
Why you brought the discussion of a soul into the conversation in responding to me, I don't know. You will note that I never mentioned it. Yes, I do see the soul as something one either has or doesn't have. However, I don't see the soul as being equated with consciousness. As you said, consciousness has an emergent property to it. So why are you disputing me with regard to a point I never even made?
Fair enough—"soul" is certainly a loaded term (I tend to equate it functionally with consciousness), and as it turns out it looks like you mostly agree with me on the nature of consciousness anyway. Though now I am curious as to how you think the soul interacts with consciousness.
As far as God being a moral yardstick, what moral yardstick do you suggest? Will it be something that is as emergent as one's sense of consciousness? If so, then it is relative and my yardstick becomes as good as yours. If it isn't emergent, then it either exists or it doesn't exist. And if no moral yardstick actually exists, then nothing is either good or bad; it just is. If it does exist, then there is a standard. We may not always understand the standard, interpret it well, or even accept it as one we agree with, but there is one just the same.
First of all, you are simply declaring Yahweh a moral yardstick by fiat. I could just as easily declare that the god of any religion or video game is a moral yardstick. I could even declare that "Be excellent to each other, and party on, dudes!" is the objective moral yardstick, from Bill and Ted's Excellent Adventure. Just because an ancient book, video game, or 80's movie says something is a moral yardstick does not make it so.
Secondly, I notice that you did not answer my question. If Yahweh of the Bible is your moral yardstick, do you think genocide and slavery should be permissible? Because that is exactly what the Bible says. If you don't think they should be permissible, why not? I think this is quite important—because I don't think the Bible and its moral law is your yardstick at all. I think your moral yardstick is largely the same as mine—post-Enlightenment, humanist morality, which opposes slavery, genocide, and the oppression of women, and which values happiness and freedom—and that you interpret the Bible to fit this external moral view.
Thirdly, to answer your question: I think morals are, in one sense, certainly relative: Hitler didn't think he was evil, he thought he was right. Nearly everyone disagrees on what is good and what is bad. You can hold some book (or video game or 80's movie) up and say "this is actually what's good and bad," and someone else will hold another book up and say the same thing with opposite content. I don't think you disagree.
However: I do think that moral systems are selected for over time. Take slavery. For most of history, slavery was perfectly acceptable in almost every culture. Today, it is almost universally reviled. Why? At the risk of oversimplifying, slave-owning societies could not compete with non-slave-owning societies—and the world's moral system progressed.
Now, I think it's important to note that morality does not exist in a vacuum. It is intimately tied into economic and technological systems. Part of the reason slavery died out is the emergence of a moral system that values human empathy and equality, but perhaps a bigger part is the emergence of industrial economies and technologies that made slavery obsolete. In America, the slave-owning South could not compete economically with the industrial North (which is why I am skeptical that the Civil War really needed to be fought); in Britain, slavery gradually died out on its own as the Industrial Revolution displaced it. That revolution brought its own problems (moral and economic), but we've progressed beyond them as well.
Societies are constantly evolving as new technologies emerge. Technological change drives the nature of the economy, which in turn drives the nature of morality. As I said before, I think there are some bumps and rockiness, but on the whole, I think the world has been morally improving over time. I would rather live today than at any other period in history. At the same time, I imagine there is plenty about our society that our future descendants will think is barbaric and immoral (if I had to hazard a guess, I'd say the area of animal rights is one such example).
So that's my "yardstick," though it hardly functions like yours. I think superior moral systems are those which survive over time. And because of this, I think it's important to try to get a sense—by looking at history, and by examining human psychology—of what kinds of moral systems tend to work out the best. I think history shows that moral systems that value empathy and happiness, that allow lots of freedoms, that encourage scientific progress, and that are relatively inclusive, tend to flourish and remain stable. Ancient Greece and Rome, the Golden Age Caliphates, and much of modern Western culture have been marked by these qualities (well, relative to other civilizations contemporary to them), and they are the ones that have historically flourished.
And, yes, I was playing with words at the end. Everyone has to have a little fun sometime, don't you think? Playing word games is not something done souly by atheists.
Ugh. 