
Archive for the ‘Radio Open Source Conversations’ Category

Part 1: “A solipsism as big as all existence”

Lazare’s piece, like so much of this theism/nontheism discourse, is grounded in the longstanding, conceptually buttressed but objectively inaccurate premise that humans exist in the universe but are not (necessarily) the universe itself – the universe aware of itself. He does seem to discern this, though how clearly I can’t tell, in the review’s penultimate sentence:
“In short, humanity creates meaning for itself by liberating itself so that it can fulfill itself. This is also a solipsism, but one as big as all existence.”

There’s an awful lot of very important meaning in those 28 words. I’ll leave Lazare’s use of ‘meaning’ aside for the moment, because I think that this – “liberating itself so that it can fulfill itself” – begs for a critical analysis.
What is humanity ‘liberating itself’ from? What is restraining humanity from ‘fulfillment’?
The material world?
Is this what’s meant by ‘transcendence’? A perceived need to escape the world we seem to be ‘born into, naked and shivering’? (Who wrote that? A poet? Shakespeare? Anyone know?)

If so, I strongly suspect that this is the real kernel of religion, and here’s why:
Science informs us that the atoms comprising our corporeal molecules can be split into smaller atoms or fused into larger ones, but never genuinely destroyed. These aggregations of energy (subatomic energy) are effectively “immortal”. Therefore what I call “my body” is comprised of energy packets that originated in the Big Bang. These energy packets of me-stuff were once the innards of stars before stellar fusion clumped them into the heavier elements that eventually gravitated into the third rock from the star nearest my typing fingers. After abiogenesis (whether this occurred beside deep-sea volcanic vents or in sunlight near the shores of the planet’s early oceans is a question far from settled) the me-stuff of rock became proto-bacterial life. Then, later yet—after inestimable millions of generations of evolution and flat-out gene-swapping (yes, bacteria can do this)—this microbial patterning of Big-Bang-Stuff became even more complex life before it became me. And since I must consume large quantities of complex life to replenish my me-stuff, I continue to convert biologically re-patterned Big-Bang-Stuff into me-stuff.

So what am “I”, then?
Evidently: ancient stardust as a biospherically engendered, locally mobile, (potentially) self-replicating pattern of the Big Bang in ongoing evolution. orlox incisively reconceptualizes it as ‘process history’:

If only process is fundamental where does the substance come from? If substance is a property of a second level of organization (so atoms have substance but electrons are just process) then atoms should be considered fundamental.

If process is fundamental, then all is process. An entity is a process history, not a discrete substance, even if defined as somehow more than the sum of its constituents.

Wonderfully articulated (although the italics are mine). Yet I’m something more than merely the Big Bang in its ongoing evolution (as if that isn’t an awesome enough realization!) – I’m also a self-aware facet of the Big Bang in ongoing evolution. We humans, in short, are something vastly greater than the mere consciousnesses that seem to “inhabit” our brains.

And this ain’t theology or religion, folks – it’s scientific deduction. And an inevitable scientific deduction: any sufficiently intelligent species of this biosphere or of biospheres elsewhere in the cosmos will likely eventually deduce this. It’s a ‘truth’ out there awaiting discovery: this universe evolves its energies and processes into stars, yes, but into intelligent biological patterns as well (we’re that theory’s inescapable supporting evidence). We don’t live “in” the universe as if it were a house, or a country, or even a potentially domesticable eco-system: it is already a stellar eco-system that evolves itself into life – into us. We’re the stellar eco-system becoming aware of itself. We’re not “in” it, we are it.
We are “a solipsism as big as all existence”, although I’m probably using that phrase very differently from Lazare’s intention. (And this might be the only instance I can imagine of the concept “solipsism” being more genuinely useful than frivolous). The human species is the universe-as-solipsism, or, at the very least, the biosphere-as-solipsism.

Yet our inherited common language concepts don’t easily represent this realization. We much more easily say very odd things like, “I have a body” rather than the more accurate, “I am this body”. We seem to intuit that we inhabit our bodies, as if our individual lives are rentals on our way toward home-ownership in an entirely spurious hereafter.

And maybe it’s worse than merely linguistic: perhaps we’re consigned, by ancestral evolution-in-wild-environment of our very specialized primate-to-human-being consciousness, to perceive ourselves as ‘within’ the world but not exactly of the world. We seem hopelessly stuck with this sense of separateness – even though we’re not merely the atoms, measurable in kilograms or pounds, of our bodies but the entire universe aware of itself, as biospherically engendered, locally mobile ‘process histories’.

But are we hopelessly stuck? Must we eternally experience our lives from ‘in here’ looking ‘out there’? Might we ever learn to easily perceive our unity with the rest of the cosmos, or must we instead remain intuitively prey to the feeling of separation—which is the probable source of the ancient concept called ‘soul’?

Well, I can’t pretend to assure you that we can overcome the ‘intuitive perception’ part of that problem—maybe we can, maybe not. I can’t offer any predictions on that. But I don’t see why we can’t begin to more accurately conceptualize our inescapable unity with ongoing creation. It will require, as orlox has already done, rethinking ‘things’ – nouns – into events or actions – verbs. Or, perhaps creating a specialized hybrid of verb and noun that represents living patterns as processes instead of entities.

It will require another even more important revisioning too: a sense that we’re not small and vulnerable (although it’s intuitive to think that we are), but that our smallness and vulnerability is a kind of illusion. We are the universe aware of itself. We’re the whole damned thing – all of the events that ever happened – even those we’re not yet aware of. Our ignorance of the events beyond our personal perceptions shouldn’t diminish us or our sense of self-worth, however: we are the universe examining itself – and such examinations, especially by those just beginning to learn how to examine, cannot be concluded in an instant, or in a lifetime, or even in a millennium. Which implies then that in this phase of the universe-examining-itself, each of us is an invaluable examiner.

Think of a sphere: where is its center? Every point on it is its center, right? That’s the sort of metaphor necessary for the conceptualization we need to cleverly incorporate into our future verbiage – into our grammar of unity. (And no, I can’t claim to have worked up any such cleverness myself – but I’m hardly a philosopher. Someone else, someone much more clever than me, will eventually do it.) Each of us is the universe’s center: none of us are more important than any other.

It’s hardly unreasonable to characterize us as a ‘young species’: originating about 200,000 years ago, and as an evolution of the line(s) of hominids that diverged from the ancestors of contemporary apes some six million years ago. Grebes, in contrast, have a fossil record going back approximately 24 million years. Permit me then this metaphor: we’re an ‘adolescent’ species, growing from childlike naïveté toward a more richly experienced and knowledgeable – but still youthful – maturity.

And how do children learn? Via belief: by accepting as ‘the truth’ whatever their parents and elders tell them. Yet if this epistemology—rote-learning and careful imitation—were our only option, we’d still be knapping Neolithic flint and stitching sinew through pelts to clothe us. Human learning and cognition transcend the limitations of belief and faithful imitation: we innovate, not only technologically but, and much more crucially, conceptually. This symbiotic capacity to innovate technologically and to innovatively conceptualize allowed our ancestors to evolve beyond Neolithic culture, to evolve in turn beyond the Bronze Age, and, with breathtaking rapidity, to evolve into this ultra-technological and ultra-inquisitive culture that allows us to question and converse with one another from the privacy of our personal dens. We are a species unlike any to have preceded us (that we know of); and yet we are even more than that: we are this local neighborhood of the universe so hyper-aware of itself that we can discern the patterns of universal energy that comprise our individual selves and our greater self the cosmos.

We didn’t discover this new level of knowledge via belief. In our recent (metaphoric) puberty, one of us, named Augustine, wrote:

There is another form of temptation, even more fraught with danger. This is the disease of curiosity. It is this which drives us to try and discover the secrets of nature, those secrets which are beyond our understanding, which can avail us nothing and which man should not wish to learn. – Augustine, Confessions

Shocked? If not, I suggest you should be. This sentiment by Augustine (a ‘Saint’, no less) is the philosophy of the Dark Ages – and of faith.
Contrast that with this:

It is an essential part of the scientific enterprise to admit ignorance, even to exult in ignorance as a challenge to future conquests. As my friend Matt Ridley has written, ‘Most scientists are bored by what they have already discovered. It is ignorance that drives them on.’ Mystics exult in mystery and want it to stay mysterious. Scientists exult in mystery for another reason: it gives them something to do. More generally…one of the truly bad effects of religion is that it teaches us that it is a virtue to be satisfied with not understanding.
—Dawkins, The God Delusion, pp.125-6

Thank goodness for human curiosity. (But what a pity Augustine and his ilk influenced our ancestors for so many disease-ridden centuries, effectively obliterating the proto-science of the classical world and pushing the development of the scientific method a millennium and a half further into the future than it needed to be.)

So, I’m suggesting that humans, especially in the past couple of centuries, have uncovered, by the thousands or even millions, what had previously been “secrets and mysteries” of the cosmos, and that these discoveries are collectively forming a very different paradigm from this:
Lonely souls inhabiting fragile, clawless, fangless, unarmored bodies, bodies too often prey to disease, and yet made to endure a harsh and cruel world, spending a lifetime in a kind of post-womb exile before earning—or failing to earn—a rescue back into a state of perfect love, nurturance, and protection.
Let’s call it the ‘exiled-child’ paradigm.

Conversely, human science (the biosphere examining itself) – inspired by a symbiosis of curiosity, adventurousness, and imagination – has uncovered a markedly different paradigm:
We are comprised of indestructible – “immortal” – cosmic energy, and we cannot be removed from our cosmic environment. We are not separate from the cosmos but the cosmos itself. We live no longer in the wild environment of our early human forebears but in a domesticated one: a world wherein we could (and hopefully will after maturing a bit longer) eradicate poverty and its associated sufferings.

Human science has found not only no sign of supernatural agency; it has found, progressively, discovery by discovery, no need for such an extra-universal agency. Einstein found belief in a personal god naïve, while the implications of more recent discoveries find no compelling need for even a Spinozan/deist kind of ‘celestial watchmaker’: an impersonal god who set the universe up and then apparently vanished (and to where?!).

Yet human science isn’t just a bunch of nerds scheming to overthrow morality and uproot venerated ancient faiths; human science is the Earth itself investigating its cosmic neighborhood and its own holistic processes and constituents. This isn’t necessarily a ‘cosmic purpose’ though. I’d characterize it instead as the inevitable outcome of any planet’s biosphere, after evolving itself into environments (because life itself creates its own environmental conditions) that allow the emergence of a sufficiently inquisitive intelligence.

The ‘exiled-child’ paradigm might be plausible if we really were separate from the cosmos, but the Earth-as-human-science hasn’t found any evidence to support that. Then why do we feel that way intuitively? Probably as a consequence of our ancestors having evolved in a wild and hazardous environment instead of in the sort of domesticated one that has allowed our minds the peace and leisure to use our brains for much, much more than our ancestors had time to do. Fifty-thousand or more years ago, feeling holistically immortal as the living planet just might have made you-the-individual somewhat less suited to survive the various predators, animal and hominid, that our ancestors probably had to contend with.
But we don’t face the threat of predation any longer – and haven’t for most if not all of the centuries since the Bronze Age at least. Which means we have found, at long last, an opportunity to evolve a sense of unity with the universe – as Zen Buddhists have striven to develop on an individual basis for more than a millennium. (And Buddhism, despite its curious, almost incongruous concerns with reincarnation, is a nontheist ‘religion’ – I put it in quotes because I don’t think of it as a religion as much as a philosophy.)

We can feel whole with the cosmos instead of feeling alien, vulnerable, and forlorn. We needn’t feel any longer that we are on trial for a berth in an afterlife: a manufactured soul undergoing a perverse sort of moral examination. We can instead begin to feel that we are the examiners – because we are!

Who wouldn’t welcome that?

Ah, actually, some folks would: the faithful. Adherents of the exiled-child paradigm.

What’s the solution? Education – but not dogmatic education. Lessons in not what to think but how to think: lessons in logical fallacy and in the scientific method; lessons in comparative religion and in all manner of ancient mythology alongside world literature. Because, as a guest on a recent ROS program said, we are “the storytelling species.” Education in the metaphor underlying our languages, too. (We chronically conflate metaphor with the reality the metaphor tries to illuminate by comparison, causing hopeless entanglements of semantics and keeping people from understanding one another’s points of view. In science, for example, Dawkins’s “meme = gene” metaphor is taken literally instead of metaphorically, and as a result folks are wasting years trying to assess “how memes operate” – as if metaphors are real!!!)

I’m running out of time, so I’ll wrap up hastily, post this, and link to it on ROS.

What I question fundamentally about Lazare’s review is his premise that “…humanity creates meaning for itself by liberating itself so that it can fulfill itself.”
I’ll be referring to this objection when I write part 2 of this response later (and it will be much shorter than this). I suggest the only liberation necessary for humans to achieve a more fulfilling existence is liberation from the exiled-child paradigm. We’re rapidly outgrowing it. Manning on ROS wrote:

Are modern religions essentially adult versions of the story of Santa Claus? If so, does the editor’s comment that there is a Santa Claus tell us something more profound than how to comfort a child?

I would suggest that we as a species are (metaphorically) in the phase when some of us have deduced that Santa doesn’t exist except as a character of myth. Others of us feel a powerful need to deny this. Einstein wrote: “The idea of a personal God is quite alien to me and even seems naïve.”
I agree – but I would NOT say the same for an interpersonal god. Why not honor one another as gods and goddesses? Don’t the monotheisms imply that we should?

The energy and matter comprising us is earth and water, powered by sunlight and freshened by air. We each are ancient stardust made conscious by the seeming miracles that follow Earth’s absorption of sunlight. Therefore:
Revere all other humans: they are you in different bodies. They see, on your behalf, what you cannot. On your behalf they hear what you cannot. On your behalf they smell what you cannot. On your behalf they taste what you cannot.
And on your behalf they feel what you cannot.
Revere all other creatures: they may not ponder as profoundly as you, but they feel just as deeply.
Revere the plants: they feed you, whether directly or through the animals that consume them, which you consume in turn.
Revere the mountains and the valleys, the forests and the deserts, the wild steppes and the tamed plains. Revere all water, no matter its amount. Earth and water combine with sunlight to make you and all other life.
Consider carefully – with empathy as your guide – the effect on other creatures and people of any action you take.
After empathy guides you, choose the action that harms the fewest other sentient creatures.
A Cosmic Perspective



First, the “old school” understanding of evolution (which I do not plause, empiricate, believe, or buy “lock, stock, and barrel”, because of the “new school” understanding that doesn’t conflate life with machinery, and that discerns at least as much cooperation/collaboration as competition in the evolutionary dynamic).

From Michael Shermer’s Why Darwin Matters: The Case Against Intelligent Design, a(n oversimplified) summation of evolutionary theory’s five constituent descriptions (a toy sketch after the quote shows how these rules interact):
(quote)
1. Evolution: Organisms change through time. Both the fossil record of life’s history and nature today document and reveal this change.
2. Descent with modification: evolution proceeds through the branching of common descent. As every parent and child knows, offspring are similar to but not exact replicas of their parents, producing the necessary variation that allows adaptation to the ever-changing environment.
3. Gradualism: all this change is slow, steady, and stately. Given enough time, small changes can accumulate into large changes that create new species; that is, macroevolution is the cumulative effect of microevolution.
4. Multiplication: Evolution does not just produce new species; it produces an increasing number of new species.
5. Natural selection: Evolutionary change is not haphazard and random; it follows a selective process. Codiscovered by Darwin and the naturalist Alfred Russel Wallace, natural selection operates under five rules:
A. Populations tend to increase indefinitely in a geometric ratio: 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024…
B. In a natural environment, however, population numbers must stabilize at a certain level. The population cannot increase to infinity—the earth is just not big enough.
C. Therefore, there must be a “struggle for existence.” Not all the organisms produced can survive.
D. There is variation in every species.
E. Therefore, in the struggle for existence, those individuals with variations that are better adapted to the environment leave behind more offspring than individuals less well adapted. This is known as differential reproductive success.

(unquote, Shermer, pp6-7)
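
For the code-minded among you, here’s a minimal toy simulation in Python – my own sketch, not Shermer’s, with every number invented – of how rules A through E interact: unchecked doubling (A), a finite environment (B) forcing a struggle (C), heritable variation (D), and fitness-biased survival (E).

(code)
import random

CARRYING_CAPACITY = 500   # rule B: the environment is finite
GENERATIONS = 30

# rule D: each individual carries a heritable 'fitness' value between 0 and 1
population = [random.random() for _ in range(10)]

for gen in range(GENERATIONS):
    # rule A: unchecked, every individual leaves two offspring (2, 4, 8, 16...)
    offspring = []
    for fitness in population:
        for _ in range(2):
            # offspring resemble, but don't exactly replicate, their parent
            child = min(max(fitness + random.gauss(0, 0.05), 0.0), 1.0)
            offspring.append(child)
    # rules B & C: only CARRYING_CAPACITY can survive; rule E: survival is
    # biased (not guaranteed) toward the better adapted
    offspring.sort(key=lambda f: f * random.random(), reverse=True)
    population = offspring[:CARRYING_CAPACITY]
    mean = sum(population) / len(population)
    print(f"gen {gen + 1:2d}: size = {len(population):4d}, mean fitness = {mean:.3f}")
(end code)

Run it and mean fitness creeps upward, generation after generation – no foresight, no design, just differential reproductive success accumulating. That’s microevolution summing toward macroevolution, in caricature.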

Now, evolution’s “New School” understanding, from Elisabet Sahtouris’s EarthDance: Living Systems in Evolution

(quote)
Scientists quickly made Darwinian evolution fit the idea of nature-as-mechanism by regarding creatures more or less as wheels fitting cogs of other wheels in the great clockwork of nature. Some wheels just happened to be made better than others by lucky mechanical accidents during their replacement, or reproduction. The idea of natural competition leading to the survival of the fittest appealed to men who were obsessed with the new social structure of industrial capitalism.

With the advent of genetics, accidents of birth were discovered to reflect changes in genes. When the structure of DNA and its copy process were understood, these accidents were believed to occur on a random basis—meaning without any pattern—as DNA copied itself or was damaged. Most biologists today still see accidents, now known to occur in DNA, as the only source of natural variation, despite growing evidence that such accidents are detected and repaired very quickly. (italicized emphasis Nick’s)

Ever since Darwin, our general view of evolution has been of a battle among individual creatures pitted in competition for inadequate food supplies. Only now are we in a position to understand the Earth as a whole—a single geobiological dance woven of many changing dancers in a complex pattern of interaction and mutual transformation.

Competition and cooperation can both be seen within and among the species as they improvise and evolve, unbalance and rebalance the dance. Consider again the spiraling pattern described as unity → individuation → competition → conflict → negotiation → resolution → cooperation → new levels of unity, and so on. Note that competition and cooperation are different phases of the cycle. Young species tend to grab territory and resources, maximizing the numbers of their offspring to spread themselves where they can. As species encounter one another, conflict develops in the competition for space and resources. Eventually negotiations leading to cooperation prove useful to competing species and they reach the higher level of unity, as we saw happening in the transformation of monera into protists.
(unquote, pp.106-7)

That transformation she cites deserves its own citation here, too:
(quote)
The idea of cell symbiosis—the origin of eukaryotes as prokaryotes living together in cooperatives—had been proposed simultaneously by a German, an American, and a Russian biologist around the turn of the (20th) century. All had noticed that the photosynthesizing chloroplasts—meaning ‘green producers’—in the cells of plants resembled bluegreen bacteria. The Russian, K.S. Mereschovsky, suggested that other ancient bacteria had evolved into other cell parts. But biologists, who were trained to see living things as put together from mechanical parts, could not see cell parts as creatures in themselves.

Thus the symbiosis theory was ignored until Lynn Margulis, an American microbiologist who became James Lovelock’s partner in developing the Gaia hypothesis, revived it and produced a great deal of evidence to support it.

After much work, Margulis and others have shown that these energy-producing cell parts really are descendants of the ancient breather bacteria that came to live inside larger prokaryote cells, cooperating in building the first eukaryote cells. Luckily, teams of biologists working to unravel the ancient mysteries of cell symbiosis have found many clues in the behavior of today’s bacteria. Rather vicious breathers can still be found drilling their way into other bacteria to reproduce there and eat the host bacteria from the inside. In the Tennessee laboratory of Kwang Jeon, protist hosts so invaded learned to tolerate, and then to cooperate with, the invaders in a mutually dependent relationship that brought about a new kind of creature. Surprisingly, this replay of the ancient evolutionary shift from outright aggression to full cooperation happened in only a few years’ time.
(pp.100-1)

*****
Margulis’s discovery, that eukaryote protists evolved cooperative internal schemes to overcome the problems caused by competition among prokaryote bacteria, was almost as much a shock to the world of science as was the Gaia hypothesis itself. Besides showing that cell ‘mechanisms’ such as mitochondria are creatures in their own right, she was suggesting that harmonious cooperation played a big role in evolution. This ran counter to the beliefs stemming from Darwin’s work, adopted by scientists in western countries, that evolution was just a survival race driven by competition.
(p.102)

*****
It took a century and more after Darwin’s theory was published for us to understand that environments are not ready-made places that force living beings to adapt to them, but ecosystems created by living things for living things. All living things belonging to an ecosystem, from tiny bacteria to the largest plants and animals, are constantly at work balancing their lives with one another as they transform and recycle the materials of the Earth’s crust.

Darwin, along with Lamarck and Wallace, were modern pioneers in showing us that species evolve and attempting to explain how this could happen. Their theories were a great step forward for science, since religion had put an end to all theorizing about evolution since a few ancient Greek philosophers, such as Anaximander, had thought about it. Anaximander had said that everything forming in nature incurs a debt, which it must repay by dissolving so other things could form—a marvelous description of evolution through recycling in a single sentence!

Now we can see that Darwinism—and its updated version, neo-Darwinism—is a misleading way of seeing nature. The notion of the separateness of each creature, competing with others in its struggle to survive, had well described, and justified, English and American societies’ new forms of competitive and exploitative industrial production in a world of scarcity. But now we are beginning to understand that humans must learn to harmonize our ways with those of the rest of nature instead of exploiting it and one another ruthlessly. The social view of individual people pitted against one another in such struggle makes little more sense as an ideal than the notion that our bodies’ cells are competing with one another to survive in hostile bodies. It is simply no longer useful to see ourselves as forced to compete with one another to survive in a hostile society, surrounded by hostile nature.

The point is that we do see ourselves in such competition, not because this is inevitable, but because Western science developed in close harmony with social and political traditions that welcomed these ideas. The Darwinian theory of evolution was applied to forming a society, a social system, designed in accord with, and justified by, the Darwinian concept of nature. If we learn to see evolution as a single holarchy of holons working out the mutual consistency of cooperative health and opportunity, we can set up a social system to match that view.

History may someday record the greatest discovery of twentieth-century science not as nuclear power or electronics, but as the recognition that there is no absolute truth to be discovered about the world—that scientific theories can be judged only by their usefulness to science and ultimately to all society. Definitions of usefulness often change over time, and thus scientific ‘truths’ must necessarily evolve along with human society. (italicized emphasis in this paragraph Nick’s)

Neo-Darwinism insists that random accident and natural selection are the sole ‘mechanisms’ of evolution. Yet the self-organized creatures and ecosystems…such as those we saw evolving through the genetic information exchange web of bacteria, including their negotiated organization of nucleated cells, are not readily explained as simple accumulations of lucky accidents. Nor does natural selection amount to a real theory, since it tells us little more than that some creatures die before they reach the age of reproduction. A modern theory of evolution must concern itself with the way in which natural holons are organized and maintained in holarchies, with descriptions of continual interactions of DNA, organisms, and whole ecosystems.
(unquote, pp.108-110)

That’s a thumbnail of the “new school”. I’m immensely gratified to have found it. I’ve been thinking along similar lines for years, but not nearly as cogently as Sahtouris.


The cosmos is within us. We are made of star stuff. We’ve begun to wonder at last about our origins, star stuff contemplating the stars, organized collections of ten billion billion billion atoms contemplating the evolution of matter, tracing that long path by which it arrived at consciousness here on the planet Earth and perhaps throughout the cosmos. Our obligation to survive and flourish is owed not just to ourselves but also to that cosmos, ancient and vast, from which we sprang.

—Carl Sagan, via the religious skeptic Michael Shermer, who called the above quote “spiritual gold”, from Why Darwin Matters: The Case Against Intelligent Design

Finding that quote reminded me of something I extemporaneously made up over about a ten-minute span while participating in Radio Open Source’s (misnamed) Daniel Dennett thread last year. It’s nothing more than a quick distillation of precepts I try to live by, precepts I’d never before tried to codify:

The energy and matter comprising us is earth and water, powered by sunlight and freshened by air. We each are ancient stardust made conscious by the seeming miracles that follow Earth’s absorption of sunlight. Therefore:
Revere all other humans: they are you in different bodies. They see, on your behalf, what you cannot. On your behalf they hear what you cannot. On your behalf they smell what you cannot. On your behalf they taste what you cannot.
And on your behalf they feel what you cannot.
Revere all other creatures: they may not ponder as profoundly as you, but they feel just as deeply.
Revere the plants: they feed you, whether directly or through the animals that consume them, which you consume in turn.
Revere the mountains and the valleys, the forests and the deserts, the wild steppes and the tamed plains. Revere all water, no matter its amount. Earth and water combine with sunlight to make you and all other life.
Consider carefully – with empathy as your guide – the effect on other creatures and people of any action you take.
After empathy guides you, choose the action that harms the fewest other sentient creatures.

Is this religion? Or morality? Or simple ethics?

I think it’s simple ethics. An ethics that needs no further embellishment, or sanction via an unverifiable supernatural entity. Not once do these precepts mention ‘god’. Not even ‘divinity’. I would suggest that this simple creed is perhaps a purer form of ‘moral goodness’ than the morals propagated by most religious authorities.

My favorite route for going to church is the grueling two-hour climb up the Maynard Burn Trail, to 7,000 feet above sea level and a 200-mile view in any direction. It’s one of the world’s most magnificent cathedrals, and I usually have it to myself.
On reaching the summit, I sit, pant, and gawk in utter awe.
Every time.
The world I see up there is worthy of reverence – and even of veneration. But not of religion. Or of the sex-obsessed, freedom-restricting domain or discipline called morality.


Is my novel concept ‘empirication’ merely a convoluted synonym for hypothesis?
Hypothesis is only a fragment of it. Allow me to explain, first by offering the noun senses. Then, and much more vitally, by offering the verb senses: ‘empiricate’ and ‘empiricating’.

I’m trying to originate a new common language concept, and not just another technical term. I’m also not wed to the word itself. The word, ideally, would be intuitively comprehensible. This one’s root is empiricism, which, unfortunately, is not an altogether common word. Still…

When our trans-civilizational conversation asks the many variants of this relatively new question, “God-given, or evolved?” – it also implicitly asks this: “Which epistemological source is more credible: that of unfalsifiable, unverifiable, (and putative) divine revelation, or that of the Scientific method?”

In a very big sense, my offering of ‘empirication’ & ‘empiricate’ is all about that vast difference of epistemologies. It’s the difference between complacent, uncritical acceptance (belief) and restless inquisitiveness: that marriage of curiosity and skepticism called the Scientific method, and the eventual, provisional acceptance that a given description of, or explanation for, a phenomenon might be largely, albeit never completely, accurate.

1. Empirication as a noun.
Hypothesis is a technical term for a scientific ‘educated guess’. That qualifies it as a weak to middling-strength empirication.
A theory – an educated guess buttressed by increasingly persuasive analyses of empirically derived slates of corroborative evidence – is a middling to strong empirication.
And an estimate would likely be a weak empirication – so long as it is based on analyses of empirically derived evidence.

That’s the critical qualification: empirically derived. An estimate based on biblical scripture wouldn’t qualify as an empirication, because it’s unfalsifiable. Such an ‘estimate’ can only be a belief. You can’t empiricate the putative existence of the supernatural, because it is, by definition, unfalsifiable. And therefore unverifiable. You can only believe in the supernatural. Not empiricate it.

Same with, say, a ‘theory’ of “Numenorean morality”. It can look, feel, and read like an empirication, but if it’s based purely on human imagination (Tolkien’s) rather than anything falsifiable, it ain’t no empirication.
(Btw: I have yet to receive any plausible or cogent response to my several requests for evidence that the supernatural exists anywhere besides within the human mind, as an incorporeal product of our astounding talent to imagine.)
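
If it helps, here’s the noun sense as a toy Python sketch – entirely my own framing, with invented fields and thresholds: anything unfalsifiable or non-empirical can only be believed, while genuine empirications grade from weak to strong as corroborating evidence accumulates.

(code)
from dataclasses import dataclass

@dataclass
class Acceptance:
    claim: str
    falsifiable: bool          # could evidence, in principle, refute it?
    empirically_derived: bool  # does it rest on observation or experiment?
    corroboration: int         # independent lines of supporting evidence

def grade(a: Acceptance) -> str:
    # Scripture-based 'estimates', Numenorean 'theories': unfalsifiable or
    # non-empirical, so they can only be believed, never empiricated
    if not (a.falsifiable and a.empirically_derived):
        return "belief (not an empirication)"
    if a.corroboration >= 50:
        return "strong empirication (a well-tested theory)"
    if a.corroboration >= 5:
        return "middling empirication (a supported hypothesis)"
    return "weak empirication (an educated guess or estimate)"

print(grade(Acceptance("erosion carved the Grand Canyon", True, True, 80)))
print(grade(Acceptance("Noah's Flood carved the Grand Canyon", False, False, 0)))
(end code)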

People can choose to ‘believe’ anything. You can choose to believe that the World Trade Center was demolished by Al Qaeda, or by the CIA, or by alien brain parasites from the planet Xachtomokthor, which orbits the galaxy’s central black hole—evil alien ultraviolet parasites that invaded the minds of the terrorists or the minds of the putative CIA agents.
You cannot, however, empiricate the latter two options, because no persuasive evidence can have entered your awareness to make them more credible than the first. (Especially since I just now made up the alien-brain-parasite nonsense.)

So, although empirications must rise from experimentally or observationally derived slates of evidence, empirications aren’t and can’t be ‘true’. I’ve written before that ‘truth’ strikes me as an illusion akin to Plato’s ‘ideal essences’. You can imagine an ‘ideal horse’, but can you find one in the material world? Isn’t such a putative entity entirely imaginary?

Here’s another way to put it: science is largely concerned with accurate descriptions of phenomena and patterns. Let’s say you describe something as simple and ordinary as a baseball bat. You can succeed very easily at giving its measurements, at describing the swirl of the wood grain, the color and weight, the intricacies of its shape, and even the trademark. But what about the inner wood? You’d have to cut it open, destroying the bat in so doing. And even if you could MRI the wood of the intact bat, detecting the structural weaknesses that might help predict its eventual demise on its encounter with a Roger Clemens fastball, are you able to give the ‘truth’ of the bat if you stop short of detailing the bat’s individual constituent molecules? Atoms? Quarks?

“Truth”, it seems to me, is damn hard to apprehend. Perhaps impossible. That’s why physicist Lee Smolin calls scientific knowledge (in paraphrase) “those theories whose descriptions approach objective truth.”
“Approach” is critical: it’s an implicit admission of the foolhardiness of claims to “absolute truth.”

Belief, on the other hand, often makes just those (fantastical) claims.

Empirications therefore don’t seek ‘the truth’ (let alone “The Truth”). Empirications are mental acceptances of varying degrees of descriptive accuracy, such as theories and hypotheses.
Empirications can be largely accurate, somewhat accurate, or, after scrutiny, shown to be flatly inaccurate. And they must, by definition, be submitted to critical scrutiny. As I’ll try to show below in the verb section, many (all?) beliefs are, by definition, immune to this. Empirications are like opinions in that they can be easily revised toward greater descriptive accuracy or abandoned altogether in the face of contradictory analyses of empirically derived evidence. But not all opinions are empirications, since many opinions are not the outcomes of analyses of empirically derived evidence, but instead of unvetted or under-scrutinized conventional wisdom (see argumentum ad populum)—and of just plain wishful thinking.

2. Empiricate, the verb.
I was nearly dumbstruck a couple of weeks back by a conversation, on BBC radio, between a polite scientist and a creationist. They stood over the Grand Canyon offering their starkly different understandings of the Canyon’s origins. The scientist related her appreciation of the Canyon as a legacy of millions of years of erosion.
The creationist marveled at the Canyon as unmistakable evidence of the wrath of Noah’s Flood.

Can you sense the imbalance here?
The creationist countered the scientist’s appreciation by dismissing it as just another belief – like his belief – except that his belief was grounded in the One True Word of the One True God. But both viewpoints, he insisted, were beliefs. Yes, even hers.

Well, I beg to differ.
What in the world is happening here?
Riffing from a NYT piece on Scott Atran:
“Maybe it took less mental work than Atran realized to hold belief…in one’s mind. Maybe, in fact, belief was the default position for the human mind, something that took no cognitive effort at all.”

Do you sense the possible implications of that idea? If belief is a ‘default position’ – an easy option that demands the least cognitive effort – doesn’t that imply that other positions or options likely occur in human brains? ‘Positions’ – or activities more likely – requiring much more than the complacent stasis of simple, uncritical acceptance?

Indeed, might this not demark exactly the difference between complacent, uncritical acceptance and restless inquisitiveness?
Yet we have no single concept to distinguish these other activities, positions, or options that include restless inquisitiveness from the ‘default’ stasis called ‘belief’.

Is investigation just a synonym for unmoving and unmovable conviction?
Do scientists hypothesize from low- to no-effort belief, or from dynamic, challenging, and restlessly curious estimations, conjectures, and speculations? Do they hypothesize from unverifiable, unfalsifiable rote-learned dogma, or from the harder-to-grasp realm of the falsifiable, evidence-grounded plausible?

I’m striving to name—to newly conceptualize—the other, more cognitively demanding ‘positions’ (patterns) of conceptualization and deduction native to the human mind. Which of us does not estimate, or calculate? Many if not most of us simply call our provisional mental acceptances ‘beliefs’, even though these other, harder-to-generate thoughts aren’t simple, complacent, uncritical acceptances but tentative positions (or cognitive patterns) that we leave open to revision. Dynamic patterns. Not static ones.
Why do we call them beliefs? Because we haven’t recognized the need for a new—and vastly more accurately descriptive—concept?
If so, let’s start.

“Believing” is a very simple level of acceptance. “Hypothesizing” and “theorizing” are not. They take work. Young children believe very easily. Naturally, perhaps. It takes education and the greater facility of thought that comes with maturity, however, to learn how to empiricate.

When that creationist dismissed the scientist’s politely offered analysis, he wasn’t merely disagreeing: he was dissing countless hours of painstaking research, hypothesizing, theorizing, and other open-minded analyses. And he was able to do so only because we haven’t yet widely accepted a concept that distinguishes not merely ‘beliefs’ from evidence-based deductions, but also the low- to no-effort choice of believing from the greater-effort mind activities called hypothesizing, theorizing, and the conscientious analysis of empirically derived slates of evidence.

Soooo…whenever I read or hear a sentiment from a believer (of anything) that scientific theories are “just other beliefs” or “faiths”, or “just as faith based”, an (imaginary) gallon of gasoline spontaneously combusts within my cranium. Scientific theories – even those eventually abandoned – aren’t just ‘beliefs’. They are something much more substantial. They are falsifiable. Are beliefs falsifiable? (Consider it — consider conspiracy theories, alien abductions, and supernatural ‘revelations’.)

Cutting-edge scientific cognition doesn’t rely on dogma but on the marriage of skepticism and inquisitiveness. Look, I leave open the possibility that our current theoretical understanding of evolution might itself evolve—and it should, because, as it stands today, it’s much too mechanomorphic. Misleadingly so.
But even if it turns out that today’s theory has some fundamental inaccuracies, it’s still an empirication and not a belief. It wasn’t ‘revealed’, but deduced. And then tested. And tested. And tested. And tested. Through this scrutiny it has evolved, and still has a ways to go yet (imho). But it ain’t much of an article of ‘faith’. And it takes a believer – an obstinate, closed-minded believer like that creationist, and not an open-minded empiricator – to be so willfully blind to that.

In conclusion: recognizing, via a newly coined common English word, the vast difference between passive belief and the not-at-all passive, endlessly searching, and self-correcting dynamics of empirication would be a huge step forward on our species’ journey from ignorance and superstition toward something more comprehensively and objectively knowledgeable. Even if someone coins a different word for that concept. I don’t care about that.

I just want to be able to converse, fluently, using that concept (whatever its eventual name). I need it as a cognitive tool. So, I suggest, do you.
Wanna help me out? Leave me feedback, please.

Remember: you can believe in Xachtomokthorians. But you can’t empiricate them. Why? Because they’re simply not empirically observable or testable – just like a zillion other human beliefs. Even the most venerated of those.


1.
Is credulity an “either/or” equation? Must you either believe or disbelieve? Must it be exclusively ‘black or white’?
Or is it instead possible to assess your surrender of credulity by degrees: in varying ‘shades of grey’?

If you suspect that a possible analysis or description of a phenomenon or event might be somewhat or largely accurate, is that the same level of credulity as 100% certainty?

If not, must it be this: “conviction of the truth of some statement or the reality of some being or phenomenon especially when based on examination of evidence”?

Or is it this: uncertainty? (A synonym for ‘suspicion’ – according to Merriam-Webster Online).

Is it possible to assign diverse degrees of plausibility to your assessments and thoughts?
Or is it only black or white – “true or false” — certain or uncertain – credible or incredible – with no room for:
hunches, guesses, whimsy, impression, reflection, inclination, speculation, conjecture, suggestion, surmise, suspicion, estimate, opinion, presupposition, hypothesis, tentative conclusion, and theory?

If there is room for uncertainty, is it possible that you, an independently thinking human being, can reserve portions of your credulity and still function day-to-day?
If you once drove a taxicab whose steering rack broke, must you ALWAYS have a mechanic check your steering mechanisms before driving any vehicle, ever again? Or can you estimate — hopefully but not with full confidence — that the likelihood of such a recurrence is possible but not probable enough to inhibit you from driving again?

Is 100% certainty a necessary precondition for human actions?
Or do we frequently (mis-) assume that our actions are based on the illusion of 100% certainty called ‘truth’, while instead almost always operating within the grey area of 1% to 99%?
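
Here, for what it’s worth, is that grey area as a toy Python sketch – mine alone, with invented numbers: credence as a value between 0 and 1 instead of a true-or-false flag, and action requiring only ‘probable enough’, never 100% certainty.

(code)
# graded credence instead of black-or-white belief (all numbers invented)
credences = {
    "the steering will fail again today": 0.001,   # possible, not probable
    "the sun will rise tomorrow":         0.999999,
    "this hunch about the weather":       0.30,
}

RISK_THRESHOLD = 0.05  # drive anyway if failure-credence falls below this

for claim, p in credences.items():
    print(f"{p:>8.6f}  {claim}")

risk = credences["the steering will fail again today"]
print("Drive today?", "yes" if risk < RISK_THRESHOLD else "call the mechanic first")
(end code)

The ex-taxicab driver doesn’t need certainty to drive again – only an estimate that the risk is improbable enough. That’s the 1%-to-99% grey zone in action.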

2.
If you are a thinking human being who is aware of these grey areas and you articulate it, and your articulation is met with disrespectful derision and putative ‘corrections’, what ought you do?

If your own understanding of these shades of grey is labeled as ‘challengeable’, ‘self-delusionary’, and just plain ‘erroneous’ by a single voice unable or unwilling to comprehend your relative meanings, and these incurious, uncomprehending, & ponderously lecturing challenges seem to pursue your written thoughts in cyber-places you enjoy, when do they become ‘trollings’?

If your shades of credulous grey are consistently and egregiously mischaracterized as ‘beliefs’ – which, you have explained ad infinitum, mean to you ‘convictions’, implying the 100% certainty you strive to avoid – and your troll knows this and insists on characterizing your written thoughts with this word anyway – does this constitute taunting?

If the troll knows this, and also knows that you accept, for him, his broader definition of ‘belief’ that means, for him, many (but not all) of the shades of grey, and he nevertheless insists on characterizing your written thoughts with his meaning in lieu of yours, is he being deliberately and provocatively disrespectful?

At what point are you allowed to call the troll an uncomprehending, inconsiderate intellectual imperialist: obviously more interested in ‘proving’ the putative validity of his beliefs than understanding the nuances of your personal assessments of your own credulity?

When ought you complain to others (including blog-masters) of the inhibiting effects of such trollings?


I’ve suggested before the simile that concepts are like lenses: they can yield clear images, fuzzy images, distorted images, tinted images, etc. Concepts attempt to represent and articulate our perceptions of the universe beyond our skins. Some seem to work better – yielding greater clarity – than others. They are meanings in word form. Yet meaning is subjective: languages are dynamic, roiling, cognitive seas of similar yet not consistently identical meanings. Thus, not all people understand all concepts identically. What follows then is subjective, not objective. It is only a small current eddying out from my quiet cove of our shared sea of similar but not identical meanings…

Walter J. Freeman writes, “Meanings have no edges or compartments. They are not solely rational or emotional, but a mixture. They are not thought or beliefs, but the fabric of both… Meaning is closed from the outside by virtue of its very uniqueness and complexity. In this sense, it resembles the immunological incompatibility of tissues, by which each of us differs from all others. The barrier between us is not like a moat around a castle or a fire wall protecting a computer system; the meaning in each of us is a quiet universe that can be probed but not occupied.” — How Brains Make Up Their Minds, p.14

Reading this was enough to convert me from a ‘prescriptionist’ to a ‘descriptionist’ regarding dictionary definitions. It was enough to reassure me that my own idiosyncratic aggregates of meanings aren’t wrong, but simply, instead, mine.

This doesn’t mean that ‘anything goes’. It means instead that I should accept that what I understand of concepts, thoughts, and their larger aggregations called ‘beliefs’ (and plauses) isn’t necessarily universal—let alone ‘correct’. It means I must take special care when trying to articulate whatever I mean while using words that mark concepts about concepts and conceptualization.

Some concepts are qualitatively different than others. “The Sun,” for example, is a concept whose meaning is a celestial entity most humans experience tangibly. Belief is not required for one to understand the meaning of the words “the Sun”. Simple sensory awareness is sufficient.

Beliefs depend on meanings. But beliefs are only a kind of thought-pattern – and a special kind at that. If concepts can be understood (by me, at least) as lenses, then what are beliefs? At first, I thought, they’re telescopes: constructions of sequenced “lenses” that allow understanding of information beyond the realm of one’s senses.
But that felt wrong. Then I realized why.

Beliefs do something different than simply allowing extension of our senses. If concepts are lenses, beliefs are analyses, not instruments. They are pattern-recognition analyzers, assembled from multiple concepts.

A familiar example: belief in the biblical Genesis myth makes the Grand Canyon a legacy not of millions of years of erosion, but of the Noah’s Ark flood. Another: the perception that the stars “never move” (discounting the separately conceptualized planets) makes the night sky into ‘firmament’ rather than space. The “patterns” in the firmament (we are, if anything, a pattern-seeking species) can then represent earthly animals (zodiac), which the moving lights (planets) and luminaries (Sun & Moon) predictably transit. From this pattern-recognition analysis flows the unscientific fun called astrology.

In the second case, the key concepts are: the (seemingly) fixed nighttime sky, the plainly contrasting and distinctive moving lights (planets), and the two dominating luminaries – plus the human propensity to assign agency to natural phenomena in spite of a dearth of supporting evidence. Simmer for a few thousand years, add your local pantry of cultural spice, and voilà! — a seemingly “naturally occurring” divinatory system – an elaborate structure of belief. A pattern-recognition analyzer.

Beliefs arise from thoughts, which rely for their existence on the meanings the brain indefatigably produces (see Freeman’s book), but thoughts are not by themselves beliefs. A decision to play a Detroit Cobras CD is not a belief, but an impulse in cognitive form.
Whimsy is not belief, but merely a stream of consciousness in identifiable conceptual segments.
Beliefs are ‘meatier’ – they are thought complexes. You needn’t ‘believe in’ the Sun – you can feel it. You can also conflate the feeling of sunstroke with a feeling of malevolent supernatural agency, but forming that into a belief that the Sun is a God (or demon) requires many other conceptual components (like ‘malevolence’ and the multitudinous constituents of supernaturalism).

If beliefs are constructs, what are their components? Freeman says meanings, both cognitive and emotional. Concepts, of course, are meaningful by definition. So are differentiations and comparisons. Humans cogitate in (at least) two symbiotic ways: differentiation, and comparison. Differentiation perhaps comes first – but only barely. Comparisons begin as soon as we’ve differentiations to compare! Differentiations and comparisons are thoughts—meaningful thoughts—but not beliefs. Differentiations and comparisons are used to develop the conceptual aggregations called “beliefs”. They are the “meanings” stitched into Freeman’s “belief-fabric.” They require no belief on their own, and can serve many different beliefs – and many incompatible beliefs simultaneously!

Differentiation and comparison are meaning-in-motion, rather than meaning-as-fixed. They are dynamic rather than passive. Passive, ‘fixed’ meanings (if such a concept is actually valid) seem to mimic the ‘certainty’ of beliefs, while the dynamics of differentiation and comparison are restless, probing, and unsatisfied. For example, I needn’t ‘believe in’ the difference between a moving planet and a fixed star – I can observe it. It’s a differentiation: the other celestial lights that are brighter than stars but magnitudinally smaller than the two luminaries aren’t “stars.” Their movements make them qualitatively different. Moreover, they do not move in unison but on their own idiosyncratic – yet predictable – schedules. Once I perceive this (if I am the first of my people to do so), I make the distinction via a new conceptualization. However, to extract further meaning – like divination – from that distinction, I must have access to other meanings, which I then assemble with my new distinction and stitch together as a belief.

“The Sun is like fire” is a simile: a comparison (and a fairly accurate perception, at that). However, “The Moon is female” is a metaphor – albeit one not necessarily understood to be merely a metaphor. The Moon earned its “femininity” from the observation that female human reproductive cycles are synchronized with the lunar cycle. Moon goddesses—yet more elaborately structured beliefs—rise in no small part from this observation. But assigning divinity to the Moon requires the additional component of the concept of divinity. Recognizing the synchronicity of the lunar and menstrual cycles isn’t enough on its own.

Similes make their nature as comparisons plain, via the words ‘like’ and ‘as’. Metaphors are more deceptive. They conflate different concepts. This fudging brings them to the brink of belief. Simile: “God is like the Sun – warming and all-seeing.” Metaphor: “God is the Sun.” Which, if taken literally, can birth many beliefs (in caricature): “Cover your head when in His presence lest He smite you.” And “Night is the Devil’s time; he uses the Moon for evil while God sleeps.”

Also—and vitally: metaphors, to be widely understood, must rely on more ‘fixed’ concepts than on dynamic, unsettled ones. Because of this, it seems to me that science strives to use no metaphors and few similes. Science seems instead to be the human skill of differentiation ‘on steroids’. (That’s a metaphor.) This now highly developed skill has illuminated countless patterns of nature that our recent ancestors were quite simply blind to. Science is popular for its findings, but not for its language – because people, it appears, prefer the easier arts of simile and metaphor to the cognitively harder skill of differentiation. Great fiction, for example, overflows with stimulating and surprising similes and metaphors. Science’s differentiations-on-steroids, on the other hand, require years of education to master (think of the Latinate classification system used in biology). Thus, when science popularizers employ literary styles and techniques (like comparisons), they sell many more books and teach many new discoveries, but their colleagues then nitpick the presentation of the findings!

So, are scientific findings ‘beliefs’? I’d guess this would seem to be a no-brainer. Surely, Darwin’s theory of evolution through natural selection would seem to be every bit the pattern-recognition-analyzer as the biblical creation tale. But I think otherwise. The findings of science are never ‘fixed’, but always malleable. They are meant to be accepted provisionally, not dogmatically. Scientists – the good ones, at any rate – are taught to expect the emergence of evidence that challenges their conclusions: ‘Think it possible you may be mistaken’, quotes scientist/mathematician Jacob Bronowski (thanks to enhabit for the quote).

If I notice that ‘belief’ is used in public rhetoric most frequently as a synonym for ‘conviction’ while we mere commoners sloppily conflate it with the uncertainties of opinion, I’m making a differentiation, drawing a distinction — a distinction that I hope illuminates a hitherto occluded nuance. This doesn’t mean that it’s “wrong” to conflate the two (although I can imply it). Instead, I’m striving to accurately describe a perception that arises from my own personal cove of meanings. I’m trying, I suppose, to be descriptionist instead of prescriptionist (as I was previously). And I’m trying it while using verbal markers that likely carry, for me, differently nuanced conceptions than they do for many if not most others.
With that in mind…

I perceive a qualitative, functional difference between pattern-recognition analyzers constructed from pre-scientific concepts and comparisons and the newer pattern-recognition analyzers constructed from concepts derived much more thoroughly from differentiation. In the former case, the solemn assurances of Authority are meant to instill certainty—conviction. Faith.
In the latter case, each new generation of differentiators is taught not complacency but skepticism: to assume that not only are long-held human assumptions questionable, but that even one’s own conclusions are. One’s own conclusions are only as sound as the evidence on which they are based – and new slates of evidence have a vexing tendency to turn up all the time.

Our new pattern-recognition analyzers are provisional, not fixed, because their constituent parts – differentiations – are dynamic, not static.

This is the difference between “belief” and “plause”. The difference between a pocketless mental straitjacket (convictions) and an armless utility vest featuring dozens of pockets to hold your (handily movable, not fixed-into-place) conceptual tools – the tools necessary for open-mindedness, for open-minded analyses of the bewildering yet beautiful universe we are conscious manifestations of.

The perception and conceptualization of differences and distinctions does not betray the presence of belief – it betrays its opposite: skepticism. Restlessness, not complacency. Openness, not firmament.

Question your assumptions and convictions. Examine them from the inside out: carefully differentiate their constituent concepts. Do the concepts comprising the belief-fabric yield images that you deem clear, or unfocused? Are they distorted? Fragmentary? Inaccurately tinted? Are they clear on the periphery but troublingly fuzzy in the center?
Do they accurately comport with your own understandings?
Or are some of them akin to foreign intrusions from a mind whose meanings, on close examination, seem alien, or biased, or simply obsolete?

I’ll grant you that it’s not easy, but you just might be as much rewarded as surprised at what you uncover.

Read Full Post »

What do scientists REALLY mean when they use the words ‘belief’ and ‘believe’?

Do they mean “mental acceptance of a claim as truth” (definition 1 at http://en.wiktionary.org/wiki/belief)?

Seems logical, right?
But perhaps not. See: http://en.wiktionary.org/wiki/scientific_method
Noun
scientific method (generally referred to in the definite, as the scientific method)
1. (science) A method of discovering knowledge based in making falsifiable predictions, testing them empirically, and preferring the simplest explanation that fits the known data.
Usage example: While not perfect, the scientific method plays a crucial role in approaching objective truth.

Is it merely trivial that the usage example equivocates? ‘Approaching objective truth’ hardly denotes absolute certainty.
Meanwhile, Wikipedia’s article on the Scientific Method offers another substantial caveat:
All hypotheses and theories are in principle subject to disproof. Thus, there is a point at which there might be a consensus about a particular hypothesis or theory, yet it must in principle remain tentative. As a body of knowledge grows and a particular hypothesis or theory repeatedly brings predictable results, confidence in the hypothesis or theory increases.

http://en.wikipedia.org/wiki/Scientific_method

Note the wording: ‘confidence in the hypothesis or theory increases’.

When I first read it, some 7-8 months ago, the Wikipedia article – over 6,630 words of it – didn’t once employ the words ‘belief’ or ‘believe’. Not once. (Although I think it now does, in a digression of some sort.)

Belief, we might begin to infer, is not then a normal element of a scientist’s professional vocabulary. This isn’t to imply that scientists don’t commonly use the word in its noun and verb forms – every fluent speaker of English likely uses the words multiple times in any given day: they are surely among the language’s more common words.
However, the same can be said for the words ‘thing’ and ‘stuff’.
Perhaps then, ‘belief’ and ‘believe’ are words with wider casual meanings than we tend to realize: catch-all words commonly used in sloppy articulation. Perhaps ‘belief’, in everyday casual usage, typically denotes all manner of mental acceptances, from the impenetrable castle battlements of ‘faith’ to the leaky tent-walls of ‘conjecture’. Just as ‘things’ and ‘stuff’ often denote not things at all but events and activities, as in, “What have you been doing lately?”, to which the reply might be, “Oh, just web-surfing and blogging; you know, stuff like that”.
How often do you say “I believe such and such” when you actually mean, “I’ve heard such and such, and it makes sense to me although I can’t precisely prove it yet – but it makes enough sense that the likelihood of its truth is now one of my opinions”?
Think about it.
Worse, isn’t it much easier to use “I believe”, even though it’s not merely imprecise but out-and-out incorrect?

Would we be better served if English had a single word, corollary to but not synonymous with belief, that means, “I find such and such a plausible proposition; not enough to accept it as an absolute truth, but enough for it to seem equivalent to a scientific hypothesis or even a theory”?

To be fair, it’s unsurprising that no such word yet exists. English, after all, evolved within a worldview wherein explanations for being stemmed primarily from religious sources. Belief, then, with its lack of any need for empirical support, was the only pertinent manner of ‘mental acceptance’ for most of English’s evolving existence.
Times have changed however – or people have, at least. We no longer depend upon religious authorities for explanations of existence and for a sense of humankind’s importance in the cosmos. The religious worldview, long unchallenged, has eroded steadily as a new worldview, the scientific, has gained widespread favor for its reliance on empirical investigation and for its painstaking attention to means of verification prior to offering its findings.

The religious worldview, however, is hardly admitting its failings and gracefully retiring. Religionists instead have become antagonistic to science and to the scientific worldview. Because science cannot find evidence to support belief in the existence of deity, or for all manner of religious doctrines, religionists attempt to discredit science. Because the lack of evidentiary support for ancient religious beliefs and dogmas casts deep shadows of doubt over the religious worldview, religionists demonize scientists.

Worse, and wrongly, they project their own manner of mental acceptance – belief – onto scientific understandings of the world. Language, as implied in the preceding paragraph, accidentally assists this mistaken conflation. Religionists mischaracterize scientists as ‘priests’ of a new and yet godless religion, while language again accidentally assists the mischaracterization.
Thus millions of people view scientific findings with deep skepticism if not outright incredulity, despite these findings, in the main, being convincingly verifiable and accurately predictive.

In sum, the scientific worldview, despite being little more than an innocent aggregate of millions of mostly valid fact-sets and explanations that ‘approach objective truth’, is under siege by the comparatively irrational worldview that preceded modern science for most of human existence. Scientists aren’t fanatics (though religionists, projecting their own personas onto those they deem their ‘opponents’, contend otherwise). Scientific training teaches thoughtfulness and respectful discourse: scientists are not, in the main, a truculent lot. Scientists (ideally) neither proselytize nor preach—they report and teach. This non-confrontational manner of communication effectively makes scientists poor defenders of their findings when faced by the irrational and fanatical religionists who besiege the scientific worldview.
Scientists must therefore defend their findings and worldview while hampered – with the proverbial ‘one hand tied behind their backs’.

The problem, however, is merely perceptual. And conceptual.

It’s not simply that lay-people aren’t trained to parse the differences between unverifiable beliefs and evidentially supported theories and hypotheses, although this is surely an element in the mix. The fact that our language is rooted in words whose meanings are not only imprecise but anchored in an ancient, belief-dependent world means something more, and worse: it means that conflating the scientific community with priesthoods comes naturally to those of us using the imprecise conceptual system of English.

And why not? If scientists have no better word choice than “We believe in the Theory of Evolution”, it sounds automatically equivalent to a priest’s or pope’s “We believe in the God revealed to us in the Bible.”
The tragedy, of course, is that scientists don’t believe in the Theory of Evolution; they subscribe to it because it consistently and accurately predicts and explains the present condition of life on Earth, its ongoing changes, and the evidence of its past. It requires no ‘leap of faith’, as does belief. It requires no ‘conviction.’

How much better would the scientific worldview be understood if its representatives could convey that scientists don’t ‘believe’?
From Dictionary.com:
‘worldview’
n. In both senses also called Weltanschauung.
1. The overall perspective from which one sees and interprets the world.
2. A collection of beliefs about life and the universe held by an individual or a group.
‘Worldview’, in today’s toxic mix of conflicting cultural milieus, isn’t trivial. Far from it. The religious worldview finds the scientific worldview offensive and amoral—if not outright immoral. Especially the Abrahamic religious worldview.
Every rational person, even non-fundamentalist ‘believers’, has a stake in this cold war between religious belief and scientific inquiry.

The religious worldview relies on the credulity of under-educated human minds.

The scientific worldview relies not on divinely imparted scriptures or dogmas, but on empirical investigation and exhaustive experimentation.
Science doesn’t ‘proselytize’, it reports.
It doesn’t ‘believe’, it theorizes.

Meaning no. 2 listed above, then, requires a different fourth word. It requires a word that does not imply ‘mental acceptance without need of empirical support’ but instead means quite specifically ‘provisional mental acceptance, based on empirically obtained evidentiary support’.
Not ‘beliefs’.

So here it is, sitting unnoticed right under our noses for decades (from plausible: ‘seemingly or apparently valid, likely, or acceptable’) –

plause: noun, 1. a tentative mental acceptance of an assertion dependent on empirically obtained evidentiary support; 2. an opinion, open to disconfirmation, that certain sets of empirically obtained facts form a credible basis for eventual designation of the phrase ‘approaching objective truth’; 3. a mental acceptance of a probability deduced from empirically obtained evidentiary support; 4. ‘a plausible reckoning’ (informal)

plause: verb, to tentatively or provisionally accept an assertion or conclusion deduced from empirically obtained evidentiary support

Example: “I simply can’t plause that humans, as Richard Dawkins believes, are merely ‘machines’ whose sole purpose or function is the replication of one’s personal genome. My understanding of biology precludes the probability of Dawkins’s reductionist analysis.”

And: “It is one of my plauses that the so-called Big Bang is likely only one of many such phenomena that emerge every now and again from the fabric of space-time – and I have tentative evidentiary support for this plause.”

OR, “It is one of my plauses that the Earth and the other planets of the solar system revolve around the Sun, and that the solar system in turn revolves around the galactic center.”
Not controversial, that – at least not in this day and age. Indeed, it’s a valid enough notion to stand as ‘effectively proven’.
So why not make that sentence’s sixth word ‘beliefs’ instead of ‘plauses’?
Because belief denotes a ‘mental acceptance of a claim as truth’ that includes no threshold of evidentiary support.
It’s long past time to wholly cede the word ‘belief’ to the human fascination with religion, superstition, and conspiracy theories.

People in favor of reason, it’s high time for a new concept reflecting the truth of how we actually think.

I expect to have to edit this post, perhaps several times.

Read Full Post »
