Archive for the ‘Beliefs’ Category

Part 1: “A solipsism as big as all existence”

Lazare’s piece, like so much of this theism/nontheism discourse, is grounded in the longstanding, conceptually buttressed but objectively inaccurate premise that humans exist in the universe but are not (necessarily) the universe itself – the universe aware of itself. He seems to discern this, though how clearly I can’t tell, in the review’s penultimate sentence:
“In short, humanity creates meaning for itself by liberating itself so that it can fulfill itself. This is also a solipsism, but one as big as all existence.”

There’s an awful lot of very important meaning in those 28 words. I’ll leave Lazare’s use of ‘meaning’ aside for the moment, because I think that this – “liberating itself so that it can fulfill itself” – begs for a critical analysis.
What is humanity ‘liberating itself’ from? What is restraining humanity from ‘fulfillment’?
The material world?
Is this what’s meant by ‘transcendence’? A perceived need to escape the world we seem to be ‘born into, naked and shivering’? (Who wrote that? A poet? Shakespeare? Anyone know?)

If so, I strongly suspect that this is the real kernel of religion, and here’s why:
Science informs us that the atoms comprising our corporeal molecules can be split into smaller atoms or fused into larger ones, but the energy they embody is never genuinely destroyed. These aggregations of (subatomic) energy are effectively “immortal”. Therefore what I call “my body” is comprised of energy packets that originated in the Big Bang. These energy packets of me-stuff were once the innards of stars before stellar fusion clumped them into the heavier elements that eventually gravitated into the third rock from the star nearest my typing fingers. After abiogenesis (whether this occurred beside deep-sea volcanic vents or in sunlight near the shores of the planet’s early oceans is a question far from settled) the me-stuff of rock became proto-bacterial life. Then, later yet—after inestimable millions of generations of evolution and flat-out gene-swapping (yes, bacteria can do this)—this microbial patterning of Big-Bang-Stuff became even more complex life before it became me. And since I must consume large quantities of complex life to replenish my me-stuff, I continue to convert biologically re-patterned Big-Bang-Stuff into me-stuff.
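To ground that claim in one concrete case (my illustration, not part of Lazare’s review): the proton-proton chain that powers Sun-like stars rearranges matter without destroying the underlying energy,

$$4\,{}^{1}\mathrm{H} \;\longrightarrow\; {}^{4}\mathrm{He} + 2e^{+} + 2\nu_{e}, \qquad \Delta E \approx 26.7\ \mathrm{MeV},$$

in which roughly 0.7% of the hydrogen’s rest mass is released as radiant energy via $E = mc^2$. Nothing vanishes from the cosmic ledger; the Big-Bang-Stuff is only re-patterned.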

So what am “I”, then?
Evidently: ancient stardust as a biospherically engendered, locally mobile, (potentially) self-replicating pattern of the Big Bang in ongoing evolution. orlox incisively reconceptualizes it as ‘process history’:

If only process is fundamental where does the substance come from? If substance is a property of a second level of organization (so atoms have substance but electrons are just process) then atoms should be considered fundamental.

If process is fundamental, then all is process. An entity is a process history, not a discrete substance, even if defined as somehow more than the sum of its constituents.

Wonderfully articulated (although the italics are mine). Yet I’m something more than merely the Big Bang in its ongoing evolution (as if that isn’t an awesome enough realization!) – I’m also a self-aware facet of the Big Bang in ongoing evolution. We humans, in short, are something vastly greater than the mere consciousnesses that seem to “inhabit” our brains.

And this ain’t theology or religion, folks – it’s scientific deduction. And an inevitable scientific deduction: any sufficiently intelligent species of this biosphere or of biospheres elsewhere in the cosmos will likely eventually deduce this. It’s a ‘truth’ out there awaiting discovery: this universe evolves its energies and processes into stars, yes, but into intelligent biological patterns as well (we’re that theory’s inescapable supporting evidence). We don’t live “in” the universe as if it were a house, or a country, or even a potentially domesticable eco-system: it is already a stellar eco-system that evolves itself into life – into us. We’re the stellar eco-system becoming aware of itself. We’re not “in” it, we are it.
We are “a solipsism as big as all existence”, although I’m probably using that phrase very differently from Lazare’s intention. (And this might be the only instance I can imagine of the concept “solipsism” being more genuinely useful than frivolous). The human species is the universe-as-solipsism, or, at the very least, the biosphere-as-solipsism.

Yet our inherited common language concepts don’t easily represent this realization. We much more easily say very odd things like, “I have a body” rather than the more accurate, “I am this body”. We seem to intuit that we inhabit our bodies, as if our individual lives are rentals on our way toward home-ownership in an entirely spurious hereafter.

And maybe it’s worse than merely linguistic: perhaps we’re consigned, by ancestral evolution-in-wild-environment of our very specialized primate-to-human-being consciousness, to perceive ourselves as ‘within’ the world but not exactly of the world. We seem hopelessly stuck with this sense of separateness – even though we’re not merely the atoms, measurable in kilograms or pounds, of our bodies but the entire universe aware of itself, as biospherically engendered, locally mobile ‘process histories’.

But are we hopelessly stuck? Must we eternally experience our lives from ‘in here’ looking ‘out there’? Might we ever learn to easily perceive our unity with the rest of the cosmos, or must we instead remain intuitively prey to the feeling of separation—which is the probable source of the ancient concept called ‘soul’?

Well, I can’t pretend to assure you that we can overcome the ‘intuitive perception’ part of that problem—maybe we can, maybe not. I can’t offer any predictions on that. But I don’t see why we can’t begin to more accurately conceptualize our inescapable unity with ongoing creation. It will require rethinking—as orlox has already begun to do—‘things’ (nouns) into events or actions (verbs). Or perhaps creating a specialized hybrid of verb and noun that represents living patterns as processes instead of entities.

It will require another even more important revisioning too: a sense that we’re not small and vulnerable (although it’s intuitive to think that we are), but that our smallness and vulnerability is a kind of illusion. We are the universe aware of itself. We’re the whole damned thing—all of the events that ever happened, even those we’re not yet aware of. Our ignorance of the events beyond our personal perceptions shouldn’t diminish us or our sense of self-worth, however: we are the universe examining itself – and such examinations, especially by those just beginning to learn how to examine, cannot be concluded in an instant, or in a lifetime, or even in a millennium. Which implies that in this phase of the universe-examining-itself, each of us is an invaluable examiner.

Think of the surface of a sphere: where is its center? No single point on it is privileged—every point can serve as its center, right? That’s the sort of metaphor necessary for the conceptualization we need to cleverly incorporate into our future verbiage – into our grammar of unity. (And no, I can’t claim to have worked up any such cleverness myself – but I’m hardly a philosopher. Someone else, someone much more clever than me, will eventually do it.) Each of us is the universe’s center: none of us is more important than any other.

It’s hardly unreasonable to characterize us as a ‘young species’: originating about 200,000 years ago, as an evolution of the line(s) of hominids that diverged from the ancestors of contemporary apes about 6 or so million years ago. Grebes, in contrast, have a fossil record going back approximately 24 million years. Permit me then this metaphor: we’re an ‘adolescent’ species, growing from childlike naïveté toward a more richly experienced and knowledgeable – but still youthful – maturity.

And how do children learn? Via belief: by accepting as ‘the truth’ whatever their parents and elders tell them. Yet if this epistemology—rote-learning and careful imitation—were our only option, we’d still be knapping Neolithic flint and stitching sinew through pelts to clothe us. Human learning and cognition transcend the limitations of belief and faithful imitation: we innovate, not only technologically but, and much more crucially, conceptually. This symbiotic capacity to innovate technologically and to innovatively conceptualize allowed our ancestors to evolve beyond Neolithic culture, to evolve in turn beyond the Bronze Age, and, with breathtaking rapidity, to evolve into this ultra-technological and ultra-inquisitive culture that allows us to question and converse with one another from the privacy of our personal dens. We are a species unlike any to have preceded us (that we know of); and yet we are even more than that: we are this local neighborhood of the universe so hyper-aware of itself that we can discern the patterns of universal energy that comprise our individual selves and our greater self, the cosmos.

We didn’t discover this new level of knowledge via belief. In our recent (metaphoric) puberty, one of us, named Augustine, wrote:

There is another form of temptation, even more fraught with danger. This is the disease of curiosity. It is this which drives us to try and discover the secrets of nature, those secrets which are beyond our understanding, which can avail us nothing and which man should not wish to learn. —Augustine, Confessions (via Google Book Search)

Shocked? If not, I suggest you should be. This sentiment by Augustine (a ‘Saint’, no less) is the philosophy of the Dark Ages – and of faith.
Contrast that with this:

It is an essential part of the scientific enterprise to admit ignorance, even to exult in ignorance as a challenge to future conquests. As my friend Matt Ridley has written, ‘Most scientists are bored by what they have already discovered. It is ignorance that drives them on.’ Mystics exult in mystery and want it to stay mysterious. Scientists exult in mystery for another reason: it gives them something to do. More generally…one of the truly bad effects of religion is that it teaches us that it is a virtue to be satisfied with not understanding.
—Dawkins, The God Delusion, pp.125-6

Thank goodness for human curiosity. (But what a pity Augustine and his ilk influenced our ancestors for so many disease-ridden centuries, effectively obliterating the proto-science of the classical world and pushing the development of the scientific method a millennium and a half further into the future than it needed to be.)

So, I’m suggesting that humans, especially in the past couple of centuries, have uncovered thousands or even millions of what had previously been “secrets and mysteries” of the cosmos, and that these discoveries are collectively forming a very different paradigm from this:
Lonely souls inhabiting fragile, clawless, fangless, unarmored bodies, bodies too often prey to disease, and yet made to endure a harsh and cruel world, spending a lifetime in a kind of post-womb exile before earning—or failing to earn—a rescue back into a state of perfect love, nurturance, and protection.
Let’s call it the ‘exiled-child’ paradigm.

Conversely, human science (the biosphere examining itself) – inspired by a symbiosis of curiosity, adventurousness, and imagination – has uncovered a markedly different paradigm:
We are comprised of indestructible – “immortal” – cosmic energy, and we cannot be removed from our cosmic environment. We are not separate from the cosmos but the cosmos itself. We live no longer in the wild environment of our early human forebears but in a domesticated one: a world wherein we could (and hopefully will after maturing a bit longer) eradicate poverty and its associated sufferings.

Human science has found not only no sign of supernatural agency; it has found, progressively, discovery by discovery, no need for such an extra-universal agency. Einstein found belief in a personal god naïve, while the implications of more recent discoveries leave no compelling need for even a Spinozan/deist kind of ‘celestial watchmaker’: an impersonal god who set the universe up and then apparently vanished (and to where?!).

Yet human science isn’t just a bunch of nerds scheming to overthrow morality and uproot venerated ancient faiths; human science is the Earth itself investigating its cosmic neighborhood and its own holistic processes and constituents. This isn’t necessarily a ‘cosmic purpose’ though. I’d characterize it instead as the inevitable outcome of any planet’s biosphere, after evolving itself into environments (because life itself creates its own environmental conditions) that allow the emergence of a sufficiently inquisitive intelligence.

The ‘exiled-child’ paradigm might be plausible if we really were separate from the cosmos, but the Earth-as-human-science hasn’t found any evidence to support that. Then why do we feel that way intuitively? Probably as a consequence of our ancestors having evolved in a wild and hazardous environment instead of in the sort of domesticated one that has allowed our minds the peace and leisure to use our brains for much, much more than our ancestors had time to do. Fifty thousand or more years ago, feeling holistically immortal as the living planet just might have made you-the-individual somewhat less suited to survive the various predators, animal and hominid, that our ancestors probably had to contend with.
But we don’t face the threat of predation any longer – and haven’t for most if not all of the centuries since the Bronze Age at least. Which means we have found, at long last, an opportunity to evolve a sense of unity with the universe – as Zen Buddhists have striven to develop on an individual basis for more than a millennium. (And Buddhism, despite its curious, almost incongruous concerns with reincarnation, is a nontheist ‘religion’ – I put it in quotes because I don’t think of it as a religion so much as a philosophy.)

We can feel whole with the cosmos instead of feeling alien, vulnerable, and forlorn. We needn’t feel any longer that we are on trial for a berth in an afterlife: a manufactured soul undergoing a perverse sort of moral examination. We can instead begin to feel that we are the examiners – because we are!

Who wouldn’t welcome that?

Ah, actually, some folks would: the faithful. Adherents of the exiled-child paradigm.

What’s the solution? Education – but not dogmatic education. Lessons in not what to think but how to think: lessons in logical fallacy and in the scientific method; lessons in comparative religion and in all manner of ancient mythology alongside world literature. Because, as a guest on a recent ROS program said, we are “the storytelling species.” Education in the metaphors underlying our languages, too. (We chronically conflate metaphor with the reality the metaphor tries to illuminate by comparison, causing hopeless entanglements of semantics and keeping people from understanding one another’s points of view. In science, for example, Dawkins’s “meme = gene” metaphor is taken literally instead of metaphorically, and as a result folks are wasting years trying to assess “how memes operate” – as if metaphors are real!!!)

I’m running out of time, so I’ll wrap up hastily, post this, and link to it on ROS.

What I question fundamentally about Lazare’s review is his premise that “…humanity creates meaning for itself by liberating itself so that it can fulfill itself.”
I’ll be referring to this objection when I write part 2 of this response later (and it will be much shorter than this). I suggest the only liberation necessary for humans to achieve a more fulfilling existence is liberation from the exiled-child paradigm. We’re rapidly outgrowing it. Manning on ROS wrote:

Are modern religions essentially adult versions of the story of Santa Claus? If so, does the editor’s comment that there is a Santa Claus tell us something more profound than how to comfort a child?

I would suggest that we as a species are (metaphorically) in the phase when some of us have deduced that Santa doesn’t exist except as a character of myth. Others of us feel a powerful need to deny this. Einstein wrote: “The idea of a personal God is quite alien to me and even seems naïve.”
I agree – but I would NOT say the same for an interpersonal god. Why not honor one another as gods and goddesses? Don’t the monotheisms imply that we should?

The energy and matter comprising us is earth and water, powered by sunlight and freshened by air. We each are ancient stardust made conscious by the seeming miracles that follow Earth’s absorption of sunlight. Therefore:
Revere all other humans: they are you in different bodies. They see, on your behalf, what you cannot. On your behalf they hear what you cannot. On your behalf they smell what you cannot. On your behalf they taste what you cannot.
And on your behalf they feel what you cannot.
Revere all other creatures: they may not ponder as profoundly as you, but they feel just as deeply.
Revere the plants: they feed you, whether directly or through animals that consume them, that you consume in turn.
Revere the mountains and the valleys, the forests and the deserts, the wild steppes and the tamed plains. Revere all water, no matter its amount. Earth and water combine with sunlight to make you and all other life.
Consider carefully – with empathy as your guide – the effect on other creatures and people of any action you take.
After empathy guides you, choose the action that harms the fewest other sentient creatures.
A Cosmic Perspective


The cosmos is within us. We are made of star stuff. We’ve begun to wonder at last about our origins, star stuff contemplating the stars, organized collections of ten billion billion billion atoms contemplating the evolution of matter, tracing that long path by which it arrived at consciousness here on the planet Earth and perhaps throughout the cosmos. Our obligation to survive and flourish is owed not just to ourselves but also to that cosmos, ancient and vast, from which we sprang.

—Carl Sagan, quoted by the religious skeptic Michael Shermer—who calls the passage “spiritual gold”—in Why Darwin Matters: The Case Against Intelligent Design

Finding that quote reminded me of something I extemporaneously made up over about a ten-minute span while participating in Radio Open Source’s (misnamed) Daniel Dennett thread last year. It’s nothing more than a quick distillation of precepts I try to live by, precepts I’d never before tried to codify:

The energy and matter comprising us is earth and water, powered by sunlight and freshened by air. We each are ancient stardust made conscious by the seeming miracles that follow Earth’s absorption of sunlight. Therefore:
Revere all other humans: they are you in different bodies. They see, on your behalf, what you cannot. On your behalf they hear what you cannot. On your behalf they smell what you cannot. On your behalf they taste what you cannot.
And on your behalf they feel what you cannot.
Revere all other creatures: they may not ponder as profoundly as you, but they feel just as deeply.
Revere the plants: they feed you, whether directly or through animals that consume them, that you consume in turn.
Revere the mountains and the valleys, the forests and the deserts, the wild steppes and the tamed plains. Revere all water, no matter its amount. Earth and water combine with sunlight to make you and all other life.
Consider carefully – with empathy as your guide – the effect on other creatures and people of any action you take.
After empathy guides you, choose the action that harms the fewest other sentient creatures.

Is this religion? Or morality? Or simple ethics?

I think it’s simple ethics. An ethics that needs no further embellishment, or sanction via an unverifiable supernatural entity. Not once do these precepts mention ‘god’. Not even ‘divinity’. I would suggest that this simple creed is perhaps a purer form of ‘moral goodness’ than the morals propagated by most religious authorities.

My favorite route for going to church is the grueling two-hour climb up the Maynard Burn Trail, to 7,000 feet above sea level and a 200-mile view in any direction. It’s one of the world’s most magnificent cathedrals, and I usually have it to myself.
On reaching the summit, I sit, pant, and gawk in utter awe.
Every time.
The world I see up there is worthy of reverence – and even of veneration. But not of religion. Or of the sex-obsessed, freedom-restricting domain or discipline called morality.


Is my novel concept ‘empirication’ merely a convoluted synonym for hypothesis?
Hypothesis is only a fragment of it. Allow me to explain, first by offering the noun senses. Then, and much more vitally, by offering the verb senses: ‘empiricate’ and ‘empiricating’.

I’m trying to originate a new common language concept, and not just another technical term. I’m also not wed to the word itself. The word, ideally, would be intuitively comprehensible. This one’s root is empiricism, which, unfortunately, is not an altogether common word. Still…

When our trans-civilizational conversation asks the many variants of this relatively new question, “God-given, or evolved?” – it also implicitly asks this: “Which epistemological source is more credible: that of unfalsifiable, unverifiable, (and putative) divine revelation, or that of the Scientific method?”

In a very big sense, my offering of ‘empirication’ & ‘empiricate’ is all about that vast difference of epistemologies. It’s the difference between complacent, uncritical acceptance (belief) and restless inquisitiveness: that marriage of curiosity and skepticism called the Scientific method, and the eventual, provisional acceptance that a given description of, or explanation for, a phenomenon might be largely, albeit never completely, accurate.

1. Empirication as a noun.
Hypothesis is a technical term for a scientific ‘educated guess’. That qualifies it as a weak to middling-strength empirication.
A theory – an educated guess buttressed by increasingly persuasive analyses of empirically derived slates of corroborative evidence – is a middling to strong empirication.
And an estimate would likely be a weak empirication – so long as it is based on analyses of empirically derived evidence.

That’s the critical qualification: empirically derived. An estimate based on biblical scripture wouldn’t qualify as an empirication, because it’s unfalsifiable. Such an ‘estimate’ can only be a belief. You can’t empiricate the putative existence of the supernatural, because it is, by definition, unfalsifiable. And therefore unverifiable. You can only believe in the supernatural. Not empiricate it.

Same with, say, a ‘theory’ of “Numenorean morality”. It can look, feel, and read like an empirication, but if it’s based purely on human imagination (Tolkien’s) rather than anything falsifiable, it ain’t no empirication.
(Btw: I have yet to receive any plausible or cogent response to my several requests for evidence that the supernatural exists anywhere besides within the human mind, as an incorporeal product of our astounding talent to imagine.)
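For readers who like their distinctions mechanical, here’s a toy sketch of the grading above (entirely my own illustration — the Claim fields and the numeric thresholds are invented for the example, not part of the proposal):

```python
from dataclasses import dataclass

@dataclass
class Claim:
    falsifiable: bool         # could any observation, in principle, disconfirm it?
    evidence_strength: float  # 0.0 (none) to 1.0 (overwhelming), empirically derived

def empirication_grade(claim: Claim) -> str:
    """Grade a claim on the weak/middling/strong scale sketched above.
    Non-falsifiable claims never qualify, however evocative they feel."""
    if not claim.falsifiable:
        return "not an empirication -- belief only"
    if claim.evidence_strength < 0.3:
        return "weak empirication (an estimate)"
    if claim.evidence_strength < 0.7:
        return "middling empirication (a hypothesis)"
    return "strong empirication (a theory)"

# A scripture-based 'estimate' fails the falsifiability gate outright,
# no matter how much apparent support it claims; a well-corroborated,
# falsifiable theory grades strong.
print(empirication_grade(Claim(falsifiable=False, evidence_strength=0.9)))
print(empirication_grade(Claim(falsifiable=True, evidence_strength=0.85)))
```

The gate is the whole point of the coinage: evidence strength only matters after falsifiability is established.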

People can choose to ‘believe’ anything. You can choose to believe that the World Trade Center was demolished by Al Qaeda, or by the CIA, or by alien brain parasites from the planet Xachtomokthor, which orbits the galaxy’s central black hole—evil alien ultraviolet parasites that invaded the minds of the terrorists or the minds of the putative CIA agents.
You cannot, however, empiricate the latter two options, because no persuasive evidence exists to make them more credible than the first. (Especially since I just now made up the alien-brain-parasite nonsense.)

So, although empirications must rise from experimentally or observationally derived slates of evidence, empirications aren’t and can’t be ‘true’. I’ve written before that ‘truth’ strikes me as an illusion akin to Plato’s ‘ideal essences’. You can imagine an ‘ideal horse’, but can you find one in the material world? Isn’t such a putative entity entirely imaginary?

Here’s another way to put it: science is largely concerned with accurate descriptions of phenomena and patterns. Let’s say you describe something as simple and ordinary as a baseball bat. You can succeed very easily at giving its measurements, at describing the swirl of the wood grain, the color and weight, the intricacies of its shape, and even the trademark. But what about the inner wood? You’d have to cut it open, destroying the bat in so doing. And even if you could MRI the wood of the intact bat, detecting the structural weaknesses that might help predict its eventual demise on its encounter with a Roger Clemens fastball, are you able to give the ‘truth’ of the bat if you stop short of detailing the bat’s individual constituent molecules? Atoms? Quarks?

“Truth”, it seems to me, is damn hard to apprehend. Perhaps impossible. That’s why physicist Lee Smolin calls scientific knowledge (in paraphrase) “those theories whose descriptions approach objective truth.”
“Approach” is critical: it’s an implicit admission of the foolhardiness of claims to “absolute truth.”

Belief, on the other hand, often makes just those (fantastical) claims.

Empirications therefore don’t seek ‘the truth’ (let alone “The Truth”). Empirications are mental acceptances of varying degrees of descriptive accuracy, such as theories and hypotheses.
Empirications can be largely accurate, somewhat accurate, or, after scrutiny, shown to be flatly inaccurate. And they must, by definition, be submitted to critical scrutiny. As I’ll try to show below in the verb section, many (all?) beliefs are, by definition, immune to this. Empirications are like opinions in that they can be easily revised toward greater descriptive accuracy or abandoned altogether in the face of contradictory analyses of empirically derived evidence. But not all opinions are empirications, since many opinions are not the outcomes of analyses of empirically derived evidence, but instead of unvetted or under-scrutinized conventional wisdom (see argumentum ad populum)—or of just plain wishful thinking.

2. Empiricate, the verb.
I was nearly dumbstruck a couple of weeks back by a conversation, on BBC radio, between a polite scientist and a creationist. They stood over the Grand Canyon offering their starkly different understandings of the Canyon’s origins. The scientist related her appreciation of the Canyon as a legacy of millions of years of erosion.
The creationist marveled at the Canyon as unmistakable evidence of the wrath of Noah’s Flood.

Can you sense the imbalance here?
The creationist countered the scientist’s appreciation by dismissing it as just another belief – like his belief – except that his belief was grounded in the One True Word of the One True God. But both viewpoints, he insisted, were beliefs. Yes, even hers.

Well, I beg to differ.
What in the world is happening here?
Riffing from a NYT piece on Scott Atran:
“Maybe it took less mental work than Atran realized to hold belief…in one’s mind. Maybe, in fact, belief was the default position for the human mind, something that took no cognitive effort at all.”

Do you sense the possible implications of that idea? If belief is a ‘default position’ – an easy option that demands the least cognitive effort – doesn’t that imply that other positions or options likely occur in human brains? ‘Positions’ – or activities more likely – requiring much more than the complacent stasis of simple, uncritical acceptance?

Indeed, might this not demark exactly the difference between complacent, uncritical acceptance and restless inquisitiveness?
Yet we have no single concept to distinguish these other activities, positions, or options that include restless inquisitiveness from the ‘default’ stasis called ‘belief’.

Is investigation just a synonym for unmoving and unmovable conviction?
Do scientists hypothesize from low- to no-effort belief, or from dynamic, challenging, and restlessly curious estimations, conjectures, and speculations? Do they hypothesize from unverifiable, unfalsifiable rote-learned dogma, or from the harder-to-grasp realm of the falsifiable, evidence-grounded plausible?

I’m striving to name—to newly conceptualize—the other, more cognitively demanding ‘positions’ (patterns) of conceptualization and deduction native to the human mind. Which of us does not estimate, or calculate? Many if not most of us simply call our provisional mental acceptances ‘beliefs’, even though these other, harder-to-generate thoughts aren’t simple, complacent, uncritical acceptances but tentative positions (or cognitive patterns) that we leave open to revision. Dynamic patterns. Not static ones.
Why do we call them beliefs? Because we haven’t recognized the need for a new—and vastly more accurately descriptive—concept?
If so, let’s start.

“Believing” is a very simple level of acceptance. “Hypothesizing” and “theorizing” are not. They take work. Young children believe very easily. Naturally, perhaps. It takes education and the greater facility of thought that comes with maturity, however, to learn how to empiricate.

When that creationist dismissed the scientist’s politely offered analysis, he wasn’t merely disagreeing: he was dissing countless hours of painstaking research, hypothesizing, theorizing, and other open-minded analyses. And he was able to do so only because we haven’t yet widely accepted a concept that distinguishes not merely ‘beliefs’ from evidence-based deductions, but also the low- to no-effort choice of believing from the greater-effort mind activities called hypothesizing, theorizing, and the conscientious analysis of empirically derived slates of evidence.

Soooo…whenever I read or hear a sentiment from a believer (of anything) that scientific theories are “just other beliefs” or “faiths”, or “just as faith-based”, an (imaginary) gallon of gasoline spontaneously combusts within my cranium. Scientific theories – even those eventually abandoned – aren’t just ‘beliefs’. They are something much more substantial. They are falsifiable. Are beliefs falsifiable? (Consider it — consider conspiracy theories, alien abductions, and supernatural ‘revelations’.)

Cutting-edge scientific cognition doesn’t rely on dogma but on the marriage of skepticism and inquisitiveness. Look, I leave open the possibility that our current understanding of evolutionary theory might itself evolve—and it should, because, as it stands today, it’s much too mechanomorphic. Misleadingly so.
But even if it turns out that today’s theory has some fundamental inaccuracies, it’s still an empirication and not a belief. It wasn’t ‘revealed’, but deduced. And then tested. And tested. And tested. And tested. Through this scrutiny it has evolved, and it still has a ways to go yet (imho). But it ain’t much of an article of ‘faith’. And it takes a believer – an obstinate, closed-minded believer like that creationist, and not an open-minded empiricator – to be so willfully blind to that.

In conclusion: recognizing, via a newly coined common English word, the vast differences between passive belief and the not-at-all passive, endlessly searching, and self-correcting dynamics of empirication would be a huge step forward on our species’ journey from ignorance and superstition toward something more comprehensively and objectively knowledgeable. Even if someone coins a different word for that concept. I don’t care about that.

I just want to be able to converse, fluently, using that concept (whatever its eventual name). I need it as a cognitive tool. So, I suggest, do you.
Wanna help me out? Leave me feedback, please.

Remember: you can believe in Xachtomokthorians. But you can’t empiricate them. Why? Because they’re simply not empirically observable or testable – just like a zillion other human beliefs. Even the most venerated of those.


1.
Is credulity an “either/or” equation? Must you either believe or disbelieve? Must it be exclusively ‘black or white’?
Or is it instead possible to assess your surrender of credulity by degrees: in varying ‘shades of grey’?

If you suspect that a possible analysis or description of a phenomenon or event might be somewhat or largely accurate, is that the same level of credulity as 100% certainty?

If not, must it be this: “conviction of the truth of some statement or the reality of some being or phenomenon especially when based on examination of evidence”?

Or is it this: uncertainty? (A synonym for ‘suspicion’ – according to Merriam-Webster Online).

Is it possible to assign diverse degrees of plausibility to your assessments and thoughts?
Or is it only black or white – “true or false” — certain or uncertain – credible or incredible – with no room for:
hunches, guesses, whimsy, impression, reflection, inclination, speculation, conjecture, suggestion, surmise, suspicion, estimate, opinion, presupposition, hypothesis, tentative conclusion, and theory?

If there is room for uncertainty, is it possible that you, an independently thinking human being, can reserve portions of your credulity and still function day-to-day?
If you once drove a taxicab whose steering rack broke, must you ALWAYS have a mechanic check your steering mechanisms before driving any vehicle, ever again? Or can you estimate — hopefully but not with full confidence — that such a recurrence is possible but not probable enough to inhibit you from driving again?

Is 100% certainty a necessary precondition for human actions?
Or do we frequently (mis-) assume that our actions are based on the illusion of 100% certainty called ‘truth’, while instead almost always operating within the grey area of 1% to 99%?
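There is, by the way, at least one formal home for these shades of grey (a minimal formalization, offered only as illustration): probability theory treats credence as a degree between 0 and 1, shifted by evidence via Bayes’ rule rather than flipped between belief and disbelief —

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}, \qquad 0 < P(H) < 1.$$

Reserving the endpoints 0 and 1 for logical certainties is precisely the refusal of ‘black or white’ credulity argued for above.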

2.
If you are a thinking human being who is aware of these grey areas and you articulate it, and your articulation is met with disrespectful derision and putative ‘corrections’, what ought you do?

If your own understanding of these shades of grey is labeled ‘challengeable’, ‘self-delusionary’, and just plain ‘erroneous’ by a single voice unable or unwilling to comprehend your relative meanings, and these incurious, uncomprehending, & ponderously lecturing challenges seem to pursue your written thoughts in cyber-places you enjoy, when do they become ‘trollings’?

If your shades of credulous grey are consistently and egregiously mischaracterized as ‘beliefs’ – which, you have explained ad infinitum, mean to you ‘convictions’ – which to you implies the 100% certainty you strive to avoid – and your troll knows this and insists on characterizing your written thoughts with this word anyway – does this constitute taunting?

If the troll knows this, and also knows that you accept, for him, his broader definition of ‘belief’ that means, for him, many (but not all) of the shades of grey, and he nevertheless insists on characterizing your written thoughts with his meaning in lieu of yours, is he being deliberately and provocatively disrespectful?

At what point are you allowed to call the troll an uncomprehending, inconsiderate intellectual imperialist: obviously more interested in ‘proving’ the putative validity of his beliefs than understanding the nuances of your personal assessments of your own credulity?

When ought you complain to others (including blog-masters) of the inhibiting effects of such trollings?


I’ve suggested before a simile that concepts are like lenses: they can yield clear images, fuzzy images, distorted images, tinted images, etc. Concepts attempt to represent and articulate our perceptions of the universe beyond our skins. Some seem to work better – yielding greater clarity – than others. They are meanings in word form. Yet meaning is subjective: languages are dynamic, roiling, cognitive seas of similar yet not consistently identical meanings. Thus, not all people understand all concepts identically. What follows then is subjective, not objective. It is only a small current eddying out from my quiet cove of our shared sea of similar but not identical meanings…

Walter J. Freeman writes, “Meanings have no edges or compartments. They are not solely rational or emotional, but a mixture. They are not thought or beliefs, but the fabric of both… Meaning is closed from the outside by virtue of its very uniqueness and complexity. In this sense, it resembles the immunological incompatibility of tissues, by which each of us differs from all others. The barrier between us is not like a moat around a castle or a fire wall protecting a computer system; the meaning in each of us is a quiet universe that can be probed but not occupied.” — How Brains Make Up Their Minds, p.14

Reading this was enough to convert me from a ‘prescriptionist’ to a ‘descriptionist’ regarding dictionary definitions. It was enough to reassure me that my own idiosyncratic aggregates of meanings aren’t wrong, but simply, instead, mine.

This doesn’t mean that ‘anything goes’. It means instead that I should accept that what I understand of concepts, thoughts, and their larger aggregations called ‘beliefs’ (and plauses) isn’t necessarily universal—let alone ‘correct’. It means I must take special care when trying to articulate whatever I mean while using words that mark concepts about concepts and conceptualization.

Some concepts are qualitatively different than others. “The Sun,” for example, is a concept whose meaning is a celestial entity most humans experience tangibly. Belief is not required for one to understand the meaning of the words “the Sun”. Simple sensory awareness is sufficient.

Beliefs depend on meanings. But beliefs are only a kind of thought-pattern – and a special kind at that. If concepts can be understood (by me, at least) as lenses, then what are beliefs? At first, I thought, they’re telescopes: constructions of sequenced “lenses” that allow understanding of information beyond the realm of one’s senses.
But that felt wrong. Then I realized why.

Beliefs do something different than simply allowing extension of our senses. If concepts are lenses, beliefs are analyses, not instruments. They are pattern-recognition analyzers, assembled from multiple concepts.

A familiar example: belief in the biblical Genesis myth makes the Grand Canyon a legacy not of millions of years of erosion, but of the Noah’s Ark flood. Another: the perception that the stars “never move” (discounting the separately conceptualized planets) makes the night sky into ‘firmament’ rather than space. The “patterns” in the firmament (we are, if anything, a pattern-seeking species) can then represent earthly animals (zodiac), which the moving lights (planets) and luminaries (Sun & Moon) predictably transit. From this pattern-recognition analysis flows the unscientific fun called astrology.

In the second case, the key concepts are: the (seemingly) fixed nighttime sky, the plainly contrasting and distinctive moving lights (planets), and the two dominating luminaries – plus the human propensity to assign agency to natural phenomena in spite of a dearth of supporting evidence. Simmer for a few thousand years, add your local pantry of cultural spice, and voila! — A seemingly “naturally occurring” divinatory system – an elaborate structure of belief. A pattern-recognition analyzer.

Beliefs arise from thoughts, which rely for their existence on the meanings the brain indefatigably produces (see Freeman’s book), but thoughts are not by themselves beliefs. A decision to play a Detroit Cobras CD is not a belief, but an impulse in cognitive form.
Whimsy is not belief, but merely a stream of consciousness in identifiable conceptual segments.
Beliefs are ‘meatier’ – they are thought complexes. You needn’t ‘believe in’ the Sun – you can feel it. You can also conflate the feeling of sunstroke with a feeling of malevolent supernatural agency, but forming that into a belief that the Sun is a God (or demon) requires many other conceptual components (like ‘malevolence’ and the multitudinous constituents of supernaturalism).

If beliefs are constructs, what are their components? Freeman says meanings, both cognitive and emotional. Concepts, of course, are meaningful by definition. So are differentiations and comparisons. Humans cogitate in (at least) two symbiotic ways: differentiation and comparison. Differentiation perhaps comes first – but only barely. Comparisons begin as soon as we have differentiations to compare! Differentiations and comparisons are thoughts—meaningful thoughts—but not beliefs. Differentiations and comparisons are used to develop the conceptual aggregations called “beliefs”. They are the “meanings” stitched into Freeman’s “belief-fabric.” They require no belief on their own, and can serve many different beliefs – even many incompatible beliefs simultaneously!

Differentiation and comparison are meaning-in-motion, rather than meaning-as-fixed. They are dynamic rather than passive. Passive, ‘fixed’ meanings (if such a concept is actually valid) seem to mimic the ‘certainty’ of beliefs, while the dynamics of differentiation and comparison are restless, probing, and unsatisfied. For example, I needn’t ‘believe in’ the difference between a moving planet and a fixed star – I can observe it. It’s a differentiation: the other celestial lights that are brighter than stars but magnitudinally smaller than the two luminaries aren’t “stars.” Their movements make them qualitatively different. Moreover, they do not move in unison but on their own idiosyncratic – yet predictable – schedules. Once I perceive this (if I am the first of my people to do so), I make the distinction via a new conceptualization. However, to extract further meaning – like divination – from that distinction, I must have access to other meanings, which I then assemble with my new distinction and stitch together as a belief.

“The Sun is like fire” is a simile: a comparison (and a fairly accurate perception, at that). However, “The Moon is female” is a metaphor – albeit one not necessarily understood to be merely a metaphor. The Moon earned its “femininity” from the observation that female human reproductive cycles approximate the lunar cycle in length. Moon goddesses—yet more elaborately structured beliefs—rise in no small part from this observation. But assigning divinity to the Moon requires the additional component of the concept of divinity. Recognizing the correspondence of the lunar and menstrual cycles isn’t enough on its own.

Similes make their nature as comparisons plain, via the words ‘like’ and ‘as’. Metaphors are more deceptive. They conflate different concepts. This fudging brings them to the brink of belief. Simile: “God is like the Sun – warming and all-seeing.” Metaphor: “God is the Sun.” Which, if taken literally, can birth many beliefs (in caricature): “Cover your head when in His presence lest He smite you.” And “Night is the Devil’s time; he uses the Moon for evil while God sleeps.”

Also—and vitally: metaphors, to be widely understood, must rely on more ‘fixed’ concepts than on dynamic, unsettled ones. Because of this, it seems to me that science uses no metaphors and few similes. Science seems instead to be the human skill of differentiation ‘on steroids’. (That’s a metaphor.) This now highly developed skill has illuminated countless patterns of nature that our recent ancestors were quite simply blind to. Science is popular for its findings, but not for its language – because people, it appears, prefer the easier arts of simile and metaphor to the cognitively harder skill of differentiation. Great fiction, for example, overflows with stimulating and surprising similes and metaphors. Science’s differentiations-on-steroids, on the other hand, require years of education to master (think of the Latinate classification system used in biology). Thus, when science popularizers employ literary styles and techniques (like comparisons), they sell many more books and teach many new discoveries, but their colleagues then nitpick the presentation of the findings!

So, are scientific findings ‘beliefs’? I’d guess this would seem to be a no-brainer. Surely, Darwin’s theory of evolution through natural selection would seem to be every bit the pattern-recognition-analyzer as the biblical creation tale. But I think otherwise. The findings of science are never ‘fixed’, but always malleable. They are meant to be accepted provisionally, not dogmatically. Scientists – the good ones, at any rate – are taught to expect the emergence of evidence that challenges their conclusions: ‘Think it possible you may be mistaken’, quotes scientist/mathematician Jacob Bronowski (thanks to enhabit for the quote).

If I notice that ‘belief’ is used in public rhetoric most frequently as a synonym for ‘conviction’ while we mere commoners sloppily conflate it with the uncertainties of opinion, I’m making a differentiation, drawing a distinction — a distinction that I hope illuminates a hitherto occluded nuance. This doesn’t mean that it’s “wrong” to conflate the two (although I may imply as much). Instead, I’m striving to accurately describe a perception that arises from my own personal cove of meanings. I’m trying, I suppose, to be descriptionist instead of prescriptionist (as I was previously). And I’m trying it while using verbal markers that likely carry differently nuanced conceptions for me than they do for many if not most others.
With that in mind…

I perceive a qualitative, functional difference between pattern-recognition analyzers constructed from pre-scientific concepts and comparisons and the newer pattern-recognition analyzers constructed from concepts derived much more thoroughly from differentiation. In the former case, the solemn assurances of Authority are meant to instill certainty—conviction. Faith.
In the latter case, each new generation of differentiators is taught not complacency but skepticism: to assume that not only are long-held human assumptions questionable, but that even one’s own conclusions are. One’s own conclusions are only as sound as the evidence on which they are based – and new slates of evidence have a vexing tendency to turn up all the time.

Our new pattern-recognition analyzers are provisional, not fixed, because their constituent parts – differentiations – are dynamic, not static.

This is the difference between “belief” and “plause”. The difference between a pocketless mental straitjacket (convictions) and an armless utility vest featuring dozens of pockets to hold your (handily movable, not fixed-into-place) conceptual tools – the tools necessary for open-mindedness, for open-minded analyses of the bewildering yet beautiful universe we are conscious manifestations of.

The perception and conceptualization of differences and distinctions does not betray the presence of belief – it betrays its opposite: skepticism. Restlessness, not complacency. Openness, not firmament.

Question your assumptions and convictions. Examine them from the inside out: carefully differentiate their constituent concepts. Do the concepts comprising the belief-fabric yield images that you deem clear, or unfocused? Are they distorted? Fragmentary? Inaccurately tinted? Are they clear on the periphery but troublingly fuzzy in the center?
Do they accurately comport with your own understandings?
Or are some of them akin to foreign intrusions from a mind whose meanings, on close examination, seem alien, or biased, or simply obsolete?

I’ll grant you that it’s not easy, but you just might be as much rewarded as surprised at what you uncover.


What do scientists REALLY mean when they use the words ‘belief’ and ‘believe’?

Do they mean: “Mental acceptance of a claim as truth” (definition 1 at http://en.wiktionary.org/wiki/belief)?

Seems logical, right?
But perhaps not. See: http://en.wiktionary.org/wiki/scientific_method
Noun
scientific method (generally referred to in the definite, as the scientific method)
1. (science, attested 1854) A method of discovering knowledge based in making falsifiable predictions, testing them empirically, and preferring the simplest explanation that fits the known data.
(usage example:) While not perfect, the scientific method plays a crucial role in approaching objective truth.

Is it merely trivial that the usage example equivocates? ‘Approaching objective truth’ hardly denotes absolute certainty.
Meanwhile, Wikipedia’s article on the Scientific Method offers another substantial caveat:
All hypotheses and theories are in principle subject to disproof. Thus, there is a point at which there might be a consensus about a particular hypothesis or theory, yet it must in principle remain tentative. As a body of knowledge grows and a particular hypothesis or theory repeatedly brings predictable results, confidence in the hypothesis or theory increases.

http://en.wikipedia.org/wiki/Scientific_method

Note the wording: ‘confidence in the hypothesis or theory increases’.
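Here’s a minimal sketch of that wording in action (my own illustration; the numeric likelihoods are invented for the example): repeated successful predictions raise confidence in a hypothesis, yet that confidence remains, in principle, tentative —

```python
def update(prior, p_obs_if_true=0.9, p_obs_if_false=0.3):
    """One Bayesian update after a predicted result is actually observed."""
    numerator = p_obs_if_true * prior
    return numerator / (numerator + p_obs_if_false * (1 - prior))

confidence = 0.5  # start agnostic about the hypothesis
for trial in range(1, 11):
    confidence = update(confidence)
    print(f"after successful test {trial}: confidence = {confidence:.4f}")

# Confidence climbs toward 1.0 but never reaches it: exactly the
# 'must in principle remain tentative' caveat quoted above.
```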

When I first read it some 7–8 months ago, the Wikipedia article—more than 6,630 words—didn’t once employ the words belief or believe. Not once. (Although I think it now does, in a digression of some sort.)

Belief, we might begin to infer, is not then a normal element of a scientist’s professional vocabulary. This isn’t to imply that scientists don’t commonly use the word in its noun and verb forms – every fluent speaker of English likely uses the words multiple times in any given day: they are surely among the language’s more common words.
However, the same can be said for the words ‘thing’ and ‘stuff’.
Perhaps then, ‘belief’ and ‘believe’ are words with wider casual meanings than we tend to realize: catch-all words used commonly within sloppy articulation. Perhaps ‘belief’, in everyday casual usage, typically denotes all manner of mental acceptances, from the impenetrable castle battlements of ‘faith’ to the leaky tent-walls of ‘conjecture’. Just as ‘things’ and ‘stuff’ often denote not things at all but events and activities, as in, “What have you been doing lately?” whose reply might be, “Oh, just web-surfing and blogging; you know, stuff like that”.
How often do you say, “I believe such and such”, when you might actually mean, “I’ve heard such and such, and it makes sense to me although I can’t precisely prove it yet. But it makes enough sense to me that the likelihood it’s true is now one of my opinions”?
Think about it.
Worse, isn’t it much easier to use “I believe”, even though it’s not merely imprecise but out-and-out incorrect?

Would we be better served if English had a single word, corollary to but not synonymous with belief, that means, “I find such and such a plausible proposition; not enough to accept it as an absolute truth, but enough for it to seem equivalent to a scientific hypothesis or even a theory”?

To be fair, it’s unsurprising that no such word yet exists. English, after all, evolved within a worldview wherein explanations for being stemmed primarily from religious sources. Belief then, with its lack of any need for empirical support, was the only pertinent manner of ‘mental acceptance’ for most of English’s evolving existence.
Times have changed however – or people have, at least. We no longer depend upon religious authorities for explanations of existence and for a sense of humankind’s importance in the cosmos. The religious worldview, long unchallenged, has eroded steadily as a new worldview, the scientific, has gained widespread favor for its reliance on empirical investigation and for its painstaking attention to means of verification prior to offering its findings.

The religious worldview, however, is hardly admitting its failings and gracefully retiring. Religionists instead have become antagonistic to science and to the scientific worldview. Because science cannot find evidence to support belief in the existence of deity, or for all manner of religious doctrines, religionists attempt to discredit science. Because the lack of evidentiary support for ancient religious beliefs and dogmas casts deep shadows of doubt over the religious worldview, religionists demonize scientists.

Worse, and wrongly, they project their own manner of mental acceptance – belief – onto scientific understandings of the world. Language, as implied in the preceding paragraph, accidentally assists this mistaken conflation. Religionists mischaracterize scientists as ‘priests’ of a new and yet godless religion, while language again accidentally assists the mischaracterization.
Thus millions of people view scientific findings with deep skepticism if not outright incredulity, despite these findings, in the main, being convincingly verifiable and accurately predictive.

In sum, the scientific worldview, despite being little more than an innocent aggregate of millions of mostly valid fact-sets and explanations that ‘approach objective truth’, is under siege by the comparatively irrational worldview that preceded modern science for most of human existence. Scientists aren’t fanatics (though religionists, projecting their own personas onto those they deem their ‘opponents’, contend otherwise). Scientific training teaches thoughtfulness and respectful discourse: scientists are not, in the main, a truculent lot. Scientists (ideally) neither proselytize nor preach—they report and teach. This non-confrontational manner of communication effectively makes scientists poor defenders of their findings when faced by the irrational and fanatical religionists who besiege the scientific worldview.
Scientists must therefore defend their findings and worldview hampered: with the proverbial ‘one hand tied behind their backs.’

The problem, however, is merely perceptual. And conceptual.

It’s not simply that lay-people aren’t trained to parse the differences between unverifiable beliefs and evidentially supported theories and hypotheses, although this is surely an element in the mix. The fact that our language is anchored in words whose meanings are not only imprecise but rooted in an ancient belief-dependent world means something more and worse. It means that conflating the scientific community with priesthoods comes naturally to those of us using the imprecise conceptual system of English.

And why not? If scientists have no better word choice than “We believe in the Theory of Evolution”, it is automatically equivalent to a priest or pope’s “We believe in the God revealed to us in the Bible.”
The tragedy, of course, is that scientists don’t believe in the Theory of Evolution; they subscribe to it because it consistently and accurately predicts and analyzes the current condition, the ongoing changes, and the past evidence of life on Earth. It requires no ‘leap of faith’, as does belief. It requires no ‘conviction.’

How much better would the scientific worldview be understood if its representatives could convey that scientists don’t ‘believe’?
From Dictionary.com:
‘worldview’
n. In both senses also called Weltanschauung.
1. The overall perspective from which one sees and interprets the world.
2. A collection of beliefs about life and the universe held by an individual or a group.
‘Worldview’, in today’s toxic mix of conflicting cultural milieus, isn’t trivial. Far from it. The religious worldview finds the scientific worldview offensive and amoral—if not outright immoral. Especially the Abrahamic religious worldview.
Every rational person, even non-fundamentalist ‘believers’, has a stake in this cold war between religious belief and scientific inquiry.

The religious worldview relies on the credulity of under-educated human minds.

The scientific worldview relies not on divinely imparted scriptures or dogmas, but on empirical investigation and exhaustive experimentation.
Science doesn’t ‘proselytize’, it reports.
It doesn’t ‘believe’, it theorizes.

Meaning no. 2 listed above, then, requires a different fourth word. It requires a word that does not imply ‘mental acceptance without need of empirical support’ but instead means quite specifically ‘provisional mental acceptance—based on empirically obtained evidentiary support’.
Not ‘beliefs’.

So here it is, sitting unnoticed right under our noses for decades (from plausible: ‘seemingly or apparently valid, likely, or acceptable’) –

plause: noun, 1. a tentative mental acceptance of an assertion dependent on empirically obtained evidentiary support; 2. an opinion, open to disconfirmation, that certain sets of empirically obtained facts form a credible basis for eventual designation of the phrase ‘approaching objective truth’; 3. a mental acceptance of a probability deduced from empirically obtained evidentiary support; 4. ‘a plausible reckoning’ (informal)

plause: verb, to tentatively or provisionally accept an assertion or conclusion deduced from empirically obtained evidentiary support

Example: “I simply can’t plause that humans, as Richard Dawkins believes, are merely ‘machines’ whose sole purpose or function is the replication of one’s personal genome. My understanding of biology precludes the probability of Dawkins’s reductionist analysis.”

And: “It is one of my plauses that the so-called Big Bang is likely only one of many such phenomena that emerge every now and again from the fabric of space-time—and I have tentative evidentiary support for this plause.”

Or: “It is one of my plauses that the Earth and the other planets of the solar system revolve around the Sun, and that the solar system in turn revolves around the galactic center.”
Not controversial, that – at least not in this day and age. Indeed, it’s a valid enough notion to stand as ‘effectively proven’.
So why not make that sentence’s sixth word ‘beliefs’ instead of ‘plauses’?
Because belief denotes a ‘mental acceptance of a claim as truth’ that includes no threshold of evidentiary support.
It’s long past time to wholly cede the word ‘belief’ to the human fascination with religion, superstition, and conspiracy theories.

People in favor of reason, it’s high time for a new concept reflecting the truth of how we actually think.

I expect to have to edit this post, perhaps several times.


This post is incomplete, but made available as a work-in-progress preview for my pal jazzman. Updated at 11:25 PM, PST…and again at 12:50 PM, PST…

Ambiguity vs. Clarity
A Reconciliation of Meaning

1. Jazzman, our impasse over the propriety of the ambiguity in the common usages of ‘belief’ doesn’t stem merely from a matter as trivial as dictionary definitions – although an observing third party might think so. It’s more than that: it’s about the “quiet universes” of personal meanings “that can be probed but not occupied.” (I stole that prose, as you’ll shortly see below…)

On and off for the past many weeks, I’ve been stumbling my layman’s way through a terse but illuminating book called How Brains Make Up Their Minds by brain scientist Walter Freeman. You’d like him: he’s among those who dissent from the sociobiologists and evolutionary psychologists. One early sentence is simply this: “I hope to encourage the belief that people have power to make choices.” It’s there to distinguish his conclusions from the “ultra-Darwinists” (like Dawkins, Wilson, and Dennett) who seem enthralled by the fatalistic notion that we’re little more than automatons owned and operated by our genetic legacy and by the biological impulses that legacy has ‘hard-wired’ (now there’s a phrase I’m sick of reading in biology!) into us.

Freeman’s book details what the neuron clusters of the human brain spend so much time and energy doing: creating meaning. Meaning, he indicates, is the brain’s first business, because without the ability to create meaning, communication – whether through speech or through the body-language cues that infants are first exposed to – would be, well, meaningless.
On page 9, he writes, “Meaning is a kind of living structure that grows and changes, yet endures.” He then spends much of the book explaining how the ‘plasticity’ of an individual’s brain gives rise to that individual’s aggregations of meaning.
“The dynamics isolates the meaning in each brain from all others, endowing each person with ultimate privacy, and loneliness as well, which creates the challenge of creating companionship with others through communication. I call this condition ‘epistemological solipsism,’ to conform with the philosophical term for a school of thought that holds that all knowledge and experience is constructed by and within individuals. (This view differs from the extreme called ‘metaphysical solipsism,’ which holds that the whole world is a fantasy of each individual.)” (Freeman, pp.9-10)

“Meanings have no edges or compartments. They are not solely rational or emotional, but a mixture. They are not thought or beliefs, but the fabric of both. Each meaning has a focus at some point in the dynamic structure of an entire life. Meaning is closed from the outside by virtue of its very uniqueness and complexity. In this sense, it resembles the immunological incompatibility of tissues, by which each of us differs from all others. The barrier between us is not like a moat around a castle or a fire wall protecting a computer system; the meaning in each of us is a quiet universe that can be probed but not occupied.” (Freeman, p.14)

If the tone of my responses to you has seemed overly acidic, I apologize. What I recognize now, from rereading the early pages of How Brains Make Up Their Minds, is just how much emotion my own meanings involve. Our exchanges over ‘belief’ have been, I suppose, a mutual probing of our own ‘quiet universes’. In other posts, I’ve used the metaphor ‘concepts = lenses’. Allow me, then, to offer you a telescope – one that may, if my lenses give your eyes the same clear images they give mine, afford you a glimpse into my quiet universe.

One of my universe’s central galaxies looks something like this: “I prize clarity over ambiguity.” But do the universes of others prize clarity as I do? Perhaps not.
I prize mystery in fiction – especially in fantasy, my favorite genre – but I don’t prize it in the ‘real’ world, since according to an analysis flowing from my ‘universe of meaning’, the world’s many mysteries have been ‘explained’ by ‘sages’ and ‘prophets’ whose teachings – grounded in the societies and cultures of their times – endure today in what my universe classes as inhumane attitudes, policies, and actions. Thus, when I want to feel the tingle of mystery, I typically retreat to fiction, be it writing or reading. Or, more frequently, into nature, which to me, even after absorbing the countless explanations of science, remains mysteriously wondrous, and wondrously mysterious.
But that very acknowledgment implies that mystery isn’t undesirable even to me. Many people, I opine, seem to prefer a significant sense of mystery in their day-to-day ‘real world’ lives. And since one of my universe’s most recently emergent galaxies reads like this, “refrain from judgment whenever you sense the urge to do it” (far harder to live up to than it sounds!), who am I to take issue with the preference for mystery evident in others?

2. Battling “universes of meaning”?

If meanings are ultimately local – wholly subjective by virtue of their isolation within the minds of the thinkers – then how can we communicate? Confrontation? I suppose it’s one way, and I suppose also our seemingly persistent impasse is at least somewhat confrontational. But I rather like the word Freeman uses, “probe”. We’ve been probing one another’s universes of meaning, and despite our impasse, I think we’re both actually finding that our “probes” are “returning to base” with valuable intergalactic – no, make that inter-universal – data.

I can’t say exactly how it developed, but one of the earliest ‘structures’ of meaning inside my little universe was this: the primary meaning of ‘belief’ is “the mental acceptance of the truth or actuality of something”. No, I didn’t own my American Heritage when that meaning formed, since it was imparted to me by parents and teachers at a very early age. My college dictionary merely articulated it particularly aptly, according to the standards of my preexisting meaning. Moreover, my AHD’s other two definitions seemed only variants of that ‘primary’ concept. It offers no ‘burden of proof’ clause such as this from Merriam-Webster Online:
“3 : conviction of the truth of some statement or the reality of some being or phenomenon especially when based on examination of evidence”.

Jazzman, you’re already aware that M-W O’s meaning no. 3 is similar (though functionally stronger in its sense of ‘conviction’) to my new concept of ‘plause’. I’m glad I wasn’t familiar with M-W O’s definition prior to coining plause, because otherwise I might not have so easily recognized that one can proceed through one’s life and pursuits without convictions. The very existence of the forthcoming sentence’s words – or, more specifically, of the concepts they mark – is ample evidence that one can proceed provisionally rather than (to coin yet another new word) “convictively”.
‘Premise,’ ‘estimate,’ ‘preliminary conclusion,’ ‘reckoning,’ ‘suspicion,’ ‘impression,’ ‘inference,’ ‘inclination,’ ‘persuasion,’ ‘surmise,’ ‘conjecture,’ ‘speculation,’ ‘sentiment,’ ‘hunch,’ ‘inkling,’ ‘guess,’ ‘musing,’ and the fount of them all: ‘thought’.
Science employs a method that, among other things, is essentially the process of proceeding provisionally rather than ‘with conviction’. I’ve been trying out ‘belief-free’ thinking for the past 10 or so months, and haven’t fallen off the edge of the earth (yet). I am, however, the only person I’m currently aware of who seems capable of conceptualizing ‘belief-free’ thinking. I very much hope that others discover the same freedom I’ve been enjoying. Yes, it has caused me to rethink my use of words, but that’s not all it is. It isn’t mere rhetorical trickery. Dependency on belief is optional, I’ve found, not mandatory. Otherwise, we could never proceed from guesswork, from conjecture.
And it isn’t necessary – ever – to convert one’s conjectures and deductions into ‘beliefs’ – i.e., convictions. The very decision to do so is tantamount to strapping oneself into an intellectual and emotional straitjacket.
All of this, of course, is valid in my personal, quiet universe of meaning.
Are my lenses working for you yet?

When we began this impasse some 10 or 11 months back, I can promise you I was desperately frustrated by what I interpreted to be a transposition of your meanings onto mine. Like an invasion of aliens from another planet (or universe).
Now, my memory might well be faulty. But it irked the living crap out of me that you seemed to have the temerity to tell me that my understanding of my own ‘personal universe of meaning’ was simply wrong. In response, I’ve been trying to undermine the certainty from which your opinion of my ‘wrongness’ seems to spring.
I’m fast reaching the ‘I no longer care’ stage, however, because in my efforts to analyze and critique the common ambiguous meanings of ‘belief’ that begin at the hard right wall of faith and spread to the left, toward the uncertainties of guesswork and all its close synonyms, I’ve gained increasing confidence in the overall desirability of my arguments.
Notice I used ‘desirability’, not ‘correctness’, or one of its synonyms. ‘Desirability’ is subjective, not objective.
I consider my arguments desirable not only because I prize clarity over ambiguity. It’s not a purely personal whim; it’s pragmatic. It seems to me that communication between people’s differing meaning-universes is facilitated by clarity and hampered by ambiguity. As a writer, I pay attention to the evolution of words such as ‘dash’. A short stroke of the pen, or to rush? “…both words come from the same source, Middle English daschen, which could mean both to strike and to rush. The earliest meaning was to strike something violently.” One spelling, two words. One word, two distinct meanings. Two distinct meanings that have descended from one simple ancestor-word.
Fluent English speakers and readers can discern from context the meaning of ‘dash’ a speaker or writer intends. However, if the communication over the course of our impasse indicates anything, it sure should indicate that I can’t always fluently discern what meaning you intend from your usages of ‘belief’. This isn’t mere boneheadedness on my part. It’s a structural difference between our two meaning-universes. The ‘natural laws’, as it were, in your universe differ from those in mine (you really should give Lee Smolin’s books on physics a spin).

Now, I’m not saying that your meanings are in the minority. In fact, I’d wager that in contemporary English, more meaning-universes resemble yours than resemble mine. The question, though, isn’t whether mine is ‘right’ and the others ‘wrong’; it’s whether one is less ambiguous than the other.
I’m arguing for increased clarity, based on refining meanings (or “more finely grinding our conceptual lenses”) already common to our language. I’m arguing that although ‘belief’ (mental acceptance of the truth or actuality of something) CAN ALSO be understood as M-W O has it – “BELIEF may or may not imply certitude in the believer (my belief that I had caught all the errors)” – the question remains: should it? Why should one meaning imply certitude while another – a very common usage – explicitly doesn’t?
Is ambiguity in this instance a source of ‘mystery’, or a source instead of confusion?

Dictionaries can’t very well control meaning. Instead, they report the usages of words in the years just preceding their publication. They give ‘photos’, if you like, of the most common patterns among the millions of meaning-universes alive as human minds. Perhaps, instead of photos, the images (definitions) are really impressionistic montages. And perhaps, then, the errand I’ve selected for myself is futile: like a single microbe trying to remake the image of a painting in the Louvre. Or like a gnat hoping to change the eating habits of an elephant. A fool’s errand. Well, if so, that’s fine. I’ll smash my tiny gnat’s head against the wall of elephantine skin a few more times before tiring, and retiring from it again, as I did a couple of months ago.

Yet if I can persuade even just one person that the common usages of one of our language’s most important and frequently used words – a word that denotes at least three distinct concepts – promote deception instead of honesty, it will have been worth it. The butterfly effect may well be bunk in the natural earthly realm, but maybe, just maybe, within the human communication realm of the internet, it can yield, against the odds, a change.
From ambiguity to clarity? Hell, it’s worth a try.
So thank you, jazzman, for lending me your mind’s ear.

I’ll try to work this into less rambling, more cogent prose when time allows.

