Rigid Definitions and Rigid Description

Words appear as spoken by people and as transcripts of the spoken word. A printed book is a transcript, as is a digital recording or synthesized speech! I shall focus exclusively on transcripts of the spoken word, and specifically on English texts.

If W is a word, the rule appears to be that W may have at least two manifestations: (1) it is sounded in a certain way and (2) it is used in different ways on different occasions. If a word has multiple uses it follows that it has several definitions, and some — if not all — should be registered in a common dictionary. A dictionary, however, is not expected by its users to exhaust how a word is used, has been used, or will be used. Dictionaries are very modish.

What a dictionary most often does is to list how a word is currently used. I invariably look up the dictionary’s date of publication and when it was last revised. “Currently” indicates that the uses cited refer to use on the date of publication. A dictionary should therefore have explicit historical parameters: by whom, when, where, and under what range of circumstances the word is likely to be used! Most dictionaries do not supply these, but assume that the user does their homework and fills in the missing blanks! “Likely” tells us that the editors/compilers of the dictionary cannot be sure that they have identified all the current uses of a word. Fair enough — let readers and users be warned. They should not conclude that all that could be said about a word has been said within the covers of the dictionary consulted.

A dictionary lists words and their possible use in a possible world but it does not deal with the nature of the possible world itself. It describes (sort of!) how others tend to use a word or expression. Dictionaries are not handbooks on cosmology. Each dictionary also “dates” the usages of the words it has assembled. This is done by including in the exposition different uses of the words at different periods but also by its date of publication. Few people refer to the latter!

A dictionary does not usually state whether the definiens is a description of a phenomenon as well as an explication of the term being defined. The definiens is therefore neither a causal nor a phenomenological analysis of a word or a concept. The compilers are not legislators who intend or are empowered to prescribe how a word is to be used, or proscribe its uses although their editorial actions may have the unintended consequence of laying down in many minds what the “proper” use of a word is. Let the user therefore beware! He/she is not bound by the definition offered: it is only a recommendation. However the compilers may feel obliged to emphasize what is currently in widespread use and thereby indirectly promote and reinforce an existing social preference.

But if the definiens is neither a causal nor a phenomenological analysis of a term or concept, what constitutes a causal or phenomenological analysis? Both of these concepts belong to a different order of events and do not take the form … = df … but, I propose, take the form … = xp … The symbol *xp* is new and will be clarified below!

Clarification of *xp* — a new symbol

The symbol *xp* is short for *explicate*, that is, to clarify something referred to. It is sometimes used as an alternative for *explain* but that is not how I propose to use it. Rather, I propose to restrict *explicate* to the idea of giving a logical analysis of something.

Now it is in the nature of the case that whenever one offers to explicate something, that is, to develop the implications of an idea, to analyze it logically, one does so by drawing an invisible curtain around it and stating, in some manner, that of all possible things that can be meant by the idea, one wishes to clarify a particular meaning, aspect or interpretation of it. One assumes that the term has multiple meanings, and one lays these out as best one can. Although one is free to offer more than one analysis of a term, this option is usually rejected and the effort is focussed on only one.

This is a mistake: all options should be on the table before one explicitly favours one and rejects the others. Rejection does not mean “incorrect”, but “not favoured now, within the context of the present discussion.”

We will use X as the symbol for what is being explicated, and XP as the set of statements submitted in explication.

XP may refer to “conditionality”, or to a condition of use, but it may also be mainly descriptive. The latter refers to the idea that one can talk about an event by describing it in various degrees of detail.

Take a scene like the arrival of a steam locomotive drawing a set of carriages into a small local railway station. The year is 1899! There is no doubt that ideally one would submit a set of photographs to depict the scene. Each viewer is then free to translate what he/she sees into words. Some may submit florid descriptions; others may use a few sparse sentences. Provided each also includes the name of the railway station, we can be more or less sure that they are describing the same scene. It does not mean that everything that can be said about the matter has been or will be submitted!

Therefore XP does not exhaust X: it clarifies but does not give exhaustive descriptions! Furthermore, XP does not (and is not designed to) create a rigid definition (in the sense discussed by Kripke (1980) and others; see Laporte (2012) in the Stanford Encyclopedia of Philosophy). On the contrary, X remains open and pliable, although it has been limited to some degree by the explication, XP, given. So we need to distinguish between a definition of a term and a description of what it could cover. It is not difficult.

In a definition we start with a word which is unknown to us, whose use is therefore unfamiliar, or which is ambiguous to us. We therefore ask someone to give us a definition, a statement telling us how this word is used by its speakers. The sign = xp therefore indicates that we are dealing with a clarification, not an equivalence.

One can give different clarifications and therefore treat the set of these clarifying statements as “more or less equivalents”! This explains why the same word is given different definitions in different dictionaries without causing a general strike amongst dictionary users.

A description, on the other hand, refers to any effort to state in words and sentences what some person has seen (experienced) of an event he/she witnessed or has knowledge of. Descriptions come in different degrees of detail and authenticity. If I report what several of my friends have told me about an accident in Mulberry Street which occurred some time ago in Fort Myers, the credibility of my report will be quite low, whereas if I added that I had witnessed the scene myself in part, the report might be given slightly more credence!

So descriptions are rated (usually silently) by using mostly implicit criteria to gauge their credibility. This is quite unlike what happens to definitions. I have previously suggested that we dimensionalize descriptions by talking about their rigidity: accordingly, a detailed description is also rigid, or becomes increasingly rigid. This rigidity also makes it easier to weigh a description against competing accounts of what happened in Mulberry Street.
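The idea of dimensionalizing descriptions by their rigidity can be made concrete with a toy sketch. The scoring scheme below is entirely my own invention, not the author's method: it simply counts crude markers of concrete detail (bare numbers and likely proper names) as a hypothetical proxy for rigidity.

```python
# Toy illustration of "dimensionalizing" descriptions by rigidity.
# The scoring scheme (numbers + capitalized words as detail markers)
# is an invented, hypothetical proxy, not a serious measure.

def rigidity_score(description: str) -> int:
    """Count crude markers of concrete detail in a description:
    bare numbers, and capitalized words after the first word
    (a rough proxy for proper names like 'Mulberry Street')."""
    words = description.split()
    numbers = sum(w.strip(".,!?").isdigit() for w in words)
    proper_names = sum(w[0].isupper() for w in words[1:] if w)
    return numbers + proper_names

sparse = "There was an accident on a street some time ago."
detailed = "On 3 May a truck struck two cyclists on Mulberry Street in Fort Myers."

# The more detailed account scores higher, i.e. counts as more "rigid".
assert rigidity_score(detailed) > rigidity_score(sparse)
```

On this (deliberately crude) dimension, the detailed account is the more rigid one, and hence the easier one to weigh against competing accounts of the same event.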

In so far as XP is time-bound (historically restricted) it is part of an effort to create (by an unknown group of individuals!) a particular universe, a wished-for closed system which portrays a possible universe, but not the only universe! Even if we make the dictatorial move to declare that this universe which is being described is the final universe, we acknowledge that “final” is one of a finite series with antecedents. It could therefore have additional successors — it is possible. See also Steven Weinberg’s illuminating discussion of this and related issues in *To Explain the World* (Harper, 2015) and his earlier *Dreams of a Final Theory: The Scientist’s Search for the Ultimate Laws of Nature* (sic!) (Vintage, 1993).

More on Rigid Definitions and Rigid Descriptions

A rigid definition — not to be confused with a rigid description (Kripke, 1980) — stands in contrast to a fluid or porous definition. A definition is rigid when the words that define another word (or expression) — the definiens — are designed to be unique and therefore cannot be substituted by other words. Does this happen? Indeed, quite often and increasingly so, because our language contains many more “technical”, “scientific”, “modish” or “proprietary” words, i.e. names of patented processes or of legally protected products, than, say, 25 years ago.

However, we often overstep the bounds of legal use and employ technical terms in a nontechnical manner. Users, when they do so, use silent, invisible quotation marks to indicate that words have more than one meaning or reference. This effectively undermines rigid definitions. No matter: the consistent effect of rigid definitions is to limit — and thereby to impoverish — a language immeasurably. Language probably emerged as a within-group behavioural device which effectively permitted a group to communicate its “mood” about the safety of its immediate habitat, and only later became an adjunct to “grooming behaviour”, a means of in-group consolidation. (This is of course pure speculation!!!)

To insist on rigid definitions is like extracting all teeth instead of proceeding at a measured pace, saving what needs preservation and leaving well enough alone. The pain, discomfort and loss of function are far too high a price to pay for the accrued benefits.

In any case, we are not in danger of dumping our current language habits in favour of a total makeover, as was once proposed, although one should remember that such a makeover was advocated for almost 70 years by many philosophers of science throughout the last century, by many who identified themselves as members of the Vienna Circle, or as Logical Positivists or Logical Empiricists (e.g. Bertrand Russell). Wittgenstein, one of the earliest advocates of this position, renounced it within ten years of publishing his Tractatus Logico-Philosophicus (1921), which had explored the consequences of applying the strict methods of empiricism to the language of science. Instead he advocated that we overcome misunderstandings induced by loose language habits by resorting to a more careful analysis of how we use language in different settings. This agenda involved drawing implicit demarcation lines between different uses and functions of language.

The work of Karl Bühler (1879–1963), sometime professor of psychology in Vienna in the early 20th century, was critically important in the next step: moving from a monistic position regarding the role of language in human affairs to viewing it as a multifunctional activity. He viewed language as having three roles: an expressive function, a representative function and a conative function (in the sense of “motivation”). Only the second of these, its representational (referential) role, had been addressed by philosophers of science (e.g. E. Mach).

Bühler’s students included the philosopher Karl Popper and the founder of ethology, the biologist Konrad Lorenz, but it was the former whose influential views on the nature of the scientific enterprise profoundly shaped our understanding of the relation between acts of discovery and the emergence of theories: how discoveries become integrated to generate a succession of falsifiable hypotheses and “theories” about the wider universe, i.e. cosmologies. He did not address questions about the expressive or conative functions of language. In retrospect it appears that K. Lorenz and his group of animal behaviour analysts (later known as “ethologists”) were primarily interested in viewing within- and inter-species communication as attributes whereby a species facilitated its survival.

Philosophers of science, generally speaking, have focussed on the second role of language listed by Bühler: how it serves to represent cognitive experiences. One should not overlook that every formulation of ideas — which ultimately finds expression in language and mathematics — develops quite slowly and tediously, and depends initially on borrowing from the positions of others (from fellow “citizens”) about what they themselves are about to experience. It is comparable to an 8-year-old child being asked to look through a microscope for the first time and told what he/she is about to see if he/she guides his/her sights in certain prescribed ways! Without such guidance most would see very little as they stare through the lenses!

And how is this guidance given? By “instructions” which are easily understood because they rely on a great deal of previous experience of being guided to a correct — often rewarding — outcome. So discoveries often — but not only — take place in an environment which includes some form of language, and ipso facto occur within a cultural context. Monkeys — and others — alert each other, their group, by shrieking, wailing, screeching — but do not give a careful description of the intruders they have sighted, and presumably of their fear — whereas humans do so with considerable panache!

In summary: If all our words were subject to rigid definitions we would need to increase our current treasury of words enormously, but we would also lose our ability to express our flights of fancy in words. We know much about “flights of fancy” in humans — but have reasons to be skeptical whether this occurs with equal frequency or élan in dogs, cats, or lice.

The notion of “incomplete descriptions” acknowledges that whatever particular description is available could perhaps be bettered, improved upon, extended, or added to. The understanding is that should this happen, it would not radically alter the “story” of the event or its narrative features. Thus it may not be materially relevant what colour the hats and dresses of the persons involved in the accident were, since accidents are commonly defined by the injuries sustained by those involved, not the damage to their clothes!

One could argue that not all definitions exhaust the meaning of the word being defined; on the contrary, it is rare to find such an exhaustive definition. We choose an easier way. Our objective is to identify salient features of a situation or an object.

This argument is based on the widespread view that the function of communication is to outline essential features of a situation, not to give a minute description of one’s personal experience to others. One might expect that if details are missing, people will demand that such errors of omission be rectified! In most situations the opposite is true: if one loads a description with details which others believe — rightly or wrongly — to be excessive, the chances are that our report will be overlooked in favour of those which are more pointed and brief. It is the listener who decides whether the information is sufficient. One is best served by using tools suited to a task!

One could add that if words are meant to have distinct, unique referents — a common approach to object-words — then it is possible that several of the words in use refer to the same objects, and are therefore redundant. Question: does every distinctive phenomenon have to have, or should it have, a distinct, separate label, a name? What is the relation between naming a phenomenon and the phenomenon? Are labels independent of the phenomenon?

Admittedly there are cases where it is highly desirable that an object or situation carries a singular name. Example: we use a variety of devices which measure time, and we have invented a string of labels for these, e.g. water-clocks, spring-clocks, atomic clocks etc. There are also designations which focus on other characteristics: pocket watches, grandfather clocks, kitchen clocks, etc. It is perfectly reasonable to ask whether a particular kitchen clock is electrically operated, a wind-up device (spring driven), a pendulum device etc. Everyone involved has agreed from the outset that it is reasonable to ask such questions — and also other questions — without necessarily knowing why each question is being raised. We say, “Such questions may be pertinent, and are also appropriate under some circumstances.” These questions are a way of averting the alternative — horrible thought — that a description must contain and include from the outset the kernels of all questions that could arise about the object/event!

In general, scientists prefer to label all “known” objects and phenomena, and they do so under the impression that the name will continue to be “proprietary”, limited to that object or class of objects. Historically, this is not what happens. Names of objects have a tendency to migrate: they grow in their realm of application before wandering into adjacent territory and finally proving of limited use to their “sponsors”. There are exceptions — but these are few. I doubt whether the name of a chemical compound would get co-opted as the name of a generic product — but it could happen.

Associative Nets

I am most grateful to Brian Kennedy for his detailed and insightful reply to my earlier blog *Are there infallible facts?* I’ve already followed up with the response *Empirical vs. Ex Cathedra Solutions*. Here is a second follow-up, selecting other points made by Brian, which have certainly taken the discussion beyond the limits of my earlier piece.

Brian pointed out that I had “exposed… that the noun *claims* is (often) subjected to more weight than it can bear”, but he also rightly pointed out that this is quite common in everyday language. He also makes the valuable point that ordinarily words reach over to connect with others, as when the term *claim* is associated with claimants, liabilities, and assets (especially in a legal context), although it is quite acceptable and common to use each of these terms without making explicit reference to any of these. Elsewhere — and more recently — I referred to this as cases where words are part of an associative-net.

We can, for example, ask others to “free associate” by giving a starter-word only and asking someone to come up with as many words as possible within the next few minutes. In many cases the strings of words each person presents during such serial association have significant overlaps. Wittgenstein talked about this phenomenon as terms having a “family resemblance”. It underscores that words or phrases should NOT be viewed in the manner suggested earlier, as independent items — a view proposed, for example, by an earlier Wittgenstein (c.1921, the author of the Tractatus) and also by members of the Vienna Circle (later known as Logical Positivists) — but that words reach out to others, as it were (see Wittgenstein post-1940).
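The overlap between such serial-association chains can be sketched in a few lines. The word lists below are invented, and the use of Jaccard overlap as a measure is my own illustrative assumption, not anything proposed in the text:

```python
# Illustrative sketch: measuring the overlap between two people's
# free-association responses to the same starter-word, as a crude
# stand-in for "family resemblance" between word uses.

def overlap(assoc_a: set[str], assoc_b: set[str]) -> float:
    """Jaccard overlap: shared associations / all associations produced."""
    if not (assoc_a or assoc_b):
        return 0.0
    return len(assoc_a & assoc_b) / len(assoc_a | assoc_b)

# Two hypothetical responses to the starter-word "claim":
person_1 = {"claimant", "liability", "asset", "lawsuit", "insurance"}
person_2 = {"claimant", "asset", "debt", "insurance", "gold-rush"}

print(round(overlap(person_1, person_2), 2))  # → 0.43 (3 shared of 7 total)
```

A high overlap between many speakers' chains would correspond to a tight "family" of associated terms; a starter-word that produces little overlap sits in a looser associative-net.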

For example, we understand that *claim* as used by lawyers and accountants is viewed as something owed (liability) or owned (asset). Furthermore, where there is a claim there is also a claimant, a party making such a claim, whether an individual or a group of people. Thus there have been two greatly opposed positions. One states that each word should be viewed as a separate, independent event, an item which is clearly demarcated from others. The second position argues that words, in general, are members of families, have resemblances, and act on behalf of other members of their family. A useful analogy to those of us who have families!

I generally take the position that in ordinary language we are best advised to assume the “family resemblance” stance. Most referential words and phrases follow the prescription that there is more than one meaning to a word and/or phrase, as demonstrated most clearly in any etymological dictionary or in Roget’s Thesaurus. In current-day formal sciences, however, the rule is that each word or phrase should have a rigid definition (although this rule is often broken within a relatively short time). These two rules clearly conflict. But it is like a tennis player on a badminton court: either he has already learned the separate sets of rules and uses them appropriately, as the occasion demands, or, if he fails to do so, he will surely be asked to leave the court by some imperious judge!

Let us briefly look at a family of adverb-adjectives which are often reified and thereby transformed in the process. Thus *truly*, *true* and *truth* (see example 1 below) represent such a family: these items have family resemblance, i.e., they are related in meaning, but have different grammatical status. However *truth* — the third of these — overarches, literally speaking: it is an abstract entity relative to the other two terms and “sits above” them, as it were.

Example 1
Item: truly, true, truth
Over-arching: truth

Example 2
Item: factually, factual, fact
Over-arching: factuality

The further examples that follow are similar, but include terms often used in philosophical discussions; they are “noun-words”, which appear as if they were names of objects — which they are not.

  • really, real, reality
  • necessarily, necessary, necessity
  • logically, logical, logic
  • infallibly, infallible, infallibility

Why say, “facts [as items] are claims to truth [in the over-arching sense]”? Why not say instead, “truths [items] are claims to factuality [over-arching]”?

How is the *facts/factuality* cluster to be distinguished from the *truths/truthfulness* cluster? Which of these two clusters (if either) is logically/definitionally dependent on the other? In what do these dependencies consist? Do these two clusters constitute a distinction without a difference?

Furthermore, whatever decision is reached in these matters, one should remember that such decisions have a limited time during which they operate reliably. Our effort to catch the world as “experienced”, on the fly, is a game with rules that, we now know, change as we play.*

*Imagine playing a set of tennis during which the rules change! This mattered little in the past, when games were long and players, generally speaking, did not outlast the set, as they do now. And let’s not even get started on games which we believe will continue “for all time.”

Facts as News Items

Comments and Commentaries: The blogs in this series were written over a period of three years. Many of my ideas shifted significantly during this period but instead of revising everything and forcing it into a common mold, I decided to let matters stand as first conceptualized. All comments are loosely connected by my interest in the idea of a “fact” or “what is a fact,” seen from a historical perspective.

It often pays to look at what a dictionary says about a word, especially one as widely used as *fact*. In my experience a dictionary may carry conflicting meanings, and this – as I discovered – applies with special force to *fact*. The reason is clear: *fact* is a word used on a daily basis to underwrite and support opinions strongly held, and presumably as a prop for opinions which cannot easily be justified by the “common man.” (The *common man* here refers to everyone in their relaxed, uncritical mood.) The reputable, much-used Merriam-Webster Dictionary (on-line edition) has the following entry for *fact*:

Fact: noun. A thing that is indisputably the case. Information used as evidence or as part of a report or news article. Synonyms: reality – deed – actuality – truth – case – circumstance.

The suggestion that a fact is part of a report in a news article came as a total surprise to me. Which part? In a news article, perhaps, as published in a typical daily newspaper? My philosophical head also spun on seeing the synonyms listed, for synonyms cannot automatically be substituted for one another, to serve as alternatives, without changing at least somewhat the meaning of the expression in which they are used. If this were the case, it would defeat the purpose of what most people regard as the primary role of a dictionary: to give a clear explication of a word: no ambiguities, please! What is indisputable about the Merriam-Webster entry is that the word *fact* serves as a noun – but this is not part of its meaning, and only identifies its grammatical status.

Many, but not all, news articles (whatever these are) report matters that are poorly substantiated and plainly not even truthful. Such articles often omit critical information and may include easily correctable errors, such as an incorrect date. Since newspaper and magazine publishers are for-profit organizations, their dominant goal is profit, not a service to an information-hungry general public.

Furthermore the quality of reports may vary greatly over time, as the history of the venerable London Times has demonstrated. Could the Times possibly become part of the yellow press and cease to cater to supposedly well-educated men of industry and senior civil servants in the UK, the empire builders and financiers of yester-year? Not likely. Yet it could certainly change the quality of its reports in order to fit the temper of the times. Indeed it has done so quite deliberately.

And what about those many matters which were once widely and commonly regarded to be well-substantiated, solid facts, but which lost their certainty, and thus their sheen? Facts, as we have learned, can be children of fashion, and fashions change at an alarming rate. Surely readers should be told how and when such changes are made (and they never are!). I would love to see a subheading of a New York Times article say, “Read with a pinch of salt,” or “This item may be too vinegary for some.”

Truth to tell, many journalists invent and fabricate facts – or at least put their own spin on factual raw materials – and the conclusions to be drawn from such imaginary facts. Are there perhaps advanced courses in journalism at universities and business schools which teach people how to turn facts into plausible fictions, and vice versa? Such courses may attract large enrolments.

Of course, I am not suggesting that all is rotten in Fleet Street or similar pockets of the news industry. To my delight, I discovered some time ago that in several countries the quality of reporting and the standard of commentaries about newsworthy events continues to be very high. Standards of excellence are also maintained in review articles of books, the theatre, films, art exhibitions, concerts and most certainly in political commentaries, regardless of which side of the spectrum editorial sympathies lie.

Would it be invidious to single out places or specific papers? Most of us confine our news-reading to local papers – many now free and doubly dependent on advertising revenue. These litter public transport and lie around on streets and in coffee houses. They are understandably sparse in writing for intelligent people. But cities like Zurich, Basel, Berlin, Munich, Vienna, Prague, Hamburg, Copenhagen, Amsterdam, Brussels, or Stockholm, to name a few in Europe, continue to value breadth of coverage and quality of writing. Not so throughout the USA, which continues to be poorly served by its daily newspapers but is better served by weeklies and monthly magazines. Different countries, different cultures; different attitudes about what is news, what the public is expected to know, and what are fair commentaries about events.

Ah well – why complain when more and more of us can access internet services and spend free moments watching awesome “telly” and the products of Hollywood and Bollywood.

In summary: facts are not news-items, but claims that a statement could be true. It depends on how stringent the criteria for “referential truth” are. This topic will be discussed in more detail elsewhere, in other blogs of this series.

Empirical vs. Ex Cathedra Solutions

Blog reader Brian Kennedy began a comment – written in April, 2013 – with an apology for arriving late to the discussion on “fallibility.” It is now my turn to apologize. I started my reply several times but failed to meet self-imposed standards for clarity. My reply is spread over several entries in order to preserve the blog-format, which insists on short communications! Each reply is separately titled.

Facts and their properties have occupied my thinking for much of the past year. I have suggested that our current term *fact* needs to be supplemented by terms that express the idea that many facts have lost their credentials and have been dishonourably discharged: what they assert is no longer so.

A famous case is the discovery (c.1886) that eels have sexual organs, i.e., reproduce in the “normal” manner! The discovery was consonant with Darwin’s view that fish reproduced sexually and were not worms reproducing through “spontaneous generation,” a view advocated by Aristotle two thousand years earlier. This discovery meant that statements supporting spontaneous generation as a mechanism for generating new life forms were considerably weakened to the point of extinction; these now joined the ranks of “factoids,” a part of dead science. In short, spontaneous generation was not an option to account for the emergence of new species. One-up for Darwin’s wild speculations?

Here then is a model for the transition of statements which describe the world in empirically false terms and how a new body of knowledge takes over. It wasn’t that the new theory was correct, but that the old one was faulty. Let’s not overlook that spontaneous generation was a plausible theory at the time – but in the end it was inadequate and rejected. Time to open a new bottle?

In summary, the challenge faced by biologists at the close of the 19th century was to discover evidence which either supported an older theory of speciation (sanctified by Aristotle) or contradicted a theory proposed by Darwin and others, that speciation was an ongoing process powered by a combination of mutations in cells (about which very little was known) and the adaptation of such mutants to their ecological niche. These were different but complementary tasks: find supporting evidence for two conflicting positions about speciation and/or find evidence which contradicts one, or both, of the theories advanced to explain the large variety of species found and the source of their often small inter-species differences.

At the time these matters were debated, microbiology and especially cell biology were in their infancy, half a century away from the great breakthroughs of the late 1950s. The initial problems were set by conflicting theories formulated during a time of inadequate, sketchy knowledge. We have here a case-history demonstrating how these problems were approached and resolved, step by step, through empirical investigations.

But the history of our knowledge about the world also records many cases where solutions were adopted by ex cathedra means, that is, by declaring a solution to a problem based on arguments from first principles. If there was disagreement, it was about how well deductions had been made from the assumptions adopted. First principles are assumptions which are not themselves directly challenged, but are assumed to depict a state of affairs either on the grounds of self-evidence, or because they appeared the best ones – the most rational – under the circumstances.

The most persuasive case of solutions reached in this manner is given by the proofs of Euclidean geometry, which assume that space is best represented by a flat, two-dimensional surface. All the conclusions reached by Euclid and his successors hold when applied to what is basically a “flat earth” model: the conclusions do not hold for concave or convex surfaces, i.e., not for globes. The assumptions that the earth is flat, that the earth is stationary, that celestial objects move relative to the earth, that the movement of celestial bodies is uninfluenced by their proximity to the earth, that light and sound travel through a medium, and specifically that light travels in a straight trajectory, etc., were not questioned until much later.

When these assumptions were challenged one by one and exposed to experimental investigation, it also marked the end of solving problems by the ex-cathedra approach. Of course deductions from first principles were valid when done strictly according to the rules, but the deductions themselves could not answer questions about what was in the universe to start with and how things worked in the “post-creation” period! Such questions demanded empirical solutions, the use of investigative methods.

Once it was accepted that empirical investigations could reveal new facts, it opened the door to the (dangerous) idea that old facts could be tarnished, even faulted – that new discoveries could be superior in some degree to old facts. To which old facts? All, or only some? Those facts declared to be facts by the first layer of assumptions made? A dangerous idea.

The history of comets is a case in point. Comets had been reported for thousands of years by both Eastern and Western sky-watchers, but were thought to be aberrations from a pre-ordained order of things that portended unusual events, like the birth and death of prominent people (e.g., Caesar’s death, Macbeth’s kingship, Caliban’s fate – Shakespeare was well versed in the Occult), or generally boded good or ill. But where did comets come from, and how did they travel in the sky? This was an especially difficult problem to answer if one assumed – as was common for thousands of years – that celestial bodies travelled around the earth on fixed translucent platforms, perhaps on impenetrable glass discs, each “nailed” permanently to a wall in the sky.

There were other assumptions involved, layered at different, more basic levels, for example the assumption that whoever created the world (the great mover, as assumed by some early Greek philosophers) must have created everything according to a perfect plan, using perfect forms, e.g., perfect geometric forms and patterns.

Such assumptions had to be jettisoned before one could consider alternatives which dispensed with the notions that (a) there were perfect forms, (b) the creator had to use, or was likely to use, perfect forms in constructing the world from nothing or from very plastic raw material, and (c) anything imperfect had to be an illusion, a distortion, an aberration, and therefore was unnatural! Comets, according to ancient astrologer/astronomers, priests, and others, were not natural phenomena at all, but a species of divine intervention in the normal, divine order of things. Divine aberrations.

The last few paragraphs illustrate quite graphically what I have tagged as ex-cathedra procedures, and demonstrate how a naturalistic philosophy (based on the assumption that knowledge by empirical discovery is superior to knowledge derived from first principles) works.

The issues have been debated by metaphysicians for two and a half thousand years, a period for which we have far from complete records. But I suspect that it is not debate alone that is decisive. As our understanding deepens through debate, we have to remember that underlying it are sizeable shifts in the way we see matters as a whole. Problems once vigorously debated have a history of passing into history and, in current terms, getting exhaustively archived “in the cloud.” Future generations can read all about them, but we in the present can only speculate what such solutions will look like. The past is not a good guide to the future.

Character of Science

Science should not be likened to a bound hard-cover volume, a collection of unchallengeable, incontrovertible truths. It is more like a loose-leaf folder in which our latest insights into nature, into aspects of ourselves and the accumulated wisdom of past learning are stored.

This makes it a highly correctable collection of items, not a book of ultimate truths. Our folder has inestimable value in a world which too often is haunted and harassed by self-righteous humans touting their own brands of Truths and Virtues.

I should add that although the collection itself consists of items we may regard as self-evident, it also contains much that is highly speculative. To sort this out is a daunting, unfinished business.

Meta-Logs and Meta-Signs

By a meta-log I mean a written device which identifies a word or a phrase for comment and does not involve the meaning of the word or phrase. In a sentence this means that the subject of the sentence is not what the word denotes; rather, the word itself is an object for discussion. It can occur when I talk with others about how a particular word or phrase is being used in ordinary speech, or in writing, as when an author is discussing how a word or phrase has been used by another person.

The prefix *meta* was used by Aristotle (c. 360 BC) to refer to a topic in retrospect. *Metaphysics* therefore was a discussion about the material matters he had studied, a reflection on their first principles. It could be translated as “after physics”, or “thinking about physics”. Over the succeeding centuries the meaning of the word Metaphysics shifted substantially, so that most people now equate it with “Pure Philosophy” as distinct from its specific areas, say, Logic, Political Philosophy, Ethics, Aesthetics. *Meta* as here used has been combined with *log*, as in *meta-log*, and has been assigned a distinct, unambiguous meaning: a way of identifying words, phrases, even sentences when these are being talked about, or discussed — that is, are the subjects of a thought — and are not used in their normal, denotative way.

A problem arises, however, when speaking rather than writing, and particularly when lecturing. The speaker may wish to talk about a concept and, in order to retain the interest of his audience, he raises or lowers his voice, adjusts its volume, perhaps also gesticulates in various emphatic ways: raising eyebrows, wrinkling his nose, looking skywards (a sign of desperation perhaps), and using gestures like marching up and down the stage, or raising his hands to show the importance of what is being discussed. All these are aids which, he hopes, enliven his presentation, give emphasis to particular ideas, and maintain the interest of the audience. We can refer to the lecturer punctuating his material, bringing it to the attention of the audience, making certain items more memorable.

One widely used gesture which shows that something is being quoted from another source is to raise one’s hands, extend two fingers on each, and rapidly flap them. These are air quotes — visual quotation marks! The gesture shows that the words are a quotation, or refer to a concept which is widely used but which does not necessarily meet with the approval of the speaker himself. For years I used this sign whenever I used the word *instinct*, a term anathema to me at the time. The term is almost without meaning within contemporary academic psychology and, although widely used by the English-speaking followers of Freud, it is a poor translation of the German word *Trieb*, which means an impetus, or inchoate driving force. The term *instinct*, on the other hand, as used by some biologists at the turn of the 20th century, referred to an action pattern which was believed to be unlearned — that is, its origins were unknown — but which is within the repertoire of an animal through a genetic (again, unknown) process. It was re-introduced by Konrad Lorenz and N. Tinbergen in the 1940s, but in a different sense to its earlier use. This simply added to the confusion. Tinbergen often referred to innate action patterns, although his famous book was called A Study of Instinct (1951).

But we also need a gesture to indicate that a term or expression is not a quotation from some existing text, but is a topic under discussion. If the subject refers to a fictitious event or something which is likely to confuse an audience, it has been customary to place it in gestural quotes. My favorite example: the bandersnatch (see Lewis Carroll’s Through the Looking-Glass). Once an audience accepts the bandersnatch as a fictitious animal, which has been given an incoherent description in the poem “Jabberwocky”, there is no need to place it in quotation marks, or give it any distinctive marking: the reader and the audience are now well prepared and are therefore unlikely to get confused. But when one wishes to discuss this “animal” and others of a similar nature, the discussion becomes “meta-talk”, and one may wish to indicate this by using a distinct marker — or meta-log. In writing, the meta-log I have suggested on an earlier occasion is *x*, where x refers to the object under discussion; when an idea is involved which requires not just a label but an expression, we can use *xyz*. Time will tell what conventions will get adopted.

In speech, I propose to use a sign which is easy to make, to indicate that we are in “meta-talk” mode, that is, talking about a concept. The sign involves raising both hands in clenched form to shoulder level, then quickly stabbing the air with both forefingers thrust forward, like an adder on the attack. The gesture is the same whether it covers a single word or a whole phrase. The audience will very quickly learn what meaning is being conveyed and distinguish a quotation from a comment about the status of a concept!

Quotes, Italics, Capitalization and Asterisks

The problem I am currently facing is very common, and the two solutions I offer may appear retrogressive and unwelcome. Nevertheless, I urge that my second suggestion be adopted.

The problem arises when one wants to refer to a word as an item in a language regardless of its meaning. In my case, I am currently writing about the role of language in the genesis of theories about natural and social phenomena. This often requires that I talk about specific words and phrases, and how these get to be used in different circumstances and contexts. The word that comes up most frequently is, in fact, fact — a highly ambiguous word with several different meanings. In the past, whenever I wrote about facts, what these are, how this word is used in different circumstances, I adopted the common convention of using italics, but more recently I adopted the habit of capitalizing the first letter of the word under discussion, as in Fact or its plural form, Facts. I found this to work quite well.

But capitalization has its own problems, as could be seen until recently in German, where there were many rules as well as a great many exceptions to them. It took German children much time and effort to learn to spell correctly — many did not succeed, and often failed at school because they were deficient in orthography! Foreigners just got overwhelmed, got lost in the fray, and were readily forgiven by natives for their orthographic transgressions. This changed with the spelling reform of 1996, which made capitalization mandatory only for nouns, with pronouns of address capitalized as a matter of courtesy. Of course not all previous conventions were set aside, but much became discretionary rather than decreed. The young celebrated; older people more often bewailed what they perceived as a decline in standards. And there the matter rests.

In English – as just demonstrated – matters are not entirely simple either. The personal pronoun I is capitalized, but the others are not. When writing about the queen, one writes Her Majesty Queen Elizabeth, or, about the Lord Mayor of London, Sir Alex Appleby, since names are capitalized, as in other languages, and so are any titles that go with them. Writing sir or mr. Alex Appleby would be orthographically unacceptable in English. These conventions are easily learned and are no longer cause for being failed in examinations at school or university, as once was the case.

In English, then, capitals are needed only for the name of a person, as in Jon Henry or Queen Elizabeth, God, the personal pronoun I, special names like the Ministry of Transport, companies, compass directions, the names of continents, countries, counties, provinces, cities and hamlets and, of course, the first letter of a new sentence. English has other orthographic problems, made worse by the fact that there is so little relationship between the pronunciation of a word and its spelling. Some writers, like Charles Dickens, simply wrote in imitation of sounded speech – which often makes for awkward reading. Speed-readers beware.

My suggestion here is that when one writes about a word, the word referred to be presented in one of two alternative ways. The first is capitalization: instead of writing that the use of containers should be restricted to containers used to transport bulk, we write that Containers refers to containers employed in bulk transport, not to items like individual pots and vases. Similarly, we write that philosophers use Facts in different ways when this refers to data and not to the confidence someone has in a record of a particular event.

Why not use italics rather than capitals? As a matter of fact, this is often done. We then write that philosophers use fact in different ways, including to refer to data but also to the confidence one has in the reliability, or the accuracy, of the record made of an event! However, this proposal adds confusion, given that we already have conventions for using italics, such as adding emphasis to a word or phrase, that work quite well. Extending their use, therefore, confuses rather than clarifies.

A different way of solving my original problem is to use asterisks to mark the critical word or – more importantly – phrase. Write *fact* when discussing this word as a word, not its meaning. Similarly, enclose a whole phrase in asterisks when the phrase is to be discussed without regard to its specific meaning, as in *on the wrong side of the road*. This phrase has two different meanings. Its literal meaning is travelling on a road, but not on the side prescribed by law. Its other meaning is analogical: that one is oppositional, acting contrary to what is generally done. The asterisks help one to discuss the phrase without arguing about its proper use.

Time will tell which of these two options will be adopted. In the meanwhile I may try out which one suits me best. I hope others will follow my suggestions – and that they will let me know.