Dedication

As most of you know, our beloved mentor Dr. Harry M. B. Hurwitz passed away in August 2018. We have come across the following note, dated June 2011, and would like to share it with you.

I recently wrote a set of essays, each of which I dedicated to a particular friend. He is one of many no longer here, but like others he lives in full colour with me, in my memory. Those friends who are still alive may not recall me with the same vigour or affection as I recall them, but this does not matter, since affection does not demand reciprocation. Furthermore, memories are fickle – one person’s belies that of another. In many cases I have outlived my friends by happenstance, with the aid of good genes and excellent doctors. But in other cases death comes unannounced and unexpectedly, leaving families and intimate friends distraught and diminished.

I miss my friends and would have wished them everlasting life. But wishes are not a factor controlling the rhythm of our existence. One learns rather painfully to be content with memories and to celebrate these often and on appropriate occasions.

Dr. Harry M. B. Hurwitz — June, 2011

We miss you, Harry.

Clarification & Definition

The question “what does a philosopher do?” and the complementary question “what does a dentist do?” are not answered by giving a narrow, restrictive definition of the two critical terms, but by offering a clarification of the meaning of a particular unfamiliar term or phrase.

To do so effectively may require that each term be placed in as many contexts as possible. This would demonstrate to a foreigner unfamiliar with the term its breadth of use. The foreigner may then search within their home-language for a comparable term, perhaps one similar in sound, and thereupon take the plunge, aware that he/she may indeed have guessed incorrectly!

Both terms cited are names of occupations — what a person does in their occupational life during working hours. An answer would therefore consist of a sentence or two which identifies what each group of people carrying the label does as wage-earners.

There is considerable room for error when answering each question. Yet, within limits, any answer given would be open to correction or modification. (Note: We tend to be tolerant towards outsiders when it comes to language-use — we forgive them their trespasses!)

In what way does a clarification differ from the definition of a term? Every clarification is an attempt to explain to someone who admits that they do not yet understand the meaning of a term, in what way the unknown, unclear term is used by others who — it is assumed — are already familiar with it and its common (even several!) uses. We asked the question originally because we realised at that moment that we were not yet privy to the term as it is being used by others. In truth, we wish to participate in a conversation and realise that to do so we have to understand what others are saying. We generally also use a context to facilitate or aid our understanding of what particular events are being referred to, what is being named, or what quality of an object or event is currently the focus of interest for others, the current centre of their concerns.

Clearly, offering a definition to someone who has actually asked for a clarification may help that person, but the definition may itself contain elements which are not understood by the interlocutor. He/she may come back to request further clarifications, and may do so until every term used in the definition is understood, or until the definitional sentences themselves are fully understood.

Furthermore, when people ask for the clarification of what is for them currently an unfamiliar word, term or expression, they expect us to stake out the characteristics of what is confusing or unfamiliar to them! Only then do we say, “I now fully understand” (and often also add a sigh of relief)!

Much of what we say in an explanation will be quite clear except for a particular (target) term or terms. A definition, then, may help us to some extent, but not on all occasions. The puzzling term may already be familiar to someone, but not the context in which it is being used on this particular occasion. Of course, people bring a vocabulary to every discussion (unless they happen to be foreigners whose language has no overlap with the language being used). If a language of words is unavailable to the parties of an exchange or conversation, such people would indeed be severely handicapped and may be forced to exchange even elementary ideas like *right* or *left*, *up* or *down* by resorting to forms of communication other than words, much as earlier European explorers did during voyages of exploration and discovery in the Americas and the Far East during the 15th and 16th centuries. These voyagers employed gestures or even acted out their ideas, wishes and proposals!

Gestures between humans have always been helpful but do not promote discussions about ideas. Modern humans live in an environment which may be described as consisting of references to objects and items which are products of human invention and whose uniqueness is given by their appearance and functionality, that is, by our having learned how an object differs in kind from another by virtue of its context.

A prime example which comes to my mind is the ubiquitous button or switch, whose functionality is associated with whatever operation it was programmed to control. The button on my electric dryer is the same as the one in my car — but its functionality is totally different and non-comparable. Much depends on the preparedness of the questioner to be taken into a field of knowledge with which they are already familiar, for which they may already have at least a rudimentary vocabulary.

The contemporary world is so stocked with “knowledge” about diverse matters that most of us are truly ignorant, although many are prepared to learn and to add to both our existing knowledge and vocabulary! Ignorance can be remedied, and more people than ever are prepared to do so. We have all somehow learned that errors and gaps in knowledge are widespread and often astonishingly common, so that it is our individual responsibility to remedy this (lamentable) state of affairs whenever possible. Many of us do.

We do not usually add to our knowledge of things and events by learning (memorising) definitions. Learning definitions — whether by rote or in some other way — has its uses, but it is a method useful in specialised contexts only. Most of us learn to offer a definition upon request, e.g., the definition of a soup-spoon in contrast to a tea-spoon. The latter would not be covered by “a smaller version of a soup-spoon” or “a spoon used to stir a tea-pot”, whereas “a spoon smaller than the normal soup-spoon, used in a variety of situations where a small spoon may be useful, like eating a cup of berries” would serve.

In short, a clarification serves to help us learn the meaning of a term, including in what context the troubling word is used most frequently, and also when it is used rarely. A definition of a word is more narrowly aimed. First and foremost it serves the purpose of informing us about a word’s restricted reference, even though any word may have a wide range of uses and meanings. It is a more advanced undertaking which often requires that learning the new word also involves learning how to use it figuratively, that is, analogically.

Part 1: Science does not prove — only declares

Note: Herewith several related comments on the role of “proof” in the contemporary natural and human sciences. A critical comment on a recently-published book by A. Aczel — which carries the confusing title Why Science Does Not Disprove God (2014) — is also included. The first of my comments deals with what it is that “scientific activity” produces. The heading “Science does not prove — but only declares” summarises my conclusion: in brief, that the outcomes of scientific inquiries are a series of declarations of our current knowledge about “our world”, which together constitute a comprehensive representation — a momentarily authentic picture of what the world is like.

What is commonly understood by *submitting a proof* and its opposite, *submitting a disproof*? When someone submits a proof they are said to demonstrate to others how a conclusion they themselves (privately) reached was obtained. More specifically, how they arrived at the conclusion by entirely logical means, and not by empirical demonstration.

If they told us that they just “felt” that the vase they had unearthed was a Greek urn which had once contained the ashes of a fallen warrior, we would call it a “guess” but not a true discovery, unless the claim was supported by much more evidence or provenance! If they “show” that something they had foretold has materialized, for example that a gesture made towards heaven produced a hail of manna, that is not a proof but only a demonstration: it shows that their prediction worked on this particular occasion! Such predictions were once made routinely by reputed “wise men”, but none have been reliably recorded for the past few hundred years.

A proof, on the other hand, refers to a post-facto event which states that whatever was initially said about a matter followed logically from some earlier, explicitly-cited assumptions. For example, that 2 of anything added to 3 results in 5 items. In this case there is no doubt about the existence of the numbers cited, or that the number 5 can be generated in different ways. The assumptions may not be empirically true — often they are not! What we have here is a calculation which involves abstract, not empirical, events. Two goats standing in the meadow and three sheep grazing nearby make five animals. Contrast this to the claim that “when I put a match to the spout of this bottle, a flame will emerge”. The answer to the question “how is this possible?” will require, amongst other things, a reference to specific, well-attested laws of chemistry, rules about which substances are flammable and which are not.
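To make this concrete, here is a minimal sketch (my own illustration, not the author’s) of how “2 added to 3 makes 5” can be derived purely logically from explicitly-cited assumptions, namely the standard definition of addition on the natural numbers:

```latex
% Assumptions (Peano-style): writing S(n) for the successor of n,
% addition is defined by  n + 0 = n  and  n + S(m) = S(n + m),
% with 1 = S(0), 2 = S(1), 3 = S(2), 4 = S(3), 5 = S(4).
\begin{align*}
2 + 3 &= 2 + S(2) = S(2 + 2)\\
      &= S(2 + S(1)) = S(S(2 + 1))\\
      &= S(S(2 + S(0))) = S(S(S(2 + 0)))\\
      &= S(S(S(2))) = S(S(3)) = S(4) = 5
\end{align*}
```

No goats or sheep are consulted at any step; the conclusion follows from the definitions alone, which is exactly what distinguishes a proof from an empirical demonstration.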

A proof, in short, refers to the outcome of clearly-stated logical operations. Such operations are traditionally performed only by humans — although many psychologists and biologists have argued that they are also found in some non-humans, but only in those species whose nervous systems have features similar to ours, containing, for example, neural circuits, a hemispheric brain, a cortex, and areas which have become centres of control for specific outcomes or operations.

Much has been written and speculated about the relation between the brain as a cohesive organ and as a processor of information, and how such information may eventually translate into states of awareness and also into actions — but it is an ongoing, not a completed story — part of a book with many chapters, of which only the first few have been written so far. The future, we predict, will surely offer many additional surprises, and these will be related to the fact that with time and much effort we may get to know more and more about the functional and structural properties of the brains of different species.

One enduring (and thus far unsolved) problem has been to account for corrections which are made by an individual member of a species as a result of their past experiences — and how these could be forwarded (transferred) to their descendants to facilitate the behaviour of unborn generations. Are there some aspects of our experiences which are coded so as to become transmittable from generation to generation, just as many bird-songs are? The empirical answers to such questions will most likely emerge within the foreseeable future, but in the meantime we can only create increasingly better questions and suppositions about what goes on within creatures which reflect changes in their daily lives, specifically how they come to predict some future events on the basis of their earlier experience or perhaps even by virtue of cross-generational transmission. I could imagine, for example, a mechanism whereby a set of experiences could be transmitted to several future generations, with traces of former experiences gradually waning and disappearing. One may need to exclude carry-overs from the immediate past, because these changes may only reflect temporary matters, traditionally covered by the term *habituation*, i.e. transitional intra-organic changes which leave very few or minimal enduring residues for transfer to offspring.

When I state — as in the title of this blog — that “Science does not prove but only declares,” I mean that the fruits and outcomes of scientifically conducted investigations take the form of declarations which one has first presented to oneself. Modern science is a communal activity, whose traces are found in a group of cohorts, and it usually demands that anyone who makes and accepts a new claim can and will defend it publicly in person — as at a scientific conference — or by circulating a documentary report of their investigations in a publicly available journal where it can be criticized by others!

One communicates one’s claim by issuing statements, which may contain abstract formulae, that summarize both what one has done to secure the information and what one has concluded from such earlier work. It is a declaration of the truth as seen by oneself, made publicly known so that it can be openly viewed and, if so deemed, criticized! The declarer admits that they could be mistaken about some or even all of the summary conclusions presented, but hopes that little of what they claimed will have to be withdrawn or revised as a result of criticism.

The popular statement that the “proof of the pudding lies in its eating” is therefore incorrect. The proof of an argument in particular lies in the correctness of its logical derivation, something which requires that the steps taken accord with well-stated rules. The rules predate the investigation, the research. One assumes that all the assumptions made in an extended argument are correct and do not contradict other explicitly made assumptions. To be correct therefore assumes that what a statement declares is independently defensible, and does not depend on the correctness of an individual’s perception only. It is assumed to rely on verification by everyone involved or concerned with the argument: what has been claimed can also be independently supported by applying a common method to the claim.

For example, several claims have been made throughout the last 1500 years that the shroud in which Jesus was wrapped after his body was taken from the cross — an account which most do not doubt — was subsequently found and is now available for public display and examination. However, each shroud so far examined (there have been several) has failed to stand up to all the tests applied, including tests of its reputed age. Thus the hypothesis that the original shroud had been found has not been supported, and cannot be affirmed with confidence, but seems to be based on a wish to believe that such a shroud exists.

Of course, such wishes have no permanent place in scientific investigations but have to be abandoned regardless of their origins. (I’m sure the priests in Egypt believed their stories of the origins of humanity, just as the early priests of Judaism believed in their, may I add, fanciful account of the origin of women!) Evidence cited to support a “position” is often viewed as a distraction in such cases, since nothing is stronger than the wish to believe.

My preference, therefore, has been to view each declaration in Science as a temporary, time-bound claim only. All these and similar claims are ultimately disputable — and are more than likely to be disputed. The claims may therefore need amendment(s), or may de facto be discarded under the heading, “was of one-time interest because its claim accorded with other plausible pictures or representations available at the time.”

Remarks on Empirical vs Ex Cathedra Solutions to Enduring Problems

“Facts and their properties” has occupied my thinking for much of the past year. I suggested earlier that the current term *fact* needs to be supplemented by terms which express the idea that many matters which were once considered as factual have lost their credentials and have since been dishonourably discharged.

A famous example of this was the discovery (c. 1886) that eels possess sexual organs, i.e. reproduce themselves in a “normal” manner by sexual coupling — a discovery consonant with Darwin’s view that fish reproduce sexually and are not worms reproducing through spontaneous generation, which was the view advocated by Aristotle (c. 330 BC) two thousand years earlier. This 19th-century discovery meant that statements which supported “spontaneous generation” as a mechanism for generating new life-forms were weakened to the point of extinction; such statements therefore joined the ranks of “factoids”, as part of dead science. In short, spontaneous generation was not an option which could be summoned to account for the emergence of new species.

Here then is a model for the transition of statements which describe the world in empirically false terms and how, at some later stage, such statements are replaced by a new body of statements, by new knowledge. It is a complicated process which often meets fierce resistance, particularly from those who have been entrusted by their fellow citizens to guard our hoard of “knowledge”, like the Nibelungen’s guardian dragons protecting the golden treasure of the Rhine. Old treasured tales do not pass gently into the night but fiercely resist attempts to demystify them. It is not simply that some newer theory is found to be correct, but that someone finally convinced others of the error of their ways, and the old theory was discovered to be faulty, perhaps in several respects. Spontaneous generation, the pre-Darwinian theory used to account for new species, was a plausible theory at the time these matters were first discussed — but in the end it was deemed inadequate and was therefore rejected by the scientific community as a whole, and by those we had entrusted to safeguard our knowledge.

In summary, the challenge faced by biologists at the turn of the 19th century was to discover evidence which would suffice either to continue supporting an older theory of speciation (as sanctified by Aristotle two thousand years earlier) or to find evidence which contradicted the theory proposed by Darwin and others during the first half of the 19th century: that speciation was an ongoing (contemporary) process powered by a combination of mutations in cells (about which very little was known at the time) and the adaptation of such mutants to their ecological niche. These were different but complementary tasks: (1) find supporting evidence for two conflicting positions about speciation and/or (2) find evidence which contradicts one, or both, of the theories proposed to explain the large variety of species observed and the source of their often small inter-species differences. These matters were first debated when both microbiology and especially cell biology were in their infancy, still half a century away from the great breakthroughs of the late 1950s. The initial problems were set by conflicting theories which had been formulated when knowledge about these matters was sketchy and mostly conjectural. Historically, here was a case of how such problems were approached and resolved, step by step, often secretively, through empirical investigations.

But the history of our knowledge about the world also records many cases where solutions were adopted ex cathedra, that is, by declaring a solution to a problem based primarily on arguments from broadly defined first principles. If there were public disagreements about these, they concerned how well the deductions had been derived from the assumptions adopted. These first principles, as they came to be known, referred to assumptions which were not themselves directly challenged, but which were assumed to depict and reflect an existing state of affairs, on the grounds that they were self-evident to the theorist (the person who mattered) or because they appeared to be the best (as in the most rational) available under the prevailing circumstances to the writer and his friends.

The most persuasive cases cited from the past of solutions reached in this manner were the proofs of Euclidean geometry. These proofs had been available to the educated elite of successive historical periods, who had assumed that space is best represented by a flat, two-dimensional surface. Thus, all the conclusions reached by Euclid and his many successors over the next 2,000 years were deemed to hold when applied to what is basically a “flat earth” model: however, conclusions so drawn did not hold for spaces which were concave or convex, i.e., did not hold for the surface of globes. The assumptions that the earth is flat, that the earth is stationary, that celestial objects move relative to the earth, that the movements of celestial bodies are uninfluenced by their proximity to the earth, that light and sound travel through a medium, and specifically that light travels in a straight trajectory, etc., were not questioned until the end of the 19th century. When these older assumptions were challenged and exposed to experimental investigation, this change in approach also marked the end of solutions to problems which used the purely deductive approach.
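A worked illustration (my own, not from the original text) of how a Euclidean conclusion fails on a curved surface is the angle sum of a triangle:

```latex
% In the Euclidean plane, the interior angles of any triangle satisfy
\alpha + \beta + \gamma = \pi
% On a sphere of radius R, Girard's theorem gives instead
\alpha + \beta + \gamma = \pi + \frac{A}{R^{2}}
% where A is the triangle's area. Example: the triangle formed by the
% north pole and two equatorial points a quarter of the equator apart
% has three right angles, so its angle sum is 3\pi/2, not \pi.
```

The theorems themselves remain valid as deductions; what fails is the unexamined first principle that the surface being described is flat.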

Of course, deductions from first principles remained valid when done strictly according to a priori rules of logic, but the deductions themselves could not answer questions about what composed the universe to start with, or how things worked during “post-creation” periods! Such questions demanded that one demonstrate that any claim about a state of affairs had been independently confirmed, that there was a correspondence between a state of affairs as perceived and what was being asserted about it. Under some conditions the meeting of two points in space does not make “sense” and therefore needed to be viewed as an “impossible act”!

Once it was accepted that empirical investigations could reveal new facts, the door was opened to the (dangerous?) idea that old, existing facts could be tarnished, even faulted — perhaps that new discoveries could be superior to old facts. To which old facts? All, or only some? The facts declared to be so were supported by the first layer of assumptions made. It was a dangerous idea.

The history of comets is a case in point. Comets had been reported for thousands of years by both Eastern and Western sky-watchers, but were thought to be aberrations from a pre-ordained order of things, which portended unusual events, like the birth and death of prominent people (e.g. Caesar’s death, Macbeth’s kingship, Caliban’s fate — Shakespeare was, as is well known, well-versed in the Occult, as were many in his audience). But where did comets come from, and how did they travel through the (layered) sky? What propelled them, and for what goodly reasons did they come menacingly across the sky? It required special agents to interpret such rare public events: inspired seers, more likely to be messengers from the gods, who were believed (before the advent of Christianity) to be equipped to find answers to especially difficult questions! Thus if one assumed (as was common for thousands of years) that celestial bodies travelled around the earth on fixed translucent platforms — perhaps on impenetrable crystalline discs, each of which was “nailed” permanently to an opaque or translucent wall in the sky — this belief opened a number of possible answers to previously unanswered questions. (We create worlds for ourselves which make it possible to answer questions which bother us!)

There were other assumptions involved, for example the assumption that whoever created the world (the great mover, as assumed by some early Greek philosophers) must also have created everything perceivable in accordance with a perfect plan which employed perfect forms, e.g., perfect geometric forms and patterns. Such assumptions had to be jettisoned before one could consider alternatives which dispensed with the notions (a) that perfect forms existed ab initio, even before the world did, or (b) that anything imperfect must refer to an illusion, a distortion, an aberration, and was therefore itself unnatural! Comets, according to ancient astronomers, priests and others, were not to be viewed as natural phenomena, but as unnatural aberrations, beyond what super-intelligent entities — beings which could intervene in the normal, divine order of things — would do! Thus our ancestors provided for the possibility that a construction could arise which was built by following imperfect rules of construction and which could, by virtue of this, also implode unexpectedly!

The last paragraph illustrates graphically what I have tagged as ex-cathedra procedures, and how a naturalistic philosophy arose which was based on the assumption that knowledge attained by empirical discovery is inherently superior to knowledge derived or deduced from first principles. The issue is to justify why either choice would be superior in effect to its alternatives: it has remained a casus belli between different factions of metaphysicians for two and a half thousand years — perhaps even longer. We have, of course, few records, if any, which would support either position wholly or fully. This may perhaps change in future as we increasingly and assiduously store all earlier findings and speculations, regardless of how well supported, before we undertake the awesome task of assessing each on the strength of its merits. There are no ultimate judges, as far as we know, who can undertake this task and also assume responsibility for any final recommendations they may make!

The Fictions We Create 7: More on descriptive (empirical) and evaluative terms

This article continues an argument started in article #6 of this series, The Fictions We Create 6: Flawed diamonds — description or evaluation?

Can *flawed* (in the sense that some items fall short of a proposed standard) and its antonym, *unflawed*, be viewed as descriptive terms? Both are used evaluatively and do not describe in the ordinary sense of that word (referring to features). We may talk about a diamond being “flawed” but mean that the gem has features which make it less than perfect. In the eyes of experts this may degrade its market value, but it does not determine whether the item belongs to the class of diamonds or is cut glass.

Similarly, a farm chicken which has lost its feathers — it was plucked by one of us! — is still a bird, although it is the worse for receiving such uncharitable, cavalier treatment. My argument, hopefully, is clearly Aristotelian, since it starts from the premise that a bird is an object defined by a finite list of qualities which make it “bird-like”, which give it bird-status! (This is not how modern biologists view species!)

Admittedly, although “definitions” may be useful tools for sorting a heap of bric-a-brac into smaller manageable categories, definitions of names should not be confused with efforts to discover why each item differs from others, or what makes a bird different from a rodent! Nor should this be confused with a search for explanations, e.g., why a bird is what it is (or seems to us), how differences between events (and objects) originally arose and have come about, or what caused — in the sense of created — a difference between events.

These questions are historical and therefore should be answered using primarily historical methods. This requires, inter alia, that answers state how different (specifiable) states changed over time and circumstances, and what it was that specifically promoted such changes. It is a case where we wish to have knowledge about the circumstances (specific and general) which give rise, uniquely or in general, to such changes, as when we comment that “this stone has moved its position since I last saw it!” The answer to this particular question may be “Someone deliberately moved the stone up the mountain” or “It must have fallen or been washed down the hill during recent rains.” Note, however, that in both cases the question of its change of location was answered by referring to an outside (i.e. external) agent of change — and not by reference to an agent like some property of “volition” which is assumed to be common to all stones (!) or perhaps only to those stones with special markings! (The stuff of fairy-tales, where apple-pits or stones can turn into genies!)

Historical methods would only indicate how we moved from one conception of a phenomenon to another, whereas “causal” methods supposedly focus on how things work or on how things have come to be what they seem to be. Aristotle — and other thinkers of his period, including his students — raised these and related issues partly because they were firmly convinced that “change” in anything, whether of type, of movement, or of contingent features, reflects an unstable universe, that is, an imperfect world, whereas human reason revealed that it was our immediate perceptions that were variable, not the world as such! This was a major metaphysical assumption to make, one he had taken from Plato. Thus prototypes (ur-forms) were stable, whereas much of what we experience was regarded as ephemeral, even as shadows of the “real”, as representations. It reflects a “metaphysical stance”, a position based on our reasoning about matters which are given to us a priori.

The distinction made in earlier blogs between terms which serve to describe features of things and those which are evaluations of a feature of a “thing” is critical for two entirely different reasons:

(a) Descriptions are used to identify features of events. It is not claimed that these descriptions are complete and therefore form an exhaustive list, or that they are ordered in importance, but only that each belongs to a list of attributes ascribed to a named thing. These may include reference to its relative distinctiveness, as when someone mentions that parakeets are “green-feathered all over” or that “Henry VIII in old age was bloated” — an empirical assertion which could be falsified and thereby eliminated from the list of “essential attributes”.

(b) Evaluations are used to compare features of events as these stand to each other on some common yardstick. A “flawed diamond”, for example, selects a quality of a particular stone, but does so both in relation to other stones and by reference to a “perfect” or “ideal” one.

Aristotle suggested that we require a comprehensive inventory of things before we can inquire into the nature of each. He emphasized that things have properties which identify them in two ways: as individual items but also as members of a class. Thus a thing may be a sample of a class, or it may refer to the class itself. There is a class of “man” but there are also instances, like “Socrates”. A simple but different example: shoes are protective footwear used by men, women, and children and are produced in all sizes as well as for each class of humans!

The Fictions We Create 6: Flawed diamonds — description or evaluation?

Question: Is the universe — as conceptualized by earlier cosmologists — an entity which could be described as either perfect or imperfect? In the former case, it could be referred to as a “flawless” universe, or at least as a universe becoming flawless. Indeed, this is how it was viewed by many Western theologians for the past two millennia. But inasmuch as the universe was not without flaws, blame was placed on the iniquity of humans, not on its Creator. Not a convincing argument! One could reason that flaws in humans are due to how humans were “designed” (with what potential flaws?), or to the plans prepared for their creation, or to the original designer for having created a flawed species.

The complementary idea that the universe itself had flaws, that it is not perfect, has not been put forward by theologians. How would anyone find out whether this was true, or even reasonable? Assume, for example, that the “world”, or the “universe”, was not flawless but had detectable blemishes! This argument would be based on the premise that the initial forces of creation were faulty, or were deficient in some sense, that its most significant product, humans, was ill-designed — a view which has not found many, if any, advocates! However, the notion that humans — or other creatures — were “designed” is itself contentious and involves an odd use of the term *designed*. It raises the issue of whether the term is appropriate when used in such a general, almost frivolous, and unrestricted manner. Is it not better to remove the term “designed” from its traditional plinth?

Stated differently, the notion that humans were created according to a design has the logical status of an empirical hypothesis. It implies that a design for this particular species preceded its appearance. But this would apply to everything else, too. Evidence for such a hypothesis (or contention) is missing. One would therefore expect the hypothesis to die a natural death. It should be discarded in favour of a better, superior proposal! We have indeed already waited too long for this to happen. There are many now who would argue that it is time to re-state the original question, and to do so preferably in a manner which makes it more readily answerable.

What kind of concept is *flawed* (or *faulted*)? It is certainly an evaluative term, since it judges by referring to a standard. As commonly used, *flawed* is not a property ascribed to an event, but involves a criticism of it. It involves a comparison of two or more similar, related items with respect to a particular feature each of these displays.

A more familiar example than *flawed* is *tall*. It compares objects by appealing to an independent measure of height. *Taller*, therefore, refers to a comparison of height between different, possibly even unrelated objects, as in “this giraffe is taller than this book-case”. It does describe an object, but does so by referring to a relationship between two or more objects. To give an example: garments cover some of the surface of their wearer, but garments need not have colour. Colour is therefore regarded as an “extrinsic feature” of a garment, whereas one of its features — that it covers a body — is part of its “definition” — or specification — as an object!

The comparison makes particular reference to an object’s functionality, but avoids reference to its purpose, since objects do not necessarily have a purpose in the normal — metaphysically neutral — sense of that term! For example, a bookcase has no purpose, but it serves a need — specifically my need, or that of an organisation, like a public library, which was designed and planned to hold and store manuscripts in book form.

On the other hand, a giraffe has no purpose, nor does it fulfill a human need (sic!) in the broad sense of that term. Only items of which we can say that they promote some “self-interest” and are agents on a mission are said to have a purpose (in the strict sense of that term). Whatever occurs for reasons other than human self-interest is viewed (by us) as “activated” by instinct, compulsion, destiny, some obscure entelechy, or by physical causes (e.g. a ball rolling down a hill).

Psychoanalysts have encouraged us for the past century and more to think of humans as partly activated by desires of which the actor is not necessarily aware or cognisant. Indeed, the actor may advance reasons for actions which seem irrelevant to any impartial outsider. Thus, it often seems that actions cannot be explained in terms which the general public will accept!

Some philosophers (e.g. L. Wittgenstein, Philosophical Investigations, 1953; and Gilbert Ryle, The Concept of Mind, 1949) have argued that we have become increasingly confused by our own rhetoric, that we may say things without meaning them, or that, since there is often more than one meaning attached to a word, we may be focussed on a meaning which was not intended by others! Wittgenstein even suggested that there is a cure for this malady: a more careful analysis of how each of us uses language in everyday affairs may help us avoid “mental cramps” and dilemmas.

He may have had in mind earlier philosophers who made such outlandish claims as that time is not real, or the philosopher who is uncertain of his existence and demands a logical proof that he is indeed alive, if not necessarily healthy. The trouble is that the proposed remedy — the analysis of how language is used in specific cases — is not always successful in dealing with such general philosophical problems, and hence does not resolve them. It is also true that this counter-method, which was designed to restore self-confidence in what one believes, has not been used sufficiently and systematically to have had a measurable effect!

Thus, we simply don’t know whether philosophical puzzles like “is time real?” — whatever these are or how many of these circulate — can be avoided or perhaps even cured by using the methods proposed by some philosophers — or whether such puzzles can be eliminated altogether. Language, it could be argued, is not a precision instrument at all, like a caliper, and therefore should not be used as such. Presumably one can invent, synthesize, and then prescribe a pill which will cure and medicate some mental cramps, but one cannot always persuade others to take the pills voluntarily. The fear of becoming disoriented as a result of accepting a remedial pill is often overwhelming and may block action by those who may benefit!

To return to our original problem, which was to discover the meaning of *flawed*. Used descriptively, this word refers to a dimension which runs the whole gamut from “perfect” to “imperfect”. *Flawed* would then be the name of the dimension itself. It refers to a graded set of states, a continuous series, whose opposite (antonym) could be *perfected*. One would then speak about degrees of perfection, or its opposite, the extent to which something is flawed.

The same logic applies to *faulted*. One could add that something is “greatly flawed”, but one is thereby not pointing to a new dimension, only emphasizing its position within an already recognized, existing dimension! *Greatly* — as in *greatly flawed* — adds an evaluative overtone to such a judgement. (See my earlier blog in this series on descriptive and evaluative terms.)

The Fictions We Create 5: Descriptive sentences and evaluative statements

It is clear that the term *language* has a technical meaning, but that it is also used to refer to what humans — and also some other species, e.g. monkeys — do when they “chatter”! In the first sense, language is a form of communication viewed as acts during which specific messages are passed from one individual to others.

An example often cited in the past was the “language” of bees, a field of research associated with the work of Karl von Frisch (Nobel prize, 1973) and his discovery that bees inform members of their hive of the location of honey sources by a dance. Here an individual message has content, that is, it is about something: it refers to some distinguishable happening or event which may be significant or important to communicants. Yet every message may only be a constituent of a wider language. A piercing cry may be a message — as may be a whimper — but it is not part of a language unless there are additional messages which supplement such singular cries.

A language, in short, usually features many messages and — as we have learnt over the past few hundred years following the publication of several dictionaries — each human language shows that it can grow by leaps and bounds within a relatively short period. In short, a cry can become part of a vocal language, but may not do so consistently. There may be many “cries” which can become an aspect or feature of a language, but need not do so. (For example, “ouch” is a cry of pain, but it is also a word in English.)

A human language has many features by means of which messages are passed from one individual to others of a group, from one to one or among many. Each message has some content — what it is about — but it also possesses a formal structure, which becomes part of its referential meaning. It is therefore incorrect to advertise only the semantic aspect of a term, since each message also occurs in a context which contributes to some degree to what is normally referred to as its meaning for both the sender and the recipient of the message. The prototypical and well-known example of how structure determines meaning is “the cat is on the mat” versus “the mat is on the cat”. There are many versions of this.

As already stated, language is a form of communication viewed as acts of passing messages from one individual to others of a group. But human messages are communicated not only by spoken words, but also by gestures, hand signals, etc. Indeed, many modern commentators refer to “body language” to convey that there may be additional meaning to a statement made in written or some other form, e.g., when a poet reads his work in public. Thus one may hear a comment that “Mr. T asserts ‘xyz’ but his body language tells a different story.” We read him differently than if only his words were used to interpret his meaning.

Let us now clarify two terms which may make my position clearer, namely descriptive sentences and evaluative sentences.

Descriptive sentences refer to those elements of a spoken or written language which are presented in the form of statements whose function is to assert that some particular event, x, has a general quality, q. For example, “the sky today is blue” describes x by ascribing one of many possible, suitable, likely attributes to x. Important: the letter q does not refer to an evaluative property of the event (like “beautiful”, “desirable”, “ghastly”, “horrendous”) but refers to a feature which in combination with other features makes the event unique or different from other events, including what it is made of, e.g. a horse of flesh and bones, or a horse made of wood!

Evaluative sentences, on the other hand, refer to sentences of a language which ascribe value to an event, which order or locate that event on a scale of desirability, preferability, usefulness, etc. These are all qualities which reflect the object’s value but do not add to its description. Such evaluations are invariably relative to some other, perhaps comparable, items or events. They therefore do not describe the object by reference to its so-called “defining” properties. To refer to a vase as “in the shape of a bottle with ugly decorations” is a mixed description — and in that sense is no more helpful than talking about it as being “shaped as a bottle with additional decorations”! By adding that the decorations are “ugly”, the object is placed on a scale of values which reflects something about the speaker and not the object! It is similar to asserting “I love dogs but not cats”, which states a personal preference and clearly has nothing to do with either dogs or cats: in short, being liked by me is not descriptive of either entity or object!

We therefore divide our world into (a) items, events, situations which are describable, which can be described, in contrast to (b) rating each of these on a scale which reflects our personal appreciation of and reaction to it. We customarily talk about (a) as involving objective description and (b) as involving our personal, subjective, reaction to some event.

In some sense, therefore, descriptive statements may be true or false, whereas evaluative statements only reflect the views of the speaker, namely, his or her preferences and opinions. To refer to the latter as being true or false violates the rules of language use. Such statements have no epistemic validity, since this quality of a statement cannot be tested without reference to “data”.
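A toy sketch may make the distinction vivid (this is my illustration, not the author’s; the class and function names are hypothetical). The grammatical form “x is q” looks the same in both cases, but a descriptive predicate is a function of the object alone, while an evaluative one also depends on the speaker:

```python
from dataclasses import dataclass

@dataclass
class Vase:
    shape: str
    decorated: bool

def is_bottle_shaped(x: Vase) -> bool:
    # Descriptive: checkable against the object itself, hence true or false.
    return x.shape == "bottle"

def is_ugly(x: Vase, speaker_dislikes_decoration: bool) -> bool:
    # Evaluative: the verdict varies with the speaker's preferences,
    # so it tells us about the speaker as much as about the vase.
    return x.decorated and speaker_dislikes_decoration

vase = Vase(shape="bottle", decorated=True)
print(is_bottle_shaped(vase))                     # True, for every speaker
print(is_ugly(vase, True), is_ugly(vase, False))  # True False: speaker-relative
```

The descriptive predicate has a truth value independent of who utters it; the evaluative one does not, which is the sense in which it “reflects something about the speaker and not the object”.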

Comment 1: Facts and “to factulate”

From fact to factulate; from verb to verbify. Ugly, but legitimate.

It pays to look at what modern dictionaries say about words which are in common and wide use. Once again I looked up *fact* in the reputable Merriam-Webster Dictionary (on-line edition) and found the following entry:

Fact: noun. A thing that is indisputably the case. Information used as evidence or as part of a report or news article. Synonyms: reality, deed, actuality, truth, case, circumstance.

Note that the dictionary defines *fact* by citing how the word is commonly used and also by explicitly citing some of its synonyms. The effect is to create an environment, i.e. a context, whereby each word is related to all others in the selection by indicating what choices are available on each side of the divide! It leaves the decision of what to do about the choices open to the user: the user therefore remains entirely responsible for making the correct or appropriate choice from the array of “equivalences” offered.

This matter had already been discussed in a different context more than 60 years ago by Lee Cronbach and Paul Meehl (1955) in connection with “psychological measurement” (to which I propose to return in a future article). If one does not understand the positive options offered, one can at least infer the meaning of a particular term by comparing it to what it cannot possibly mean! Whittling down a meaning by eliminating those deemed unsuitable? This seems a plausible strategy for success: if one does not know the meaning of a term in advance, it can often be guessed by eliminating the meanings one already knows to be unsuitable.

What puzzled me about the dictionary definition — but also appalled me — was the suggestion that a fact could be viewed as “part of a report or news article”! I assume the term “news article” refers to articles published in established newspapers, possibly weeklies? Which? The reputable New York Times, the Guardian, or the now ill-reputed Daily Mail (which was recently “banished” by Wikipedia for its habit of publishing unsubstantiated and unfounded “news reports” — as the UK’s Daily Mirror and News of the World had done for decades)! These are a small selection from a world-wide set of dailies.

My philosophical head also spun when I discovered that far too many of the synonyms listed in the Merriam-Webster Dictionary can be “substituted” only by changing the meaning of part of the sentence in which they occur! The substitution simply will not work, since the sense of the sentence is highly compromised — even lost — when this is done. As soon as one recognizes this to be the case, a person will withdraw the particular attempt and will substitute another synonym. I assume that there is experimental evidence to support my fantasy? What I have described is a process of extremely rapid substitution based on one’s “unconscious recognition” of what is being done.

What seems indisputable, however, is that the word *fact* — a word we all love to use(!) — gets used exclusively as a noun. If, however, it is used as a verb, is it referred to as *to factulate*? Has anyone used *fact* as a verb, on the analogy of changing the noun *verb* into the verb *to verbify*? People should feel free to do so — to create what sound like “monsters” — if we claim that people “make facts” or “shape” these from non-factual materials!

There are precedents: *water* is a noun; *to water* is a verb in wide use. Is it an alternative to “spreading or distributing water”? What are acceptable limits to doing so with any noun?

Why not “verbify”?

The Fictions We Create 4: Our Inside and the Rapidly Expanding Outside

We have come to accept that the inside world is large, although ordinary people do not have a very large vocabulary with which to report “inside” experiences. They cover this by saying “I think…”, “I feel…”, “I sense that…”. In other words, most people tend to leave the description of their “feelings” to our poets, song-writers and musicians!

We are, in fact, demonstrably more adept at describing the outside world — our common world. Some say that this is so because we live in a materialistic culture which is primarily focussed on the world around us. (Thus culture plays a “shaping” role).

However, both “domains” — the inside and the outside — are currently expanding rapidly, the latter at a faster rate than the former.* What do we do to meet our need to express and refer to these changes? We create additional (new) terms/words and expressions which mark and label different events. Here the term *creation* has four references:

(a) Inventing new sounds (or symbols which substitute for sounds) to be used routinely in an existing language. The new sound — a complex event — is then assigned an “official meaning” either by fiat or later by including it in a current (on-line?) dictionary (which is a relatively new invention! — see the footnote** below);

(b) We borrow already existing words from a foreign language (e.g. Greek ἰδέα idea “form, pattern,” from the root of ἰδεῖν idein, “to see”; Oxford English Dictionary, 2014) but import into the host-language only one of its several meanings from its original, its donor-language (see earlier articles on neolidesm);

(c) We transform an existing word which has been selected from within our home-language and “quietly” assign an additional meaning to it by “analogy”, i.e. by referring to the likeness/similarity between it and its new “reference”. In an earlier blog I have called this an “analogical spread”;

(d) We adapt an existing word in our home-language by changing its context of use. Using this method we assign new meanings to many old, well-worn words. It is sometimes the origin of what are now referred to as slang words, but not their only source; I previously referred to this as a special case of neolidesm. (See ** footnote below.)

* I have some reservations about this statement. One could argue that much of the “inside” world gets expressed in the expressive modes of popular culture, including its songs, dances and arts.

** A brief statement about dictionaries taken mostly from Wikipedia:

Dictionaries go back several thousand years, but the first monolingual dictionary written in Europe was Spanish: Sebastián de Covarrubias’ Tesoro de la lengua castellana o española, published in 1611 in Madrid, Spain.

Several attempts were made to produce a reference book to serve the current usage of English speakers, but it was not until Samuel Johnson’s A Dictionary of the English Language (1755) that a reliable English dictionary was produced. By this point dictionaries had evolved which also featured textual references for most words, with their listings arranged alphabetically rather than by topic (a previously popular form of arrangement, which meant that all animals would be grouped together, etc.). Johnson’s masterwork was the first to bring all these elements together, thereby creating the first dictionary to be published in “modern” form.

The Fictions We Create 3: Describing the Inside and Outside

Common sense distinguishes between objects that stand on the “outside” of a room or enclosure and those located “within” a room, a box, a carton, or an awkwardly configured shell/enclosure. Thus objects are invariably assumed to occupy space. Amongst their many diverse properties is that objects are located somewhere, in some place which can itself be described and conceptualized, as in “a vessel 20 leagues under the sea”! To be in space assumes therefore that one is located “within” or “outside” an enclosure.

We speak routinely about “locations” but — as we shall see — this is speaking figuratively and metaphorically. By contrast, when we discuss our feelings about an issue — e.g. about a neighbour who has just won a large prize in a lottery, or the person who called the police to report an ongoing burglary of his home, or about the doctor’s report that someone known to us has contracted HIV — we not only report the bare facts of a case, but also refer to how these events affected or influenced us as individuals — specifically, how we feel, or retrospectively felt, about what happened on a particular occasion.

Of course, we usually or normally do not receive two reports, one of a particular event and the other of our feelings about it, although on many occasions we get these “mixed up”. Thus we may say, “I’m sorry that I appear all excited and wound-up but the following happened as I walked to the Forum — Caesar was killed by Brutus, and by a host of others!”

So humans, as a rule, report both what they have experienced and how this experience may have affected them. In our culture we are also trained to be clear that reports and narratives can be both about “objective events” and about how these events influenced us “personally”! We learn therefore how to present “what has occurred out there, somewhere” and “how we (I) feel about a matter” in a clear, possibly concise, manner. The common distinction is therefore between (a) “objective” events and (b) events which “impact” our feelings and our personal reactions to such events.

We refer to these latter as “subjective reports” — and in this manner ensure that they are excluded from being “objective” or “scientific”! We assign to such events a special status. But there are exceptions, as when a person is totally focussed on events taking place “outside”, even when these are far in the past.

It would be a gross error of judgement to describe such events in “personal” terms, like the eruption of Mount Vesuvius in 79 AD which destroyed the Roman city of Pompeii. Not quite! It would be inappropriate for a historian or archaeologist to do so, but not for a writer of fiction, a novelist like Lord Lytton, or even a stray eye-witness. (Were there any surviving witnesses?) In other words, we ourselves choose whether to treat an event as objective or in a personalized manner. In summary, there are times when we have the opportunity and the “right” to choose between these two options: whether to see and present ourselves to others as impartial witnesses, or as involved participants!