This section is tricky. It certainly confuses me, although I assure you there are interesting bits in here. You might find that the posting Response to a Reddit Comment works the ideas through in a more practical way, before looking at this page - or just skip on to Neuroscience Questions. They’re interesting questions…

Still here? OK - here goes…

Engineers who think juxtapositionally sometimes find themselves at a logical impasse when discussing complex dependencies, structure within large projects, or risk exposure in planning. Each part of the reasoning they are presented with is correct in isolation, yet the conclusion is obviously exactly wrong. For some reason, people trapped in focussed attention all agree on the wrong result. Even if people were making errors because they are looking through a small “window”, they wouldn’t all make the same errors.

Here I’ll describe a type of logical error which people too biased towards focussed attention will all be liable to make in the same way, with the property that everything looks just fine locally, within each instance.

Using Symbols So Much We Forget Them

The problem comes from the way we get the elements of the problem we wish to think about. Logical thought is confined to an internal imaginary world, populated with entities that we relate and deduce things about. How do we know what to populate our imaginary world with? The cast of characters we’re going to think about either present themselves to us spontaneously while we are thinking juxtapositionally, like the terminating “shoe”, “flake” and “drop” suggest the preceding “snow” in the example above, or they are explicit givens that someone else supplies. Just as unfamiliarity with juxtapositional thinking can make us disregard internal consistency, so it can make us forget that the givens are not the reality. The reference can become the referent, the paperwork the reality, the map the territory.

I once saw someone who believed he had compiled an exhaustive list of everything that could possibly go wrong with a high availability system. Each scenario had a brief summary describing how data loss would be avoided! Obviously, forgetting that there are things we don’t know is a rich source of errors, particularly when trying to get a clear idea of users’ requirements, which are often not well thought through. The effect can even be subtle and pernicious, causing firms to be wrong-footed when seemingly irrelevant upstart competitors take their markets. But there’s an effect that goes beyond getting the right cast of characters, and it influences how we see the relationships between them.

To demonstrate this, consider the famous Monty Hall Problem. A gameshow contestant is presented with three doors. One conceals a (desirable) car, the other two conceal (undesirable) goats. The contestant picks any of the doors, and then the host opens one of the other two to reveal a goat. The contestant now has the opportunity to stick with their original choice, or switch to the other unopened door. Should the contestant stick, switch, or does it make no difference?

Most people think it makes no difference, and there has been plenty of controversy about it. But in fact the contestant is twice as likely to win the car by switching. The first pick is right only one time in three. Whenever it is wrong - two times in three - the host’s elimination of a goat means the other unopened door must conceal the car, so switching wins. See the linked article for much discussion. You can also check it with a program.
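Indeed, a few lines of Python will do. This is only a simulation sketch, but the observed frequencies come out near 1/3 for sticking and 2/3 for switching:

```python
import random

def play(switch):
    """Play one round of the Monty Hall game; return True if the car is won."""
    car = random.randrange(3)
    pick = random.randrange(3)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in range(3) if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

trials = 100_000
stick = sum(play(switch=False) for _ in range(trials)) / trials
swap = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stick: {stick:.3f}  switch: {swap:.3f}")  # roughly 0.333 vs 0.667
```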

I argue that what confuses people is their habituation to thinking of the first statement of the problem (when they pick the first door) as the situation, rather than a partial representation of an external reality. So when the host reveals more information by eliminating a goat, they don’t think of the reality in their head as having changed, because they haven’t moved any symbols around. Therefore, they reason, the situation cannot have changed. People have exactly the same problem, of seeing how more information can change the predictions, when they’re arguing about Bayesian statistics.

Excess System Boundaries Conceal Commonality

Here’s another way to think about pulling things into an internal context and then making errors about relationships. In Laws of Form, Spencer-Brown describes the primary arithmetic, which can be used to do logic by drawing circles next to and around each other. The circles are actually denoted by right-angle symbols; the rules are summarized in a Wikipedia article here, and how to do logic with them is described here. The question is: when we drag a representation of something across the boundary into our inner world, does it cross a distinction, as Spencer-Brown calls his circles? Does this mean that a subsystem, imported as a contained and distinguished whole, has a hidden distinction drawn around it so that it is itself, and not some bits of other entities? In the primary arithmetic, additional distinctions serve as negations. If we have hidden distinctions drawn around things which are artifacts of their importation, then perhaps their presence fouls up our application of logic. Here’s a possible consequence of this peculiar idea:
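To make the primary arithmetic concrete, here is a small Python sketch. It substitutes parentheses for Spencer-Brown’s right-angle marks (an adaptation for plain text, not his notation), reading juxtaposition as OR and enclosure as NOT, so that the two reduction rules - calling, ()() = (), and crossing, (()) = void - fall out of ordinary evaluation:

```python
def value(expr):
    """Evaluate a primary-arithmetic expression written with parentheses
    as marks. Convention: mark = True, void = False; juxtaposition of
    expressions reads as OR, and each enclosing mark negates its contents."""
    result = False          # the empty (unmarked) state
    depth, start = 0, 0
    for i, ch in enumerate(expr):
        if ch == '(':
            if depth == 0:
                start = i + 1   # contents of a top-level mark begin here
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth == 0:
                # A mark negates the value of whatever it encloses.
                result = result or not value(expr[start:i])
    return result

print(value("()"))      # the mark itself: True
print(value("(())"))    # crossing: (()) reduces to the void, False
print(value("()()"))    # calling: ()() reduces to (), True
```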

The basic logical operations that we use in our reasoning are NOT, OR and AND. That’s how we frame and relate our choices in planning meetings as well as in code. Oddly enough, foolish Nature doesn’t realize that AND is a basic logical operation, and incorrectly supplies NAND as more primitive - implementable with fewer layers of silicon, as discussed by Wikipedia here. We talk about NAND as if it is a negated AND. That is, in the primary algebra we talk about it as if it’s like the term on the left, when in fact it’s what the primary algebra says it is - the term on the right! This implies that if we want to reason efficiently (at least with respect to the natural world) in our discourse, we should be constructing sentences that don’t contain a AND b, but instead a different relationship, meaning NOT a OR NOT b. The first kind of sentence lists the special cases where the result is allowed relative to a default background disallowance; in the second kind, disallowance is the special case. It’s like an Escher figure-and-ground drawing, where the black and white figures reverse at the boundary of the inner reasoning world.
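The reversal can be seen in code. Below is a minimal sketch in Python, with truth functions standing in for gates: NAND taken as the lone primitive, the familiar NOT, AND and OR built from it alone, and a check over all inputs that NAND really is the same relationship as NOT a OR NOT b:

```python
def nand(a, b):
    """The primitive gate - what Nature supplies most cheaply in silicon."""
    return not (a and b)

# Everything else built from NAND alone.
def NOT(a):    return nand(a, a)
def AND(a, b): return nand(nand(a, b), nand(a, b))   # a NAND b, negated again
def OR(a, b):  return nand(nand(a, a), nand(b, b))   # De Morgan: NOT(NOT a AND NOT b)

# NAND *is* "NOT a OR NOT b" - the second kind of sentence in the text.
for a in (False, True):
    for b in (False, True):
        assert nand(a, b) == ((not a) or (not b))
```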

Perhaps it’s not really necessary to worry that our logic may be littered with unnecessary negations that we have to work hard to cancel out, and which sometimes conspire to lead us astray. But remember Dijkstra, commenting on proofs whose authors, by chaining one-directional implications instead of recognizing equivalences, increase the size of their work by factors of 2, 4 or more. Dijkstra’s insight contains strong resonances of “double trouble”.

I think it would be much safer if people took care to look up and smell the juxtapositional flowers from time to time, maintaining the sanity check of the kind of thinking that assumes knowledge is imperfect and does inference below the level of thought. The kind of focussed, procedural attention that is mistakenly equated with “logical” thought should be seen as suspect at the level of logical conjunctions.

We can also think of false dilemmas (assuming that the enemy of my enemy is my friend, neglecting the possibility of a complete grouch who hates everyone), or the reasoning that leads to cycles of escalation, for example Prohibition or the Arms Race.

This does not mean surrendering ourselves to intuitionism, but it does mean that we should sanity check our reasoning when we are in a frame of mind to do it juxtapositionally, and vice versa. We all know how we can see more possibilities once we have slept on something. I suspect we then have the opportunity to dissolve unnecessary subsystem boundaries, so that the algebra of possibilities becomes simpler and we see the structure of the situation. If one cognitive mode was enough, we probably wouldn’t have two.

Circumstantial Evidence

I wonder about a specific problem that seems to exist on every generation of Microsoft operating systems. (I’m not trolling - I’ve a point to make here…)

Microsoft have always emphasized accessibility for ordinary people, while to hackers the interface - and what lies below it - are frustrating. I remember once I was introducing someone to the UNIX command line, and showed him a simple pipeline. He boggled at the glyphs and cried, “How on earth am I expected to remember something like that!”

For a person who’s used to imagining all the bits of a pipeline at once, because they have enough working memory, it’s very easy to fiddle with the bits and make it up as they go. They don’t remember it. Well, unless it’s cpio -ivucldmB, which is a rhythm, baby. To them that’s easy, but without that easy structural crib, the various clickings with a GUI are random madness.

What the Microsoft GUIs do so well is provide quite a lot of computational functionality to people who can’t rely on juxtapositional thinking because of their background social stress levels. So it really is an effective, marketing-led organization catering to focussed attention. The trouble is that focussed attention values propagate through social stress, so the logical effects of using focussed attention without its juxtapositional context end up strongly represented in the design culture.

When things are installed on Microsoft platforms, there is always, always, some entry which has to be made in something else to assert that the first thing exists. It needs a symbol; something must be done. In the UNIX or database worlds, that would be called a denormalization - storing data in more than one place, at the risk of damaging referential integrity. Instead, anything that wants to see if something is installed just looks where it would be. There is none of that “There’s no use your pointing at it, the computer says it isn’t there!” business.
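As a toy illustration - the registry dictionary and the /usr/local/bin path below are hypothetical stand-ins, not any real system’s API - here is the difference between consulting a separate record of what is installed and simply looking where the thing would be:

```python
from pathlib import Path

# A separate record of "what is installed" - the denormalization.
REGISTRY = {}

def installed_denormalized(name):
    # Trusts the registry entry, which can drift out of step with reality:
    # "the computer says it isn't there" even when the files plainly exist.
    return REGISTRY.get(name, False)

def installed_normalized(name, bindir="/usr/local/bin"):
    # Looks where the thing would actually be. There is only one copy of
    # the truth, so it cannot contradict itself.
    return (Path(bindir) / name).exists()
```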

And we see exactly that kind of problem, only on Microsoft platforms, where we can’t uninstall because it’s not installed, but we can’t install because it’s already installed. Normalized UNIX systems just don’t do this. Can it be that the subtle extra level of symbolization has crept into all the thinking within a corporate culture, working from the outside commercial environment inwards, to create unintended reflections of an unnecessary complexity within the engineering culture?

The idea doesn’t explain every wrong-way-round mistake, mind you. Conventional current flow, where the arrows on circuit diagrams have the electrons going round the wrong way, is just bad luck, because people guessed wrong before the particles were discovered or materials got smart enough to tell the difference. But we know that CCF is wrong, and we mustn’t use it when trying to understand what happens inside silicon chips, or we will surely go mad. The reluctance to draw diagrams sensibly now that we know better, because “we’ve always done it wrong”, might well be laid at the door of social stress addiction, however.

Those who enjoy this kind of thing might also enjoy To Dissect a Mockingbird.

Next: Neuroscience Questions