Embedded Figures Test - Tracking Confirmed

I’ve received a very encouraging email from an academic psychologist who is kind enough to have patience with my unusual thread of research. It confirms that the distinct signature which has emerged in the embedded figures test results is also found in other tests of cognitive flexibility, fulfilling the strong requirement described in my previous posting, Embedded Figures Test - Distinct Signature Emerges. Following the etiquette of personal correspondence I’ll just quote the meat of it:

That’s really interesting! We are actually resubmitting a paper finding the exact same best performers/worst performers dichotomy with stress in a set of college students. Our studies have focused on young volunteers, generally. Also, I like the spatial aspect of your task. Whenever we tried spatial tasks, I suspected our lack of findings are due to power issues. Your evidence supports this. The one obvious weakness is, as you point out, the lack of control of testing conditions, but so long as these do not differ systematically in some way, then you can overcome it with statistical power.

That’s very encouraging! I don’t suppose there’s a preprint on arXiv or similar is there?…

Not yet - but remind me and I’ll be glad to send one once I know I have a final version!

I shall certainly be looking out for the preprint, and will post details as soon as possible!

So not only does the EFT track other tests of cognitive flexibility as modulated by stress; so long as it is applied to big enough groups, it also reproduces the different responses seen at different levels of performance. It also predicts a similar signature in which the subjects’ age modulates the score, but amongst the worst performing third, while stress affects the best performing third. It looks like I’m actually ahead of the game on this one!

A useful exercise now will be to go and do some stats on the raw data and try to make a statement about how big is big enough. How many people need to be in a work group before we can apply the EFT, take the best performing third, and predict that they are under occupational stress if their mean score is > 2.5s?
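Pending that proper analysis, here is the shape of the exercise as a Monte Carlo sketch in Python (in the spirit of EftAnalyzer.py). All the distribution parameters below are illustrative assumptions, not values taken from the data:

```python
import random

random.seed(1)  # reproducible sketch

def best_third_mean(scores):
    """Mean score of the best performing (fastest) third of a group."""
    s = sorted(scores)
    k = max(1, len(s) // 3)
    return sum(s[:k]) / k

def detection_rate(group_size, mean_s, sd_s, threshold_s=2.5, trials=2000):
    """Estimate how often the 'best third averages over threshold_s seconds'
    rule would flag a simulated group of the given size, assuming (purely
    for illustration) normally distributed scores."""
    hits = 0
    for _ in range(trials):
        scores = [random.gauss(mean_s, sd_s) for _ in range(group_size)]
        if best_third_mean(scores) > threshold_s:
            hits += 1
    return hits / trials
```

Comparing, say, detection_rate(12, 3.0, 0.3) for an assumed stressed group against detection_rate(12, 2.4, 0.3) for an assumed unstressed one, and sweeping group_size, would give a first feel for how big is big enough.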

At this point it would be worth saying something about why I selected the EFT as a potential test of cognitive flexibility that would be repeatable per individual and automatable. This will put some of the important ideas I’m keen to convey in this blog into a concrete form.

The test is normally thought of as measuring a quality called field (in)dependence. The idea is that some people tend to see the parts, and so can easily spot the one part they are looking for, while others see the whole, and so find it hard to spot a specific figure. This idea is then extended, by some sort of loose analogy, to poor scorers being “sociable” because they are somehow more conscious of the group of humans around them, and better scorers being “misfits” - and furthermore to better scorers being more able at natural sciences because they are natural reductionists. In the semi-scientific world of recruitment psychological testing, better scores are supposed to identify misfits who paradoxically have good leadership potential.

I first encountered the test in the recruitment context, and it seemed to me that when I did it, I looked at the whole and allowed the part I wanted to pop out at me. It was a test of my visual/spatial parallel processing capacity, which is strongly related to the capacity for juxtaposition of complex elements which I emphasise as important in complex design work, and hence to cognitive flexibility. As for all the literature discussing field (in)dependence, I didn’t believe a word of it, and the proof is now available. If the quality of field (in)dependence were a personal trait as believed, then scores would be constant per individual, and not modulated by occupational stress as I have shown they are.

So why are poorer scorers thought of (and indeed observed to be) more social, and better scorers observed to be better at natural sciences? I believe the answer lies in the background psychosocial stress maintained in most cultures, which has a neurochemically addictive component. Many people are (without realizing it) hooked on performing low level stressful social exchanges, monitoring each other’s compliance with arbitrary social mores, worrying about it, and so maintaining a “stress economy”. People who participate in this are seen to be more social because non-participants are unconsciously resented for not exchanging “stress hits” with others. Unfortunately participation in the stress economy renders them less able to do complex work, and in extreme cases can lead to perverse group behaviour. Groups suffering from this effect are not aware of it, because the changes to brain chemistry are similar to those found in (for example) cocaine addiction - which is known to produce unawareness of problems and unjustified feelings of complacency. Perhaps the poorest performing third, who are not made any worse by occupational stress, are already weakened in terms of cognitive flexibility because the stress they don’t get at work, they find somewhere else!

As for natural sciences, the belief that reductionism is everything is mainly found amongst people who don’t do them! In reality nature is doing everything at once, and the skill which psychologist Simon Baron-Cohen calls systematizing involves seeing the key relationships in multiple phenomena occurring in juxtaposition.

The important idea to take away from all of this, then, is that being good at complex work is neither reductionistic nor a property fixed at birth. It’s something anyone can get better at, by learning how to reduce their psychosocial stress to the point where they can use the full faculties they were born with. When it comes to groups, the amount of stress (bad) and the quality of debate (good) within a group are amplified by re-reflection within the group. Initiatives to reduce background psychosocial stress such as I describe in the Introduction can bring runaway benefits to a group even as they bring only incremental benefits to each member of it. Beware though of The Dreaded Jungian Backlash - the buildup of unconscious resentment directed against whole groups who drop out of the stress economy. The best counter to this effect is to make everyone involved - particularly the senior management of work organizations - explicitly conscious of the effect. The Backlash only presents a danger when people do not understand the unconscious origins of such resentment, which is directed against teams enjoying improved performance, and which (as I have described) can involve clerical and janitorial workers who are in no way competing with the improving team.

Neuroscience, All Postings, Stress Addiction

Embedded Figures Test - Distinct Signature Emerges

This is an update to a previous post reporting some very interesting results from the Embedded Figures Test, now that nearly 500 results are in. A specific signature has emerged, which (if it remains under more controlled conditions) would enable a clear comparison with existing tests of cognitive flexibility.

To recap: The Neuroscience page describes lab tests which show how cognitive flexibility is adversely affected by even slight stress, and the Implications for Software Engineers page discusses the need for exactly this kind of flexibility if we are to juxtapose multiple considerations in our minds and be good at programming - or any other kind of creative work.

The problems with most tests of cognitive flexibility are that they are costly to administer and they are not repeatable per subject. Once a subject has seen a given test it won’t be a puzzle again, so we can’t apply it once, alter the conditions and try it again. To apply this stuff in industrial contexts, some way to objectively identify high value initiatives quickly and efficiently would be really good.

The EFT resembles some known tests of cognitive flexibility, is repeatable per individual, and can be automated. So if it does vary with stress then it could be very useful. Of course, the experiment on this blog is only indicative - because the self-assessment of stress is quite simplistic and the test conditions are not closely controlled, it can’t produce definitive answers. For one thing, if the mouse and screen that people are using are very different, that could easily distort the results. But if it looks promising, it might give professional psychologists an incentive to look at it in more controlled conditions. Hopefully the large number of results now collected will average out things like the responsiveness of mice.

Within these limits, the results so far look quite promising - and a specific signature has emerged. As before, you can download the raw data eft11oct2009.txt and the analysis program EftAnalyzer.py. On my Mac at least, I simply downloaded and installed wxPython, and the program ran from the command line. Then use File/Open… to select the data file. The graphs here are screenshots from Mac OS X. You can also grab the data, program, and copies of all 24 graphs I’m currently drawing in one download: eft11oct2009-all.tar.gz.

There are 499 contributions in the database. 438 of those pass validation as sensible. Comparing with the numbers in the previous post it’s striking how the larger sample has scaled in proportion. It’s still a sample strongly biased towards male geeks - 391 males and 47 females, 270 geekly occupations. While there’s no evidence that people who feel nauseous when bored have EFT results which are different to those who don’t, I’m amazed that 56% of respondents experience the nausea effect! This may be something worth looking into further. (I asked about it because nausea is a side effect reported by people taking dopamine raising drugs for Parkinson’s disease, dopamine is raised by stress, and I suspect that groups become habituated to raising dopamine by subjecting themselves to miserable, boring meetings.) Here are the numbers:

Most respondents scored between 2000 and 3999 milliseconds per figure, but there is a “long tail”:

There’s also a good spread of self-assessed occupational stress, slightly biased towards the unstressed side of the survey questions. (The positive or negative wording of the questions is mixed to discourage people from just clicking down a vertical line. A strong unstressed response gets 2 “chill points”, and a strong stressed response gets -2 “chill points”. Weaker responses get 1 or -1 “chill points”.):
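For concreteness, here is a small Python sketch of that scoring scheme. The response labels are my own stand-ins - the survey’s actual wording differs:

```python
def chill_points(response, positively_worded):
    """Score one survey answer as described above. A strong unstressed
    response is worth 2 chill points, a strong stressed response -2,
    and weaker responses 1 or -1."""
    scale = {"strongly agree": 2, "agree": 1,
             "disagree": -1, "strongly disagree": -2}
    points = scale[response]
    # For a negatively worded question, agreement indicates stress,
    # so the sign flips.
    return points if positively_worded else -points
```

Summing chill_points over all of a respondent’s answers gives their position on the chill-points axis of the graphs.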

It would be nice to have a wider spread of respondent age, but there are many in their 20s, 30s and 40s, over 20 in their 50s and a few in their 60s, so the data does represent a good spread:

An important graph in this analysis plots the average age of people in each band of self-reported stress. There’s no correlation at all, which is important because later we’ll see correlations between age and results, and stress and results. If (say) older people were reporting more stress, then we couldn’t talk about both effects - either age or stress might be the important factor:

The relationship between age and score on the test is very interesting. Overall, there is a small correlation with time taken to see the figures tending to increase as the respondents’ age increases:

The surprise comes when we look in more detail at the best, central and worst scores within each age band. The best performing third take the same time whatever their age. If anything, the best performers actually get a bit better at it as they age (although there are few data points in the older groups, and we might imagine that senior citizens who are reading blogs and doing cognitive tests are the kind of people who keep themselves alert):

Age makes no difference at all to the performance of the centrally performing group in each age range (there are fewer data points because the first remainder after dividing each age band into three is assigned to the “best”, the second remainder is assigned to the “worst”, and so the “central” group ends up the smallest):
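For anyone following along with the analysis, here is one way to express that remainder rule in Python (a sketch of the rule as described, not necessarily the exact code in EftAnalyzer.py):

```python
def split_band(scores):
    """Split one age band's scores into best/central/worst thirds.
    The first remainder goes to 'best' and the second to 'worst', so
    the 'central' group is never larger than the other two."""
    s = sorted(scores)  # fastest (best) times first
    base, rem = divmod(len(s), 3)
    n_best = base + (1 if rem >= 1 else 0)
    n_worst = base + (1 if rem == 2 else 0)
    best = s[:n_best]
    central = s[n_best:len(s) - n_worst]
    worst = s[len(s) - n_worst:]
    return best, central, worst
```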

Now compare with the worst performing third. The worst performers show a clear increase in time taken to see the figures as they age, which is not found in the best and central performers. This is part of what I’m calling the “signature” - if existing tests of cognitive flexibility show a similar drop in performance with age amongst the worst performing third only, then that’s a good argument that the simple test tracks the more complex and less repeatable ones:

A similar effect occurs in the variation of response time with self-reported stress - and this is the meat of the experiment. An important difference is that with age it is the worst performers who get much worse, while with stress it is the best performers who perform less well:

The central third spread out at around the same point (-5 Chill Points, or mild stress), but they also spread out if they report a very unstressful environment (10 Chill Points or more):

Meanwhile the worst performing third in each stress band is all over the place. Perhaps some people are just really bad at this, and the amount of stress they are under doesn’t make any difference at all:

So this is the other half of the “signature”. The test shows a reduction in performance amongst the best performing third under stress. Expressed predictively: if we were to administer the test to a work group and look at the numbers associated with the best performing third, we could predict that they are under occupational stress if their scores were mostly over 2.5 seconds.

There are a couple of other interesting graphs. A few respondents reported scores before and after stress reducing exercises, and indeed the scores do seem to be improved - except for the person who did 8 stress reduction exercises, which perhaps is quite stressful in itself!
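Restated as code, the predictive rule is tiny. This Python sketch just encodes the sentence above:

```python
def group_under_stress(scores_ms, threshold_ms=2500):
    """The predictive rule: flag a work group as under occupational
    stress if its best performing third averages more than
    threshold_ms milliseconds per figure."""
    s = sorted(scores_ms)
    k = max(1, len(s) // 3)
    return sum(s[:k]) / k > threshold_ms
```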

There was a surprise (at least to me) in the results for people who reported use of psychotropic drugs. Nothing makes much difference except the peak for alcohol users, which is shifted to the right compared with everyone else, and with the overall frequency distribution above. There’s also a bit of a rightward shift for marijuana users, but it isn’t as pronounced. Can it really be that alcohol use has a general effect on a person’s cognitive performance, irrespective of whether they could pass a breathalyzer test at that precise moment?

So the next thing to do is see if I can interest professional psychologists in the EFT as a repeatable and low cost alternative to existing tests of cognitive flexibility. If the EFT signature - the worst performing third getting worse with age, and the best performing third getting worse with stress - is also found in existing tests, then it might be quite easy to move this kind of work into an industrial, field context in a quantifiable way, and then start looking at things like fault report counts and schedule reliability as EFT results change.

Neuroscience, All Postings

The Benefits of Low Level Lighting

Few people choose to fill their homes with bright fluorescent lighting. Perhaps in the kitchen or utility room where bright lighting is helpful for the kinds of things they do in those rooms, but not in the living room, study or bedroom, where they wish to feel comfortable, relaxed and secure.

On the pages Neuroscience and Implications for Software Engineers I describe how feeling comfortable, relaxed and secure - rather than stressed and anxious - enables us to use the faculties we need to be good at programming.

So logically, we should light programming shops in the same way that we light our studies at home. I believe that commercial organizations drift into doing exactly the wrong thing because raising stress and anxiety has an insidious and previously unsuspected allure. The neurochemical response to stress - raised dopamine and norepinephrine - is similar to the response to addictive drugs!

I’ve recently been working in a shop that’s good in several ways, and one of their good practices is low level lighting in several areas where programmers have their desks. Some of the old-timers have told me that the lighting is an accident of history, but I don’t care how it got that way. It’s still worth sharing!

It’s a bit tricky sharing a lighting level - darned modern cameras :-) So in the two photos below, notice how strong the illumination seems where the lighting shines directly on the walls. And remember I took them to make a point, not as works of art. So they’re “warts and all”, and therefore I’ve taken care to avoid catching anything identifiable in them. The areas used for greeting customers are much smarter:

Compare with these two shots of the kitchen area, where natural daylight can be seen coming through the windows:

The low level lighting brings several interacting benefits. The sense of cosiness, similar to my living room at home, is certainly there. Beyond that, there’s an improved sense of privacy. This is more of a perceptual illusion than a real effect, because we are all as visible to each other as in any open plan environment. But it feels more private, and that’s what matters for reducing stress and anxiety and so enabling juxtapositional thinking.

There’s also an effect similar to being in a library. People respect the sense of quiet and cosiness. They keep their voices down, and tend to go elsewhere to use their phones. There’s a nice kitchen they can go in for a chat. Not all noises are the same, and we know that noise containing intelligible data (even if it’s not intelligent) is the worst kind for breaking our concentration and preventing us seeing complex pictures. So if we must suffer open plan, anything which encourages people to keep their voices down is good. The office depicted in the photos above has a babble annoyance level which is nearly as good as a two person office. It’s remarkable. I would never have guessed this effect would be so pronounced if I hadn’t had the chance to observe it happening.

Then there’s eye strain. We all spend our days looking at screens, which emit their own light. They don’t need external illumination like pieces of paper do. In fact, the more environmental light there is, the harder we find it to see the screens. The blatant extreme case of that is glare when the angles are wrong, but flicker has a more serious effect on its victims, wearing them down over time. The flicker from the kind of cheapskate tubes found in many commercial shops is bad enough, but when it beats with displays which are refreshing at nearly the same frequency it can be (almost) visible to most people. There are plenty of people who think that computer displays emit noxious rays which give them headaches when actually it’s that beat frequency flickering away at the edge of their awareness that’s doing them in. Simply explain what’s happening to them and they stop being so annoyed by it - try it! Obviously, the less fluorescent light there is in an office, the less these effects will occur.
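The arithmetic behind that beat is simply the difference of the two flicker frequencies. A toy Python illustration (the figures are for illustration, not measurements from any particular office):

```python
def beat_frequency(f_a_hz, f_b_hz):
    """Two nearly matched flicker sources drift in and out of phase at
    the difference of their frequencies."""
    return abs(f_a_hz - f_b_hz)

# A display refreshing at 60 Hz under lighting flickering at 59 Hz
# produces a 1 Hz pulsation - slow enough to nag at the edge of
# awareness, while exactly matched sources (beat of 0 Hz) do not.
```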

The whole flicker thing has another side to it. Many of the best programmers are the sort of people who are classified as being on the “autistic spectrum”. Now I’m very suspicious about such classifications. As I explain on the page Other Applications, I believe we can understand the name-calling as unconscious resentment by those trapped in the psychosocial stress/dopamine economy towards those who don’t participate and exchange fixes. The greater sensitivity of the non-participants can be understood as normal sensitivity, not blunted by excess dopamine constantly sloshing around their brains. But whatever the cause, many of the best programmers can see the 50 or 60 Hz flicker of fluorescent lighting without any monitors beating away nearly in sync, and it’s extremely annoying to them. Reducing it improves their ability to concentrate. Simple.

The final benefit I’ll identify is economic. Many organizations operate acres of floor space, all lit with tubes. Take half of the tubes away and the lighting bill is cut in half.

I’ve given several good reasons for reducing the light levels of programming shops. Some readers may notice there’s an argument I’ve not made. There are plenty of anecdotal accounts of full spectrum lighting improving the performance of schoolchildren. The trouble with this stuff is that it is very anecdotal - the more I look for substantial original research on it, the vaguer it becomes, and the more associated with interested parties. The remaining research doesn’t do a good job of excluding multiple sources of variance, or “confounds” as they’re called in the jargon. The report Full-spectrum fluorescent lighting: a review of its effects on physiology and health by McColl and Veitch, funded by the National Research Council of Canada, does a good job of exploring the lack of solid evidence in this area. So although you might choose to experiment with full spectrum lighting, bear in mind that it is more expensive, and the evidence for its benefits is not yet strong.

Experimenting with reduced light levels is a different matter. It has a negative cost. So try it. Persuade your boss to take away some of the tubes (don’t make things too dark - make sure there is still enough illumination to be safe), and see if your working environment becomes cosier, more private, quieter and more relaxing. See if your ability to concentrate improves, and over a release cycle see if that has any effect on the accuracy of your estimates, the necessary sufficiency of your code, and your bug reports.

Neuroscience, All Postings, ADHD, Stress Addiction, Programming

Process Degradation and Ontological Drift

In the Introduction I describe how the cognitive effects of stress - particularly as studied in neuroscience labs and military contexts - degrade the very faculties we need to be good at programming. I also propose that since the neurochemical effects of stress - particularly the release of dopamine and norepinephrine - are so similar to the effects of addictive drugs, we might expect whole groups to become addicted to maintaining a locally comfortable level of psychosocial stress. Because the members of the group don’t realize they are stuck in an addictive loop and are experiencing both cognitive impairment and gratification of their addictive need, they have little chance of gaining insight and instead will tend to rationalize their stress-raising behaviours as inevitable or desirable, but won’t be able to say why.

No part of this is terribly radical, but taken together it certainly is. For example, it implies that to really understand what is going on in a workaday engineering situation, we have to look outside the context and think about the whole culture in an unusual way. Like the old puzzle about connecting 9 dots with 4 connected and unbroken lines:

We have to “think outside the box” - but we have to really expand our contextual awareness and not just say we are doing so. The trouble is, we need access to the very faculties that stress takes away to be able to think juxtapositionally and tell the difference!

So in this post I’m going to look at a real programming task, and talk about how people often think about such tasks. I’ll look at the opportunities to do a good and interesting job that we miss all too often, how our perception of work can change to become a mockery of itself and how process can amplify this change. I’ll also discuss how this change distorts our appreciation of our tools, and touch (just a little) on the profound social implications outside engineering work.

The Task

I recently had to apply a bug fix across a large codebase (around 1.1 MLOC in the relevant part) implementing some complex business logic for several large telecoms customers. Some colleagues had successfully tracked down a problem which cropped up rarely, during an even rarer failover event in an underlying Sybase database. (A failover happens when one copy of a database becomes unavailable for any reason, so Sybase transparently switches to a hot backup.) Like so many others, the problem was rooted in the object-relational impedance mismatch. To summarize, the object and relational models don’t play well together. In this case we had some classes to handle the database operations and nicely encapsulate their state. The biggest lump of state was the result sets that some operations could return. Errors were handled by C++ exceptions, but when an exception was thrown, it could leave a result set waiting in the database connection encapsulated by the classes. It was the waiting result sets which blocked the failovers and prevented them performing as expected. The solution was to make sure that every operation - and every catch - purged any result set that might be waiting. Once understood, the fix was easily stated. The problem simply came from the number of cases which had to be considered.

Fortunately the code was well-layered. Database operations were rigorously contained within a specific set of classes, clearly separated from higher level business logic. Even better, they were all wrapped in SQLAPI++, which provides a (somewhat) platform independent collection of classes to wrap database operations. So every possible cause of problems could be found by looking for uses of the SQLAPI++ method SACommand::Execute(). So, as we first understood the fix, I had to make sure that each such call was embedded in a pattern:


try
{
    // Do stuff
}
catch(SAException &e)
{
    // Do stuff
}

Of course, it was important to get it right in every case. The edit would involve checking many cases, each with their own variations, spread over (I later determined) 133 different source files. This must be done without introducing more errors, including source code corruption caused by simple finger trouble while doing such a large task!

Perceiving The Task

Just as I was getting interested in the challenge of this source manipulation problem, someone made a comment recognizing that it was boring. For a moment I was amazed - it really hadn’t occurred to me to see it as a boring problem rather than a difficult one! I realized that the atypical perception was on my side, and thinking that observation through led to some interesting realizations.

Firstly, I’ve been doing this kind of thing for about 30 years now. I know that there’s no magic fix. No matter what technologies we use to help us work at convenient levels of abstraction and semi-automate the tasks, if we are going to own and maintain large and sophisticated programs, we will always come up against this kind of problem. Dealing with them effectively is an intellectual challenge for grownups.

Secondly, I’ve never been seduced by the rationalization of laziness that makes people speak of “mere code”. This is the idea that highly intelligent people are supposed to occupy themselves with grander things, and find “mere code” to be beneath them. I’ve always been suspicious of this. Code is the product. It doesn’t matter if your code undergoes mechanical translation, perhaps from some IDL or something to C, thence to assembly language and finally to opcodes. In that case the IDL is your product. Dismissing it as “mere code” and wandering off to do something else is like a cook who sneers at “mere food” and prefers to criticize the decor of the kitchen instead. It’s thoroughly misguided. Coping with large codebases requires skill, experience and a challenging mixture of rigour and ability to hold temporary partial hypotheses - and keep in mind which is which! It’s difficult, learned on the job, and cannot be performed in knee-jerk fashion after skimming “Codebases for Dummies” in a lunchbreak. The simple fact is, the presence of at least a proportion of people who have mastered these skills is necessary to the success of shops doing real, grownup work.

Thirdly, I’m well aware of the danger of boredom, and recognize it as one of the things which must be managed in order to do a good job - not just accepted as an unavoidable misery of life, leading to certain and inevitable failure. Effective attention can be maintained in two regimes, with a danger area in the middle:

It’s easy to maintain careful attention on an interesting task, and it’s also easy to perform a strictly mechanical task (like lawn mowing or ironing), particularly with an MP3 player available. The problem is the tedious and repetitive tasks in the middle: they require some attention, so we cannot disengage, yet there is nothing to keep us engaged. So faced with a task which initially looks like it sits up on the peak of the boredom curve, I know that the way to do it well is to decompose it into two tasks which sit at the ends. That means investigating and thinking about it until I can structure the remaining parts in a very mechanical way. Then I can pick some matching music and get a rhythm going with my fingers!

Performing The Task

It was pretty easy to find all instances of string Execute in the source tree and examine them. I quickly found there were more instances of Execute than just calls to the method of class SACommand. On the other hand, my exhaustive examination of all Executes quickly showed that all the ones that mattered declared an SACommand in the same file. (I admit I’m simplifying here to keep the posting on topic - I have a point to make.)

It was then very easy to constrain my scans to the relevant files, by using a little shell one-liner:

$ vi `find . -print | while read fname
> do
> if test -f "$fname"
> then
> if grep SACommand "$fname" | grep Execute > /dev/null
> then
> echo "$fname"
> fi
> fi
> done`

If you don’t know shell, that says:

Edit a list of files comprising:
List all files below current working directory. For each file:
If it’s a regular file (not a directory or anything else):
If the file contains SACommand and Execute:
Include it.
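If you’d rather have the same search in a portable script form - say, to run it where the shell or its tools differ - here is a Python sketch that walks the tree and applies the same two-substring filter:

```python
import os

def files_mentioning(root, *needles):
    """Yield regular files under root whose text contains every needle -
    the same filter as the shell pipeline above."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # unreadable entries are skipped, as with find
            if all(needle in text for needle in needles):
                yield path
```

Here files_mentioning('.', 'SACommand', 'Execute') yields the same list the backquoted pipeline feeds to vi.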

The thing is, if we are willing to use little shell one-liners like that, we can very easily customize searches of any complexity we need, as we need them. That’s why many of us stick with the “old fashioned” command line tools, even in this era of eye-candy encrusted Integrated Development Environments. Those tools can do some very clever things with a click of the mouse (and tools like Eclipse are very clever), but they are limited to the functionality imagined by their designers, rather than what the user needs at any moment. The teaching that a Jedi needs only his light saber contains a deep truth, even though it has come in for some ridicule in recent times:

(The two twerps in that video sound like they come from near my own home town. It’s not Nottingham, but maybe Mansfield. English English accents really are that fine-grained.)

You probably see where I’m going with this: I reckon surrendering to the inevitability of boredom (and so making errors) is part of a pattern which includes becoming dependent on (and so constrained by) the functionality available in prerolled tools. It’s about lowered aspirations, increased passivity and reactiveness. These perceptual and behavioural shifts are in keeping with the effects of stress, as I discuss on the page Neuroscience.

While I was writing this posting, I had an idea and tried a small experiment. I used the same one-liner, on the same codebase, but on Apple’s Mac OS X instead of Sun’s Solaris. It gagged, complaining:

VIM - Vi IMproved 6.2 (2003 Jun 1, compiled Mar  1 2006 20:29:11)
Too many edit arguments: "-"
More info with: "vim -h"

I guess that’s why Sun can still sell stuff to grownups!

By the time I’d finished my exhaustive scan I’d divided all use cases into simple cases as above, and a couple of variant patterns where it was appropriate to put the result set clearing snippet elsewhere in an inheritance hierarchy containing the Execute. I got my corrected snippet ready on the clipboard (I was using a Windows XP box running PuTTY as a terminal), and I thought I was ready…

I quickly discovered a snag: formatting problems had crept into the codebase over time. The indentation wasn’t very rigorous, and even more annoying, there was a mixture of spaces and tabs used to create the indentation. This didn’t matter when editing a few lines, but the unpredictable editor behaviour when scaling to many lines pushed the job back into the error-prone part of the boredom curve. I had to delay my edit and prepare the source files first.

That was a two-stage process. First I fed all the relevant files through bcpp, the classic C++ beautifier. (Another little shell one-liner accomplished that.) Then I went through all the files to neaten them up. The beautifier is good, but it can’t make aesthetic judgements about stuff that should be lined up to improve readability - like this:

buffer << "Name: "    << name
       << " Quest: "  << quest
       << " Colour: " << colour;

Once those problems were out of the way the noise level was lowered, and I was able to crystallize my awareness of another problem that my subconscious had been screeching about without my knowing what it was.

If the Execute() / isResultSet() / FetchNext() sequence threw an SAException, I still had to purge the result set. But doing that involved further calls to isResultSet() and FetchNext(), which could themselves throw SAExceptions - and probably would if any part of the database connection was confused! It was important to wrap the purge within the catch{} in its own try{} block and catch these SAExceptions too, or the uncaught exception could crash the server ungracefully!

This demonstrates the importance of reducing the noise level to the point where we can hear the voice of our own experience - and why Paul Graham says most commercial programmers never reach the state of awareness where they can do good work.

So far the task had taken about 5 working days - and I hadn’t made a single keystroke that actually implemented the edit I needed to perform. But then I did the edit, as an uninterrupted mechanical process which took less than a day - and I got it right.

The Code Review

How do I know I got it right? After all, the problem is usually not so much being correct, as it is being able to prove we are correct. In this case the biggest potential source of error was the manual straightening up process after the automated beautification. There were plenty of opportunities for mis-edits - unintended modifications - in there. Usually the CVS diff command will find the edits, but in this case the beautification made the diff too big to handle.

Fortunately I’d pulled a second copy of the source tree before I started. (CVS diff compares with the version the local copies may have diverged from, but pulling a separate copy of the right version of everything afterwards would be tricky.) The CVS diff could at least give me a list of the files I’d touched, if I used grep to find all the lines that started with the string RCS, and edited them to produce a simple list of files.
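The extraction step can be sketched like this. The sample lines, the /cvsroot/proj repository path and the file names are invented stand-ins for real cvs diff output - the exact prefix to strip depends on the repository layout:

```shell
# Hypothetical sketch: cvs diff prints one header line per changed file,
# of the form "RCS file: /cvsroot/proj/src/Foo.cpp,v". The sample input
# below stands in for real cvs output; all the paths are invented.
printf '%s\n' \
  'Index: src/Foo.cpp' \
  'RCS file: /cvsroot/proj/src/Foo.cpp,v' \
  'retrieving revision 1.4' \
  'RCS file: /cvsroot/proj/src/util/Bar.cpp,v' \
| grep '^RCS' \
| sed -e 's/^RCS file: //' -e 's/,v$//' -e 's!^/cvsroot/proj/src/!!' \
  > CvsDiffFileList.txt
cat CvsDiffFileList.txt
```

Stripping down to paths relative to src/ leaves names that can be fed straight to cp src/"$fname".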

I reckoned it would be much easier if I could consolidate all the files into a single pair of directories - before and after. That would be possible if the file names were all different (ignoring the directory names). I could check that easily by using each line of the file list as an argument to basename and doing a unique sort of the list:

$ cat CvsDiffFileList.txt | while read fname
> do
> basename "$fname"
> done > BaseNames.txt
$ wc -l BaseNames.txt
$ sort -u BaseNames.txt > UniqueNames.txt
$ wc -l UniqueNames.txt

Oh dear. There was a name collision. Let’s find out about it:

$ sort BaseNames.txt | diff - UniqueNames.txt
< main.cpp
$ grep main CvsDiffFileList.txt

OK. The name collisions were just mains for running test stubs. I didn’t care about them, so I deleted them from the file list, and knew I could copy the before and after versions of all the touched files into a pair of directories. So I did that with another little one-liner:

$ mkdir Before
$ mkdir After
$ cat CvsDiffFileList.txt | while read fname
> do
> cp copy/src/"$fname" Before    # the untouched second checkout
> cp src/"$fname" After          # the edited working tree
> done

Then I removed all the tabs and spaces from both sets of copies. That removed any differences caused by changing tabs to spaces, the indentation, or differences in if(x) compared to if ( x ) style, so the remaining differences would be my substantial edits or finger trouble:

$ cd StrippedBefore
$ for fname in *
> do
> tr -d ' \t' < "$fname" > fred
> mv fred "$fname"
> done

The result was 15,000 lines of diff - much better than the 100,000 I started with, and mainly consisting of blank lines and easily recognizable patterns of intentional edits. I then looked through the diff to make sure every difference was plausible, checking for unmatched business logic type lines, before asking two colleagues to do likewise. A much simpler process also enabled us to review the intentional changes by looking for the Executes, like I had done originally.
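The final comparison amounts to a recursive diff of the two stripped trees. Here is a minimal sketch using throwaway directories and invented file contents in place of the real StrippedBefore/StrippedAfter trees:

```shell
# Sketch only: two tiny stand-in trees play the role of the real
# stripped directories; the code lines are invented.
mkdir -p DemoBefore DemoAfter
printf 'if(x){doIt();}\n'         > DemoBefore/a.cpp
printf 'if(x){doIt();purge();}\n' > DemoAfter/a.cpp
diff -r DemoBefore DemoAfter > Residual.diff || true
cat Residual.diff
```

Reviewing the residual diff by eye then only has to account for the intentional edits and blank-line churn.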

Process Degradation

There’s a pattern to what I did, and we’ve all learned about it ad nauseam. It’s W. Edwards Deming’s Plan-Do-Check-Act Cycle. However, there’s a major difference between what I just described and the way “process” is normally represented. The difference is ownership.

I believe in the Deming Cycle. It works. Through reflection we become aware of what we are trying to achieve. We ask how we can do this as reliably and efficiently as possible. We form a plan, and when we do what the plan indicates we ask if it’s working or not, and correct as necessary. We find a way to check our work as an integral part of the cycle (like I pulled that second copy before I started), and when we’ve finished we act on what we’ve learned. In this case I realized the whole process formed a wonderful model of something deeper that I wanted to discuss.

Unfortunately the Deming Cycle is rarely applied as I did in the tale above. There, I was responsible for each step, I corrected my process as necessary, and I even discovered that I had to recurse, performing a sub-process within my outer Do stage.

Imagine how such a process would degrade under the cognitive distortions due to stress, described on the page Implications for Software Engineers. People under stress:

  • Screen out peripheral stimuli.
  • Make decisions based on heuristics.
  • Suffer from performance rigidity or narrow thinking.
  • Lose their ability to analyze complicated situations and manipulate information.

I was analyzing (reasonably) complicated situations, and using my shell like a kind of mental exoskeleton to manipulate information all the time! If I couldn’t do that, how could I know what to do? I’d have to become a robot myself, and execute a program that I’d got from somewhere else. If that’s not making decisions based on heuristics I don’t know what is! Of course, the motions I’d be going through wouldn’t be very effective, but I wouldn’t be able to compare my performance with a gold standard that I might imagine for myself based on the question, “How do I know?”, because of my narrow thinking. I wouldn’t even notice my mistakes, because I’d be screening out peripheral stimuli.

I’d be lost. Instead of being able to rely on my own capacity to operate a Deming Cycle and move forward with confidence, I’d be “piggy in the middle” - responsible for my failures, yet ill-equipped to avoid them. It would all be very stressful. I would obviously be keen to embrace a culture which says that results don’t matter, only compliance. Then when the failure occurred, I could pass the buck. Of course, I would always have to be on the lookout for others trying to pass their bucks to me, and I would have to be clever in my evasiveness and avoidance of responsibility.

The processes I could use would not be ones I cooked up myself, always following the meta-process of the Deming Cycle. Instead they would be behavioural prescriptions collected into shelfware, and micropoliced in a petty and point-missing stressful fashion, as my non-compliances were smelled out. All the stress would lock me into a cognitively reduced state, and if I got hooked on the stress chemistry, the raised dopamine and norepinephrine, the whole business would start to develop a self-evident rightness. If I suffered this decline in the company of others, we’d develop language and a local mythology to celebrate our dysfunctional state!

All this, of course, is exactly what we see as Quality programs degrade into reactive pettiness and individuals lose their problem solving abilities. Deming’s big thing was devolving process improvement to the people nearest to the work, but in the degraded version process is handed down from the organization. We have an entire belief system predicated on the claim that:

The quality of a system is highly influenced by the quality of the process used to acquire, develop, and maintain it.

Rather than the skills, imagination and dedication of the people applying it!

So we must always be very careful of process. Applied with knowledge and understanding, it brings the growth of awareness, efficiency and personal fulfillment that some practitioners of the Deming method report. But with a supervening cultural tendency to stress addiction always in play, it can easily degrade into the exact opposite of what Deming intended: a spiral of decline where individuals become progressively less capable and empowered while the process becomes more externalized and disconnected from the work at hand.

Ontological Drift

After a while, groups suffering from a spiral of psychosocial stress start to behave in very peculiar ways. Being unable to trust their own good senses to even select a useful script, but unable to proceed without one, they resort to grabbing any old script and following it, no matter how silly the result. The deeper they get into trouble, the worse their problems become.

This way of looking at things provides a simple explanation for some very odd things. Consider the hilarious BBC story of the man who was told to remove his shirt at Heathrow because it bore a cartoon image of a Transformer which has a stylized gun for an arm - Gun T-shirt ‘was a security risk’:

A man wearing a T-shirt depicting a cartoon character holding a gun was stopped from boarding a flight by the security at Heathrow’s Terminal 5.

Brad Jayakody, from Bayswater, central London, said he was “stumped” at the objection to his Transformers T-shirt.

Mr Jayakody said he had to change before boarding as security officers objected to the gun, held by the cartoon character.

Airport operator BAA said it was investigating the incident.

Mr Jayakody said the incident happened a few weeks ago, when he was challenged by an official during a pre-flight security check.

“He says, ‘we won’t be able to let you through because your T-shirt has got a gun on it’,” Mr Jayakody said.

“I was like, ‘What are you talking about?’.

“[The official’s] supervisor comes over and goes ’sorry we can’t let you through and you’ve a gun on your T-shirt’,” he said.

Mr Jayakody said he had to strip and change his T-shirt there before he was allowed to board his flight.

“I was just looking for someone with a bit of common sense,” he said.

That’s stress addiction induced ontological drift for you. Of course, it isn’t so funny when whole cultures go that way. “Mr. K, the committee will see you now…”

Neuroscience, All Postings, Stress Addiction, Programming

Insight, Composition and Patterns

Another very interesting paper on the neuroscience of insight has appeared. In Deconstructing Insight: EEG Correlates of Insightful Problem Solving, Simone Sandkühler and Joydeep Bhattacharya monitor various brain areas with an EEG while giving subjects the usual kind of cognitive flexibility tests. They amplify the number of insight events by giving the subjects time limits and hints, and look at the EEG outputs when the subjects report insights and correct solutions.

They see the same two modes - relaxed and focussed - with insights occurring when the EEG shows states associated with relaxation:

Interestingly, we found for timeout trials that would lead to a correct solution after hint presentation, a strong alpha ERS. Increase in alpha activity is usually associated with a relaxed, less active brain, because alpha power was largest in states with eyes closed, i.e. states without focused attention.

There are two ways that this paper is an interesting addition to the work I quote on the Neuroscience page. Firstly, this paper talks about “insight”, while the others refer to “cognitive flexibility”. But the tests are the same, the faculties measured are the same. It’s a bit like speaking of breakpoints at different software layers. The call to draw a widget uses calls to draw primitives like lines and rectangles. So by citing this paper we can get to the neural correlates of insight - a term that I’m more comfortable with than cognitive flexibility, because to get good at this stuff we have to see the causes and effects right through the stack from neurotransmitters to effects on social customs.

The second thing is that it discusses various brain areas other than the prefrontal cortex. Clearly the mode shifting between focussed and juxtapositional involves the co-ordinated transition of multiple subsystems. Here’s a lovely example of such a thing - the Gibbs Aquada:

How cool is that?

Now you might say, “This is all very interesting. If I ever need a fundamental conceptual advance to maintain the backoffice system I am cursed with I’ll remember it!”

But in fact insight is not just in play in famous examples like Einstein imagining himself riding on a lightbeam, Newton’s fruit-perturbed brain cells or Archimedes’ famous streak. To demonstrate this, consider a screenful of code. What does it do? It’s a crazy question, because a screenful of code could do many, many things. Well, unless it’s Java or calling Win32. The fanout of possible behaviours for a program as we add syntactically correct lines is much fatter than the fanout of possible chess game states as we add moves. (It doesn’t matter that most of those behaviours are not interesting - they still fatten the state space.)

Writing a program is in essence a synthetic act, where insight suggests that a certain class or data structure can represent the problem domain and make the desired operations possible. This cannot be done by random guessing or exhaustive search. The upper bound on the effectiveness of our programming activities is how readily we can be insightful. The lower bound is fixed by how well we do all the other things that distinguish grown up professionals from script kiddies, but those other things can easily provide busy work to allow us to forget when we lose sight of how to address the upper bound.

This idea of a search space which becomes unmanageable when insight is lost can explain a persistent and specific abuse of design patterns. I once had the interesting experience of being a lab rat for a disciple of Christopher Alexander, the architect who invented the idea. I was renting an old house that had recently been refurbished - in places rebuilt. It was minimal, even more so than is customary for traditional Spanish fincas. Yet it was a warm minimalism, with niches for lights that naturally threw diffuse pools where one wanted them. I kept getting flashes of the rich vegetation on the shady side of the building, along sight lines that extended all the way across the space. It had the quality without a name, and when I eventually met the landlord who had planned the rebuild, I rather diffidently mentioned software engineering and the connection with Alexander.

He roared with laughter and said, “Christopher Alexander is the reason why I do not have a large architectural practice!” He went on to explain that when he’d first moved to the area he’d built a hut and lived on the beach, as recommended by Alexander. For three years he rebuilt his hut, until he understood why the features of the traditional architecture were as they were. “Then I was ready to build in this place!”.

He’d done very well for himself, having built the houses for all the local dignitaries including the serving Spanish Foreign Minister, but his approach took the investment of his high quality time. He didn’t have anything for architectural grunt workers to do, although he ended up employing his own team of builders, because he couldn’t communicate the importance of quality in the small to contractors.

When I mentioned Alexander the landlord was happy, because he’d felt embarrassed about spying on me in order to see how I used the space. He said that he’d already noticed I’d stretched a washing line between two trees, and wondered why I’d put it there. I explained that I’d dumped the wet washing inside the kitchen door on a clean surface and then hung it in the sunshine, and the following day the construction crew turned up to install a pair of solid mountings for the line! This went on for several months. Fortunately I had read Report On Probability A at a suitably impressionable age. While the architect was watching me, I was watching the architect…

What your Shao Lin Ninja Alexandrian actually does, practicing in the original field of application, illustrates the approach that is necessary for success. Yes, there are design patterns. They are “out there”, waiting to be discovered, and some of them recur frequently. But the discovery takes years of work, finding the vocabulary of the most common patterns latent in spaces, and even then and even for an expert, further exploration and iteration is needed in each case. The conscious involvement of a devotee is necessary. This doesn’t necessarily invalidate Alexander’s early ambition of democratizing architecture (remember 4GLs?), but it does mean that the worker needs familiarity with the idea of patterns, and the cognitive state necessary for spontaneously recognizing the patterns that best simplify, chunk, make a theory of, or compress the problem domain. However egalitarian the toolkit, there is an essential act of becoming explicitly conscious of what is needed, which must be performed somewhere, in some terms, by someone.

If that cognitive state is lost, the toolkit or pattern language begins to perform a different job. Instead of a bunch of sensitivity building suggestions in a sea of subtlety, they become a massively reduced search space. The Gang of Four become a firm of stone engravers, and we get a kind of system integration:

I believe that both paths are entangled in Alexander’s 1996 OOPSLA talk, The Origins of Pattern Theory, the Future of the Theory, And The Generation of a Living World. For example, at one point he says:

So there began developing, in my mind, a view of structure which, at the same time that it is objective and is about the behavior of material systems in the world, is somehow at the same time coming home more and more and more, all the time, into the person. The life that is actually in the thing is correlated in some peculiar fashion with the condition of wholeness in ourselves when we are in the presence of that thing. The comparable view, in software design, would tell you that a program which is objectively profound (elegant, efficient, effective and good as a program) would be the one which generates the most profound feeling of wholeness in an observer who looks at the code.

I am without reservation quite happy with that. It remains true even if I extend the context of the statement to include every practical observation or speculation I’ve encountered as I’ve asked why there is structure to be perceived, how we perceive it, why we sometimes can’t perceive it - and even what the ancients (who spent much less of their time in focussed attention) had to say about these things.

Yet later in the same talk Alexander says:

Now we come to the crunch. Once we have the view of wholeness and centers, linked by the fifteen deep properties, we have a general view of the type of whole which must occur as the end product of any successful design process. And because we have a view of it as a whole, we are now able to understand what kinds of overall process can generate good structure, and which cannot. This is the most significant aspect of The Nature Of Order, and of the new results I am presenting to you in this Part B.

It means that we can characterize not merely the structure of things which are well-designed, but we can characterize the path that is capable of leading to a good structure. In effect, we can specify the difference between a good path and a bad path, or between a good process and a bad process.

In terms of software, what this means is that it is possible, in principle, to say what kind of step-by-step process can produce good code, and which ones cannot. Or, more dramatically stated, we can, in principle, specify a type of process which will always generate good code.

This sounds to me like a Software Factory. An organization of what the Consciousness Studies people call zombies - “a hypothetical being that is indistinguishable from a normal human being except that it lacks conscious experience, qualia, sentience, or sapience”. The zombies provide hardware to run an Artificial Intelligence, which is stored in at most a few MBs of paper manuals. The AI is a remarkable achievement, since it requires seconds to hours to perform a single fetch or store operation using its shuffling hardware, yet it can miraculously transform ambiguous requirements documents into robust systems elegantly and without error. All hail the process!

I believe the Software Factory is a revealing nonsense. The comparison with the AIs we’ve not yet built using GHz processors and petabytes of store exposes the truth. The only reason anyone ever spent all those work decades on it is that sitting in the meeting rooms, listening to the endless drone of corporate doublespeak provides an excellent source of low level background stress. To get a sense of it, glaze your eyes with Obtaining the Benefits of Predictable Assembly from Certifiable Components (PACC), an offering from the Software Engineering Institute. That’s alright then - problems solved, who’s for a beer! And when they wake up a bit for the conclusion, the celebration of process fits with the stress addicted tendency to “default to heuristics”. We’re all “a tiny, tiny piece of it all”, so no-one needs to exercise the personal awareness that Alexander starts by recognizing as essential.

The idea that our thinking about pattern languages has been trying to pull in two opposing directions, trying to be useful both to people with and without access to their juxtapositional faculty at the time, leads to the question of what would happen if we tried to do one job well. Those without juxtapositional awareness at the time aren’t going to be able to make the inductive jump to insightful solutions however they are tooled up. So what if we abandon that objective and ask how we might make pattern languages more useful as cognitive aids for those who are juxtapositionally aware? Such developments would quickly seem pointless or incomprehensible to those without juxtapositional awareness.

Could that be something to do with the growing interest in functional programming? All of the form with none of the mechanizable boilerplate, and disconnected from the busy-work that addresses the usual problems caused by the boilerplate? Perhaps the net has enabled a critical mass of people interested in having such a conversation to connect, as it has already enabled groups to leverage shared, rich models instead of managerial and process overhead in open source development. If we remove the objective which contains an internal contradiction, might we find that Scheme has been our pattern language all along?

Neuroscience, All Postings, Programming

This Is Your EFT On Drugs

One more graph from the EFT results to date. There are 8 prescription and non-prescription drugs explicitly mentioned in the questions, and 8 people wrote in caffeine in the Other Drugs box. The counts look like this:

And the frequency distribution of people who use each drug looks like this:

It looks to me that most drug-using groups follow the general distribution. Unlike external stressors, which do seem to affect EFT scores, drugs don’t have a noticeable effect - at least not enough to be obvious with low numbers of respondents in this noisy sample.

The only anomaly is the dark blue line showing SSRIs, which is flatter than the others. Only 11 respondents reported SSRI use so this might not mean much at all.

It would be really good to get plenty of non-geek respondents to balance up the sample. Friends, relatives, postal workers, doorstep evangelists, grab them all and make them do the Embedded Figures Test :-)

The updated analysis program EftAnalyzer.cpp is available.

Neuroscience, All Postings

More EFT Results

The Embedded Figures Test page provides a test which I hope will be a good predictor of other, less repeatable or automatable tests of cognitive flexibility. It invites readers to take the test, and fill in a short questionnaire. There are now 226 valid entries in the database, 26 of which are from readers who have done the test twice, with some stress-reducing exercises in between. This post examines the results to date. You can download the raw data and analysis program at the end of the post.

There are 226 valid entries out of 252. Valid entries have a Before score > 100ms. 26 of them provide Before and After values. At least 152 are geeks, as detected by looking for some geekly substrings in the Occupation field. There are quite a few blank Occupations, so the actual number of geeks could be even bigger. It’s nearly all male. So this is still a sample biased towards male geeks.
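The validity screen amounts to another little one-liner. The file layout here is a guess - column 2 as the Before score in milliseconds, column 3 as the Occupation - and is not the actual format of the published data file:

```shell
# Hypothetical sketch: keep entries whose Before score exceeds 100ms,
# then count the geekly occupations. Column positions and the substring
# list are assumptions; the three sample rows are invented.
printf '1\t2500\tprogrammer\n2\t50\tteacher\n3\t3100\tsysadmin\n' > entries.txt
awk -F'\t' '$2 > 100' entries.txt > valid.txt
grep -cE 'programmer|developer|engineer|sysadmin' valid.txt
```

With the invented rows above, the 50ms entry is rejected and two geeks are counted among the valid entries.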

First a couple of things that haven’t worked out yet, then the more interesting stuff. Here’s the distribution of respondents across 1000ms bands:

It’s a (pointy) Poisson distribution. Maybe a stingray :-) There is no evidence of the “double hump” I’d hoped to see, which might directly reveal the presence of two distinct strategies. However, the geekly bias of the sample might mean that it doesn’t contain the second hump yet.

Next, feeling nauseous when bored. Talking to many programmers over the years, I’ve got the impression that there’s a link between naturally gifted programmers and feeling nauseous when bored, such as in long, tedious meetings. So I added a question to distinguish the people who do feel nauseous (Nauseators) from those who don’t (Non-Nauseators).

Here are the two separate frequency distributions, to the same scale. There is no evidence of any correlation between feeling nauseous when bored and EFT score. If anything, it’s the non-nauseators who are more clustered into the 2 and 3 second band:

Next, age. Advancing age has a clear influence on score. This graph plots the average score and the standard deviation in each decade-long age band:

I think this matches what we know about cognitive tests in general. If it is going to be a useful workplace tool though, for example for qualifying Path. Lab. technicians before they perform tasks requiring good pattern recognition, we need to know if the evaluation should be age adjusted - or if the deteriorating scores do indicate an absolute reduction in pattern recognition ability. Perhaps there are some jobs best done by young eyes!

Now for the relationship between external stressors and score. I took each question, and scored it -2 for “Strongly Disagree”, -1 for “Disagree”, 1 for “Agree” and 2 for “Strongly Agree”. Then I compensated for negatively worded questions by multiplying their scores by -1. There are only 7 questions in all, so each respondent can score between -14 and 14 of these normalized “Chill Points”. The graph shows the mean and standard deviation for each chill band:

It does look like (if anything) there’s a tilt, top left to bottom right. It certainly isn’t tilted the other way. And as the respondents become chiller, the spread tends to narrow. Also interesting is that the effects of stressors are known to be greater when the subjects feel that they are not in control. The spread seems to widen quite suddenly as soon as the respondents perceive themselves as having net negative chill. (I worded some of the questions negatively to stop people automatically clicking a happy or sad column, so this pattern is, I think, authentically emergent.)

Of course, we’re talking about gross external stressors here, rather than the fine grain establishment of positive self-confidence that I argue makes a big difference. But this graph is certainly enough to keep me interested in the EFT, particularly since the questions are a very crude probe of personal stress levels, and the conditions for the test are quite uncontrolled. There is no standardization of display size or cleanliness, mouse type, use of a mouse mat, lighting conditions, time of day, practice runs and so on. Better control of such factors might sharpen this pattern.
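The Chill Point normalization itself can be sketched with awk. The seven answers below and the choice of which questions are negatively worded (marked N) are invented for illustration:

```shell
# Sketch only: each answer is coded -2..2; rows marked N are negatively
# worded questions, so their sign is flipped before summing.
chill=$(printf '%s\n' \
  'P 2' 'N -1' 'P 1' 'N 2' 'P -2' 'N 1' 'P 2' \
| awk '{ total += (($1 == "N") ? -$2 : $2) } END { print total }')
echo "$chill"
```

A respondent whose flipped scores sum below zero has net negative chill in the sense used above.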

Also interesting, here’s a similar graph that only shows the entries with Before and After scores, organized by the number of destressing exercises the person did between the tests. The mean and standard deviation of the Before scores is shown in red, with the related After values next to it:

Curiously, even doing zero exercises improves the respondents’ scores, while doing 3 or more even benefits the Before scores :-) I suspect that what we are looking at here is an effect of time spent pondering the reasoning in the Introduction, or it might just be that some people misunderstood the test instructions. If they did the test twice, one straight after the other, we’d expect them to always do better the second time. If they then filled in the activities they normally engage in, the reduction of red and green scores with more stress reducers in play is interesting. Stressors make scores bigger, destressors make them smaller.

You can download the raw data in file eft27jan2008.txt. The source code for the graph drawing program is EftAnalyzer.cpp. The program uses the wxWidgets graphical toolkit, which you should be able to download and build out of the box using any common OS and C++ compiler. The easiest way to get the analysis program building is to build the minimal sample, then cut and paste the source into minimal.cpp. It’s all in one file to facilitate this, and the existing minimal sample project files for the various compilers and IDEs will work just fine.

Neuroscience, All Postings

Obstructing and Assisting Effective Thought

This week I saw a quote from Marvin Minsky, said to be from his book The Emotion Machine (which I haven’t read yet):

… the cascades that we call Suffering must have evolved from earlier schemes that helped us to limit our injuries — by providing the goal of escaping from pain. Evolution never had any sense of how a species might evolve next — so it did not anticipate how pain might disrupt our future high-level abilities. We came to evolve a design that protects our bodies but ruins our minds.

I know exactly where he’s coming from. Last week a bit of a dental filling chipped off, resulting in more pain than was reasonable for such a small thing. Something was clearly Very Wrong, so it was straight round to the tandarts (Flemish dentist) for me. Three root canals later I was nursing a wicked post-surgical insult to my jawbone, and feeling very glad that I avoid taking Paracetamol unless I really, really need it. All told I spent a week quite unfit for anything involving a brain.

This was a timely reminder that although the issue of background stress making the prefrontal cortex unavailable for programming work is central to this blog, there are other things - such as pain - that can also obstruct or assist intellectual function.

The Boston Globe recently featured an article, Don’t just stand there, think, describing studies that show clear improvements in performing a variety of intellectual tasks when the subjects were encouraged to move while performing them:

The brain is often envisioned as something like a computer, and the body as its all-purpose tool. But a growing body of new research suggests that something more collaborative is going on - that we think not just with our brains, but with our bodies. A series of studies, the latest published in November, has shown that children can solve math problems better if they are told to use their hands while thinking. Another recent study suggested that stage actors remember their lines better when they are moving. And in one study published last year, subjects asked to move their eyes in a specific pattern while puzzling through a brainteaser were twice as likely to solve it.

Of course, we programmers already know about this. Here’s the excellent Mr. Randall Munroe referencing it:

It’s even got a name, and a Wikipedia entry - stimming. My favourite stim involves glancing pats to the back of my head - a motion not far removed from shining a shoe. It’s remarkable that I’ve still got plenty of hair on the back of my head, really. So it’s nice to see that there is now quantitative confirmation that stimming helps thought, because it’s one of those things that reveal the strange disconnect running through a society where most of the people aren’t in a position to access their full faculties, most of the time. For example, here’s Dr. Russell Barkley, an ADHD expert who doesn’t seem to recognize anything but focussed attention, getting very worried about stimming on his ADHD Fact Sheet:

2. Excessive task-irrelevant activity or activity that is poorly regulated to the demands of a situation. Individuals with ADHD in many cases are noted to be excessively fidgety, restless, and “on the go.” They display excessive movement not required to complete a task, such as wriggling their feet and legs, tapping things, rocking while seated, or shifting their posture or position while performing relatively boring tasks. Younger children with the disorder may show excessive running, climbing, and other gross motor activity. While this tends to decline with age, even teenagers with ADHD are more restless and fidgety than their peers.

Crikey! We can’t have people tapping things or shifting their posture, can we? Where might it end? This kind of thing seems so silly that the so-called expert’s ignorance looks downright willful. It’s the kind of thing which makes me think a lot of the ADHD debate is actually rooted in neurotic responses to healthy children on the part of adults trapped in focussed attention, demonizing those with mismatching social stress levels as I describe on the pages The Dreaded Jungian Backlash and Other Applications. To add weight to this, it’s worth realizing that many peoples whom technological Westerners regard as primitive encourage children to rock as an aid to study, rather than calling it a “symptom” and reaching for the Ritalin regardless of the effect. Check out this sobering bit of video which lasts for less than a minute, half of which is caption:

The really interesting thing is that once we know about this disconnect, it can be found in other places, making it possible to understand things which are unintelligible without allowing for it. For example, although Dr. Barkley recognizes focussed attention, he seems unaware of the existence of juxtapositional awareness as leveraged by effective programmers and artists, or of the motile component of effective thought. Yet there are traditional schools of psychology which do describe these things. The 20th century esoteric teachers George Gurdjieff and Rudolf Steiner both continue to attract interest from many hackers I’ve interviewed over the years. They both claimed that human consciousness is the result of the interaction of three distinct subsystems, and encouraged students to distinguish the operation of each in order to improve their awareness. Gurdjieff describes three main centres:

  • Intellectual or thinking center. This center is the faculty which makes a being capable of logic and reasoning. It is located in the head.
  • Moving or physical center. This brain is located in the spinal column.
  • Emotional or feeling center. This faculty makes beings capable of feeling emotions. This brain is dispersed throughout the human body as the nerves which have been labeled the “nerve nodes”. The biggest concentration of these nerves is in the solar plexus.

Note that the “emotions” of the feeling centre are not the base, reactive emotions such as hunger, fear, lust and so on. Those are handled by the moving centre. Instead the feeling centre is about impressions - stuff that we get in an “all or nothing, insight kind of way”. It seems to map very well to what I’ve called the juxtapositional faculty. Does the prefrontal cortex integrate processing from the solar plexus? Recently we’ve realized that a lot of processing does go on in the enteric nervous system - nerves surrounding the gut. As the Wikipedia article says:

There are several reasons why the enteric nervous system may be regarded as a second brain. The enteric nervous system can operate autonomously. It normally communicates with the CNS through the parasympathetic (eg, via the vagus nerve) and sympathetic (eg, via the prevertebral ganglia) nervous systems. However, vertebrate studies show that when the vagus nerve is severed, the enteric nervous system continues to function.

The complexity of the enteric nervous system is another reason for its status as a second brain. In vertebrates the enteric nervous system includes efferent neurons, afferent neurons, and interneurons, all of which make the enteric nervous system capable of carrying reflexes in the absence of CNS input. The sensory neurons report on mechanical and chemical conditions. Through intestinal muscles, the motor neurons control peristalsis and churning of intestinal contents. Other neurons control the secretion of enzymes. The enteric nervous system also makes use of the same neurotransmitters as the CNS, such as acetylcholine, dopamine, and serotonin. The enteric nervous system has the capacity to alter its response depending on such factors as bulk and nutrient composition.

It seems to me that the empirical knowledge of this stuff which Gurdjieff (apparently) gained from traditional Orthodox Christian, Buddhist and Dervish sources describes what we experience (unless we are trapped in focussed attention) and is now at least partially confirmed by modern science. Perhaps it isn’t so surprising that thousands of years of study actually produced something useful!

In the Google book Knowledge of the Higher Worlds: How Is It Achieved? Steiner describes:

… the thinking-brain, the feeling-brain and the willing-brain.

The bizarre thing about Steiner is that he discusses the three brains in the context of clairvoyant perception! Are we talking… like… spooks or something here? I strongly suspect that the answer is no, and here we have an opportunity to resolve one of the great mysteries of the ages. Remember Barkley, recognizing only focussed attention. To him, the fruits of juxtapositional awareness which we can all enjoy if we are sufficiently destressed, and which arrive in an “all or nothing, insight kind of way” without any stepwise “working out”, are mere hallucinations. The human-normal sensibilities of people who are in a position to do juxtapositional awareness are just “procrastination”. Creative people “make things up” - probably as a result of a “failure of inhibition” (Mozart’s reams of note-perfect new music were a kind of complicated epileptic fit). This kind of model is actually quite common - is the way that ISO 9001 is usually applied anything other than an infinite regress of people standing behind other people and telling them exactly what to do?

To such a person, an experienced professional who can look at a spec and know that there is something really difficult in there without yet being able to say exactly what it is, might as well be “clairvoyant”. But there is nothing spooky going on - just the amazing effectiveness of the neural net between our ears when it has a chance to work correctly. This also explains why some people are so keen to advance theories of how creative people “make things up”, when every creative person denies making things up, instead insisting that they just see and attempt to capture what is there. From the limited point of view of focussed attention such statements make no sense, and can be dismissed out of hand!

With these ideas in mind, it’s actually possible to make sense of some of Steiner’s writing in informational and structural terms. I’m not going to offer an example here - or even encourage you to find one - because of the other big problem with Steiner. He was writing in High German, to a very strait-laced, turn-of-the-century, upper-middle-class audience that makes Barkley look positively funky in comparison - and the writing gains nothing in translation. Turgid, pompous, long-winded and obfuscated are just some words we might apply to it here and now. But my central proposal here, that Steiner is describing improved cognition of this reality, rather than focussed cognition of a different reality, might be of interest to some people. Of course the audience, knowing only focussed attention, took the description of improved cognition and “interpreted” it as a description of an alternate, spook reality. The relevant graph is not this one from Randall Munroe, correct though it is:

But instead this one from Dehnadi and Bornat’s fascinating paper The Camel Has Two Humps, where they show the two distinct clusters of excellent and average programmers:

Human psychology seems to be a better fit with this bit of pop culture from The Shamen featuring Jhelisa Anderson than the theories of the ADHD experts - so stim away and if Dr. Barkley doesn’t like it… it’s another reason to get rid of open plan!

Neuroscience, All Postings, ADHD, Stress Addiction

Culture Influences Brain Function?

There’s a very interesting abstract, MIT: Culture influences brain function, which says that newly arrived East Asians are better at visual tasks requiring context sensitivity than Americans, but the Americans are better at tasks requiring absolute judgement.

The authors relate this to American culture emphasizing the individual, East Asian culture emphasizing the collective. I suppose it’s possible, but there’s an obvious alternative - sadly cruder and less poetic.

The Wikipedia entry for East Asia says that culturally, East Asia consists of societies displaying heavy historical influence from:

  • the Classical Chinese language (including the traditional Chinese script)
  • Confucianism and Neo-Confucianism
  • Mahayana Buddhism/Zen-Chan Buddhism
  • Taoism (Daoism)

Politically it consists of:

  • People’s Republic of China (including the Special Administrative Regions of Hong Kong and Macau)
  • Republic of China (Taiwan)
  • Japan
  • North Korea
  • South Korea

Now the Japanese gave Karoshi to the world, so we mustn’t over-generalize, but there are a great many agrarian and early industrial peoples in there, as well as Buddhist, Zen and Taoist influences. It’s reasonable to assume that many East Asians have lower habituated background stress than most information age Americans (we’re back to Whybrow’s American Mania), and the cognitive differences reported would fit right in.

Neuroscience, All Postings, Stress Addiction

Oh My - It Really Is A Padawan’s Hat!

So I read about a paper co-authored by Natalie Portman (who played Padme in Star Wars and the girl in V for Vendetta) under her True Name of Natalie Hershlag.

I figured that had to be worth a click, and found Frontal Lobe Activation during Object Permanence: Data from Near-Infrared Spectroscopy. They used strong near infrared light as a non-invasive way to detect blood flow in the prefrontal cortex, and showed that where infants are able to track things that they can’t see any more, the PFC is developed and active. Quite on topic! Then I reached this image:

I mean… come on:

Then it occurred to me that we can explain a lot of tales of mysterious, spooky intuitive consciousness with the idea that the full range of PFC functionality is not available to most of the people most of the time. A Jedi Master would probably be interested in monitoring a padawan’s PFC function. So Ms. Hershlag would be quite entitled to claim her apparatus really is a primitive padawan’s hat, thank you very much!

I tried to program by not thinking once. You know - like Tommy and Luke Skywalker. I was really tired and there was just one thing left to do: convert an integer to an appendix designation. It’s horrible and not like proper counting. The first appendix is “A”, then “B”, and so on through “Z”. Then it goes “AA” through “AZ”, “BA” through “BZ” and so on. The carries are all wrong. (And it might have actually mattered in service. Someone could reasonably collate a large amount of evidential material as lots of appendices.)

I didn’t have the mental stamina left to compose by iterating juxtapositional and focussed attention as I normally would, so I brought to mind the recursive printf(3) definition and went for it:

string PcmProposalImp::MakeAppendixDesignation(int Order)
{
   return(((Order / 26) > 0 ? MakeAppendixDesignation((Order / 26) - 1)
                            : string("")) +
                              char('A' + (Order % 26)));
}

It worked. How spooky is that? I don’t advocate programming with eyes closed in general, but it might just be that my PFC in juxtapositional mode is actually better at all the Obi Wan gotchas in there than my PFC in focussed mode is. I usually get that stuff exactly wrong, even when I remember I always get it exactly wrong.

Phew. Reached the end without mentioning Attack of the Clones.


Neuroscience, All Postings, Programming