Embedded Figures Test - Tracking Confirmed

I’ve received a very encouraging email from an academic psychologist who has kindly had patience with my unusual thread of research. It confirms that the distinct signature which has emerged in the embedded figures test results is also found in other tests of cognitive flexibility, which fulfills the strong requirement described in my previous posting, Embedded Figures Test - Distinct Signature Emerges. Following the etiquette of personal correspondence I’ll just quote the meat of it:

That’s really interesting! We are actually resubmitting a paper finding the exact same best performers/worst performers dichotomy with stress in a set of college students. Our studies have focused on young volunteers, generally. Also, I like the spatial aspect of your task. Whenever we tried spatial tasks, I suspected our lack of findings are due to power issues. Your evidence supports this. The one obvious weakness is, as you point out, the lack of control of testing conditions, but so long as these do not differ systematically in some way, then you can overcome it with statistical power.

That’s very encouraging! I don’t suppose there’s a preprint on arXiv or similar is there?…

Not yet - but remind me and I’ll be glad to send one once I know I have a final version!

I shall certainly be looking out for the preprint, and will post details as soon as possible!

So not only does the EFT track other tests of cognitive flexibility as modulated by stress - so long as it is applied to big enough groups, it also reproduces the different responses seen at different levels of performance. It also predicts a similar signature in existing tests, in which the subjects’ age modulates the score amongst the worst performing third only, while stress affects the best performing third. It looks like I’m actually ahead of the game on this one!

A useful exercise now will be to go and do some stats on the raw data and try to make a statement about how big is big enough. How many people need to be in a work group before we can apply the EFT, take the best performing third, and predict that they are under occupational stress if their mean score is > 2.5s?
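As a first step, the question can be framed computationally. The sketch below is purely illustrative (the function names and the synthetic data are my own, not part of the analysis program): it applies the "best third mean > 2.5s" rule to a group, then uses resampling to ask how often a work group of a given size would reach the same verdict as a larger population.

```python
import random

STRESS_THRESHOLD_MS = 2500  # the "> 2.5s" criterion from the post

def best_third_mean(scores_ms):
    """Mean score of the best (fastest) performing third of a group."""
    ranked = sorted(scores_ms)
    third = max(1, len(ranked) // 3)
    best = ranked[:third]
    return sum(best) / len(best)

def predict_stressed(scores_ms):
    """Apply the predictive rule to one work group's scores."""
    return best_third_mean(scores_ms) > STRESS_THRESHOLD_MS

def stable_fraction(population_ms, group_size, trials=2000, seed=1):
    """Bootstrap estimate: how often does a random group of the given
    size reach the same verdict as the whole population?"""
    rng = random.Random(seed)
    overall = predict_stressed(population_ms)
    agree = sum(
        predict_stressed(rng.sample(population_ms, group_size)) == overall
        for _ in range(trials)
    )
    return agree / trials
```

Scanning group sizes for the smallest one where, say, stable_fraction exceeds 0.95 would give one defensible answer to "how big is big enough".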

At this point it would be worth saying something about why I selected the EFT as a potential test of cognitive flexibility that would be repeatable per individual and automatable. This will put some of the important ideas I’m keen to convey in this blog into a concrete form.

The test is normally thought of as measuring a quality called field (in)dependence. The idea is that some people tend to see the parts and so can easily spot the one part they are looking for, while others see the whole and so find it hard to spot a specific figure. This idea then extends by some sort of loose analogy to poor scorers being “sociable” because they are somehow more conscious of the group of humans around them, better scorers being “misfits” - and furthermore to better scorers being more able at natural sciences because they are natural reductionists. In the semi-scientific world of recruitment psychological testing, better scores are supposed to identify misfits who paradoxically have good leadership potential.

I first encountered the test in the recruitment context, and it seemed to me that when I did it, I looked at the whole and allowed the part I wanted to pop out at me. It was a test of my visual/spatial parallel processing capacity, which is strongly related to the capacity for juxtaposition of complex elements which I emphasise as important in complex design work, and hence to cognitive flexibility. As to all the literature discussing field (in)dependence, I didn’t believe a word of it, and the proof is now available. If the quality of field (in)dependence were a personal trait as believed, then scores would be constant per individual, and not modulated by occupational stress as I have shown they are.

So why are poorer scorers thought of (and indeed observed to be) more social, and better scorers observed to be better at natural sciences? I believe the answer lies in the background psychosocial stress maintained in most cultures, which has a neurochemically addictive component. Many people are (without realizing it) hooked on performing low level stressful social exchanges, monitoring each other’s compliance to arbitrary social mores, worrying about it, and so maintaining a “stress economy”. People who participate in this are seen to be more social because non-participants are unconsciously resented for not exchanging “stress hits” with others. Unfortunately participation in the stress economy renders them less able to do complex work, and in extreme cases can lead to perverse group behaviour.

Groups suffering from this effect are not aware of it, because the changes to brain chemistry are similar to those found in (for example) cocaine addiction - which is known to produce unawareness of problems and unjustified feelings of complacency. Perhaps the poorest performing third who are not made any worse by occupational stress are already weakened in terms of cognitive flexibility because the stress they don’t get at work, they find somewhere else!

When it comes to natural sciences, the belief that reductionism is all is mainly found amongst people who don’t do them! In reality nature is doing everything at once, and the skill which psychologist Simon Baron-Cohen calls systematizing involves seeing the key relationships in multiple phenomena occurring in juxtaposition.

The important idea then, to take away from all of this, is that being good at complex work is neither reductionistic nor a property fixed at birth. It’s something anyone can get better at, by learning how to reduce their psychosocial stress to the point where they can use the full faculties they were born with. When it comes to groups, the amount of stress (bad) and the quality of debate (good) within a group are amplified by re-reflection within the group. Initiatives to reduce background psychosocial stress such as I describe in the Introduction can bring runaway benefits to a group even as they bring only incremental benefits to each member of the group. Beware though of The Dreaded Jungian Backlash - the buildup of unconscious resentment directed against whole groups who drop out of the stress economy. The best counter to this effect is to make everyone involved - particularly the senior management of work organizations - explicitly conscious of the effect. The Backlash only presents a danger when people do not understand the unconscious origins of such resentment directed against teams enjoying improved performance, and which (as I have described) can involve clerical and janitorial workers who are in no way competing with the improving team.

Neuroscience, All Postings, Stress Addiction

Embedded Figures Test - Distinct Signature Emerges

This is an update to a previous post reporting some very interesting results from the Embedded Figures Test, now that nearly 500 results are in. A specific signature has emerged, which (if it remains under more controlled conditions) would enable a clear comparison with existing tests of cognitive flexibility.

To recap: The Neuroscience page describes lab tests which show how cognitive flexibility is adversely affected by even slight stress, and the Implications for Software Engineers page discusses the need for exactly this kind of flexibility if we are to juxtapose multiple considerations in our minds and be good at programming - or any other kind of creative work.

The problems with most tests of cognitive flexibility are that they are costly to administer and they are not repeatable per subject. Once a subject has seen a given test it won’t be a puzzle again, so we can’t apply it once, alter the conditions and try it again. To apply this stuff in industrial contexts, some way to objectively identify high value initiatives quickly and efficiently would be really good.

The EFT resembles some known tests of cognitive flexibility, is repeatable per individual, and can be automated. So if it does vary with stress, it could be very useful. Of course, the experiment on this blog is only indicative - because the self-assessment of stress is quite simplistic and the test conditions are not closely controlled, it can’t produce definitive answers. For one thing, if the mice and screens that people are using are very different, that could easily distort the results. But if it looks promising, it might give professional psychologists an incentive to look at it under more controlled conditions. Hopefully the large number of results now collected will average out things like the responsiveness of mice.

Within these limits, the results so far look quite promising - and a specific signature has emerged. As before, you can download the raw data eft11oct2009.txt and the analysis program: EftAnalyzer.py. On my Mac at least, I simply downloaded and installed wxPython, and the program ran from the command line. Then use File/Open… to select the data file. The graphs here are screenshots from Mac OS X. You can also grab the data, program, and copies of all 24 graphs I’m currently drawing in one download: eft11oct2009-all.tar.gz.

There are 499 contributions in the database. 438 of those pass validation as sensible. Comparing with the numbers in the previous post, it’s striking how the larger sample has scaled in proportion. It’s still a sample strongly biased towards male geeks - 391 males and 47 females, with 270 in geekly occupations. While there’s no evidence that people who feel nauseous when bored have EFT results which are different to those who don’t, I’m amazed that 56% of respondents experience the nausea effect! This may be something worth looking into further. (I asked about it because nausea is a side effect reported by people taking dopamine raising drugs for Parkinson’s disease, dopamine is raised by stress, and I suspect that groups become habituated to raising dopamine by subjecting themselves to miserable, boring meetings.) Here are the numbers:

Most respondents scored between 2000 and 3999 milliseconds per figure, but there is a “long tail”:

There’s also a good spread of self-assessed occupational stress, slightly biased towards the unstressed side of the survey questions. (The positive or negative wording of the questions is mixed to discourage people from just clicking down a vertical line. A strong unstressed response gets 2 “chill points”, and a strong stressed response gets -2 “chill points”. Weaker responses get 1 or -1 “chill points”.):
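For concreteness, the scoring rule can be sketched in a few lines of Python. The answer labels here are hypothetical stand-ins for the survey's actual wording; only the -2/-1/+1/+2 weighting comes from the description above.

```python
# Hypothetical answer labels; only the weighting scheme is from the post.
SCORES = {
    "strongly unstressed": 2,
    "mildly unstressed": 1,
    "mildly stressed": -1,
    "strongly stressed": -2,
}

def chill_points(answers):
    """Total chill points for one respondent: positive totals indicate a
    self-reported unstressed environment, negative totals a stressed one."""
    return sum(SCORES[a] for a in answers)
```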

It would be nice to have a wider spread of respondent age, but there are many in their 20s, 30s and 40s, over 20 in their 50s and a few in their 60s, so the data does represent a good spread:

An important graph in this analysis plots the average age of people in each band of self-reported stress. There’s no correlation at all, which is important because later we’ll see correlations between age and results, and stress and results. If (say) older people were reporting more stress, then we couldn’t talk about both effects - either age or stress might be the important factor:
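The check amounts to confirming a near-zero correlation between the age and chill-point columns. A minimal sketch of the computation (Pearson's r from first principles, so it can be run directly over the raw data without extra libraries):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A value near zero between age and self-reported stress is what licenses talking about the two effects separately.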

The relationship between age and score on the test is very interesting. Overall, there is a small correlation with time taken to see the figures tending to increase as the respondents’ age increases:

The surprise comes when we look in more detail at the best, central and worst scores within each age band. The best performing third take the same time whatever their age. If anything, the best performers actually get a bit better at it as they age (although there are few data points in the older groups, and we might imagine that senior citizens who are reading blogs and doing cognitive tests are the kind of people who keep themselves alert):

Age makes no difference at all to the performance of the centrally performing group in each age range (there are fewer data points because the first remainder after dividing each age band into three is assigned to the “best”, the second remainder is assigned to the “worst”, and so the “central” group ends up the smallest):
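The banding convention described above can be made precise. This sketch reproduces the remainder-assignment rule as I read it from the text: after dividing a band by three, the first leftover score goes to the best group and the second to the worst, so the central group is never the largest.

```python
def split_thirds(scores_ms):
    """Split one band's scores (milliseconds, lower is better) into
    best / central / worst groups."""
    ranked = sorted(scores_ms)          # fastest (best) first
    n, r = divmod(len(ranked), 3)
    best_n = n + (1 if r >= 1 else 0)   # first remainder -> best
    worst_n = n + (1 if r == 2 else 0)  # second remainder -> worst
    best = ranked[:best_n]
    worst = ranked[len(ranked) - worst_n:] if worst_n else []
    central = ranked[best_n:len(ranked) - worst_n]
    return best, central, worst
```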

Now compare with the worst performing third. The worst performers show a clear increase in time taken to see the figures as they age, which is not found in the best and central performers. This is part of what I’m calling the “signature” - if existing tests of cognitive flexibility show a similar drop in performance with age amongst the worst performing third only, then that’s a good argument that the simple test tracks the more complex and less repeatable ones:

A similar effect occurs in the variation of response time with self-reported stress - and this is the meat of the experiment. An important difference: with age it is the worst performers who get much worse, while with stress it’s the best performers who perform less well:

The central third spread out at around the same point (-5 chill points, or mild stress), but they also spread out if they report a very unstressful environment (10 chill points or more):

Meanwhile the worst performing third in each stress band is all over the place. Perhaps some people are just really bad at this, and the amount of stress they are under doesn’t make any difference at all:

So this is the other half of the “signature”. The test shows a reduction in performance amongst the best performing third under stress. Expressed predictively: if we were to administer the test to a work group and look at the numbers for the best performing third, we could predict that they are under occupational stress if their scores were mostly over 2.5 seconds.

There are a couple of other interesting graphs. A few respondents reported scores before and after stress reducing exercises, and indeed the scores do seem to be improved - except for the person who did 8 stress reduction exercises, which perhaps is quite stressful in itself!

There was a surprise (at least to me) in the results for people who reported use of psychotropic drugs. Nothing makes much difference except the peak for alcohol users, which is shifted to the right compared both with everyone else and with the frequency distribution above. There’s also a bit of a rightward shift for marijuana users, but it isn’t as pronounced. Can it really be that alcohol use has a general effect on a person’s cognitive performance, irrespective of whether they could pass a breathalyzer test at that precise moment?

So the next thing to do is see if I can interest professional psychologists in the EFT as a repeatable and low cost alternative to existing tests of cognitive flexibility. If the EFT signature of the worst performing third getting worse with age, and the best performing third getting worse with stress is also found in existing tests, then it might be quite easy to move this kind of work into an industrial, field context in a quantifiable way, and then start looking at things like fault report counts and schedule reliability as EFT results change.

Neuroscience, All Postings

Linux Kernel Modules in Haskell - Ubuntu 9.04 Details

In a recent blog posting Tom at Beware the Jabberwolk introduced a fascinating new idea - writing Linux kernel modules in Haskell. He’s maintaining an up to date version of his work in the Haskell Wiki entry Kernel Modules.

I had a few problems setting up the build environment as Tom described it, and the main purpose of this posting is to describe exactly what I did to make it work on a fresh Ubuntu 9.04 default installation. If you start from scratch and do exactly as I describe here, you should get a working build too.

Before I get into the details, why am I interested in this? There are potential benefits and drawbacks to writing kernel modules in Haskell, and the benefits come down to improving the chances of correctness and reducing the number of times we have to go round the testing cycle.



There’s something about programming in Haskell which really needs to be experienced to be believed. Writing a Haskell program and then getting it to compile can be a difficult and frustrating process, because the language forces the author to think things through more carefully. Expressions describe relationships, not procedural sequences of stuff to do. Every if must have its else. But when the program finally compiles and the time comes to run it - that’s when things get really weird. It works. This can leave a procedural programmer feeling profoundly disoriented. We are used to spotting the first bug or two, correcting them and rebuilding. After a few times around the cycle we have some confidence in the bits that are working, and know which bits still need work. When it just works, what do we do?

Haskell front-loads the development process. It is not a magic bullet. Errors in the programmer’s original intent will still be there, and (for example) I’ve made some ugly displays by incorrectly composing graphical widgets. But the errors are hugely reduced, as are the number of trips around the edit/build/test cycle. When that cycle includes a reboot, as bug fixing in kernel mode code often does, front-loading could significantly reduce total development time and programmer frustration.

There are three cases where writing device drivers in Haskell could be very attractive:

Mission Critical Drivers. I once wrote some mission critical drivers for a major telco. There would only be a few thousand running instances, but they would be handling things like calls to the emergency services. If they failed, people trying to call the fire brigade could find themselves cut off and having to restart the process, in a situation that was already dangerous and distressing. There wouldn’t be the huge installed base which sometimes is available to discover errors, and even if there were it would hardly be a good idea to use the first few dozen unnecessary deaths to iron out the bugs. These drivers had to be right, working correctly in themselves and doing nothing to compromise the platform, from the first day they were installed. Anything that can improve assurance of correctness in situations like this needs to be looked at.

Specialist Drivers. Process control and test rigs in manufacturing industry and sensors in labs often involve specialist hardware built for the specific purpose, often by small engineering firms, which need drivers. In these cases the driver authors will often know the exact properties of the computers the drivers will run on and their intended use - which negates some of the problems discussed below. Shorter development times could reduce cost and improve the business case.

Development Support. During the development of novel hardware or user mode software which must interact with kernel modules, the first release is rarely the last. Each release often involves changes to the kernel modules, which have to be made by a different development team. It all takes time. The ability to produce new kernel modules quickly during the hardware development process would be useful in itself, even if the final release has a traditional kernel module written in C.



Haskell is a higher level language where we specify what we want rather than exactly how to do it, and let the compiler and runtime sort out a lot of the details for us. In this, it’s a bit like SQL! Unfortunately this leads to some obvious potential problems in kernel mode work, although some of them might not be as bad as they first seem.

Speed. There seems to be a general benchmark that J. Random Haskell Program runs about half as quickly as the same thing coded in C. One of the attractions of Haskell is that programs are naturally more amenable to fine-grained parallelization than procedural programs, so when we are interested in using all the cores in modern CPUs, T × 2 spread over 8 cores is a good deal. Unfortunately going parallel is not such a good idea in the kernel, even if the House based stuff Tom described could do it. On the other hand, techniques for making Haskell go quicker - such as avoiding laziness (see below) and using unboxed values and machine types (we’ll be moving lots of buffers of data around) - seem to be pertinent to device drivers. So there is an issue here which might be a real problem where screaming top performance is needed in the latest generation video driver, but it might not be as bad as it seems, and is perhaps acceptable in less performance critical cases.

Garbage Collection. Haskell does its own memory management and collects garbage. We’ve all seen Java applications suffer a hiatus because the garbage collector has kicked in, and I doubt Linus Torvalds would be amused by a suggestion that we should garbage collect in the kernel. And yet… I counted how many calls to kfree() occur in the Linux source. There are over 18,000 of them, and that doesn’t include cases hidden in #defines. We already do lots of memory management in the kernel. Garbage collection has a bad name because of the hanging Java VMs, where it kicks in infrequently and then has lots of work to do. Conventional C memory management is amortized over operations more evenly. I suspect that frequent calls to the Haskell runtime’s System.Mem.performGC() routine would ease this problem.
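The amortization idea is easy to demonstrate outside Haskell. This Python sketch (an analogy only - Python's collector is a different beast from GHC's) shows the shape of the approach: collect at short intervals during steady-state work so that each pause stays small, instead of letting garbage pile up for one long pause.

```python
import gc

def process(items, collect_every=100):
    """Process a stream of items, invoking the collector at short
    intervals so no single collection has much work to do."""
    results = []
    for i, item in enumerate(items):
        results.append([item] * 4)   # stand-in for real allocation
        if i % collect_every == 0:
            gc.collect()             # many small pauses, not one big one
    return len(results)
```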

Unexpected Strictness Problem. One of the properties that make Haskell powerful and concise is lazy evaluation. Haskell allows the use of things like infinite lists, which it only evaluates when it needs their values, and only as much as it needs. Unfortunately it’s rather easy to make a little change to a program which makes it need lots of values at the same time, so a memory efficient lazy program becomes a memory guzzling strict one, and the process size suddenly balloons. Not good at the best of times, a disaster in locked down kernel memory! I think the solution to this one is simply to avoid lazy evaluation - develop a coding style where we always write kernel modules in a strict style in the first place, which will likely improve efficiency anyway, since laziness has a performance overhead.
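For readers who don't write Haskell, Python generators give a rough feel for the distinction (an analogy only - the semantics differ): a generator is "lazy" and costs nothing until values are demanded, while forcing the whole structure at once is the moral equivalent of an accidental strictness change.

```python
import itertools

# Lazy: an "infinite list" of squares. Nothing is computed yet.
squares = (n * n for n in itertools.count(1))

# Demand only what we need, as much as we need.
first_five = list(itertools.islice(squares, 5))

# Strict: building the whole list at once is what balloons memory -
# ruinous if that memory is locked-down kernel memory.
# all_squares = [n * n for n in range(1, 10 ** 9)]  # don't!
```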

The Geek Honeypot. Haskell is fascinating, in its potential and intellectual challenges. Even the idea of writing kernel modules in it doesn’t seem so crazy. Remember that the language is a honeypot for the geekish mind, and retain a healthy degree of skepticism!


What About F#?

All this talk of writing Linux kernel modules in Haskell raises an obvious question: Windows drivers are harder than Linux ones, and F# - a functional language - is one of the heads of the .NET hydra. Might there be benefits in writing Windows drivers in F#? I fear the answer is no. The whole point about Haskell is that it is a pure functional language. This is what gives it the added rigour, and also produces the intellectual challenge of using it. From what I’ve seen (and I’m not an expert), F# is not a pure functional language. It allows us to work procedurally and use functional features when it’s convenient to do so, like Python or Perl 6. This gives power with a much easier learning curve, but takes away exactly the property that makes Haskell so attractive for kernel mode code. Probably a better approach would be to try the idea out in Linux, and if it proves to be good, recreate the same environment in the Windows world.


How to Perform Tom’s Build on a Fresh Ubuntu 9.04 Installation

Download and do a default installation of Ubuntu 9.04. The Update Manager will offer quite a lot of updates now that 9.04 is getting a bit old, some of them security ones, but when I was testing these notes I closed it without installing any to ensure a known state.

I’m using VirtualBox for these experiments. Virtual machines are wonderful for getting installation and configuration stuff right, because it’s so easy to copy the hard disk and backtrack to unambiguously known points. I needed to install the kernel headers and build essentials really early so I could get the VirtualBox tools working, so this is the first command to give:

$ sudo apt-get install build-essential linux-headers-generic

Next there’s a bunch of packages to load. Do this using the Synaptic Package Manager - from the menu bar:

System/Administration/Synaptic Package Manager

Then use the Quick search and select from the packages offered - always accept any dependencies:

Quick search Select
autoreconf autoconf2.13
ghc ghc6
alex alex
happy happy

Then apply the changes. Once they have been applied, quit the Package Manager so that the next command will work.

You will need gcc 4.4.1. To get this, give the command (it’s all one line):

$ echo "deb http://us.archive.ubuntu.com/ubuntu karmic main restricted" | sudo tee -a /etc/apt/sources.list && sudo aptitude update && sudo aptitude install gcc

It’s necessary to make a global change to the system headers to prevent calls from the standard IO library to stack protection routines which are not available to kernel mode code:

$ sudo vi /usr/include/features.h

In the section with many #undef lines, add the line:


Next download the tarballs and other files you will need. In Tom’s instructions this is done at the appropriate points in the build, but I found it easier to grab everything once and script the build process as much as possible. I’ve also slightly changed Tom’s patch files to remove common allocations and profiling calls which were preventing the built module loading on my Ubuntu 9.04 installation:

$ mkdir Tarballs
$ cd Tarballs
$ wget http://web.cecs.pdx.edu/~kennyg/house/House-0.8.93.tar.bz2
$ wget http://www.haskell.org/ghc/dist/6.8.2/ghc-6.8.2-src.tar.bz2
$ wget http://www.haskell.org/ghc/dist/6.8.2/ghc-6.8.2-src-extralibs.tar.bz2
$ wget ftp://ftp.gnu.org/gnu/gmp/gmp-4.3.1.tar.bz2
$ wget https://projects.cecs.pdx.edu/~dubuisst/hello.c
$ wget https://projects.cecs.pdx.edu/~dubuisst/hsHello.hs
$ wget http://the-programmers-stone.com/wp-content/uploads/2009/10/hghc.patch
$ wget http://the-programmers-stone.com/wp-content/uploads/2009/10/support.patch
$ wget http://the-programmers-stone.com/wp-content/uploads/2009/10/hlkm-build
$ cd ..

Now you should have a directory Tarballs below the one you’re in (probably your $HOME), and be ready to perform the build. There are a few variations from Tom’s instructions - for example there are changes to more than one Makefile in the gmp subtree - but they are scripted, including all but one of the manual edits. You will need to edit the script and change $HOUSE_DIR and $WDIR to point to your $HOME and not my /home/alan unless you also happen to have a most excellent name :-)

$ vi Tarballs/hlkm-build
$ chmod +x Tarballs/hlkm-build
$ Tarballs/hlkm-build

The whole build takes about half an hour on a 2GHz machine. There is one point when the script drops you into vi, and just as Tom says, you need to add the line:

startupHaskell(0, NULL, NULL);

before the call to rts_lock() in the hello() function. Then, after the call to rts_unlock() in the goodbye() function, add the line:


Write the file back and allow the script to complete. You should then be able to use one xterm to:

$ tail -f /var/log/messages

and in another xterm:

$ cd House-0.8.93/Wdir
$ sudo insmod module.ko

and see the messages from the kernel module appear in the log messages! Next step - make it do something more interesting!

All Postings, Programming

Dirty Little Secrets - Response to Grady Booch

Skynet 5 has been fully operational for less than a month, yet already one of our finest minds is chained up in a bunker somewhere, and has been replaced by one of those Terminator skin job thingies.

It would be hard to overstate the contribution of Grady Booch to the young discipline of software engineering. From his own practical experience, his ever-questing mind has observed and crystallized deep wisdom into a collection of writings and products which have benefitted us all. These include the object oriented bits of the Unified Modelling Language (UML), the Rational Rose round-trip tool and his masterwork, Object-Oriented Analysis and Design with Applications. In his definitive book The C++ Programming Language, Bjarne Stroustrup says Booch is the only person worth reading on object oriented design.

So I was astonished to see an interview, Software’s Dirty Little Secret on the Scientific American website. The interview seems wrong-headed in so many ways, yet so typical of the usual unhelpful dross that we see (way below the standard of normal Booch), that I’ve been thinking about it for most of the last few days.

The essence of the interview is hardly novel. The Booch 1000 says that unlike other disciplines, software engineering is a mess because there are no rules for its practitioners to follow. If there was a rulebook, software would not be a mess. That this situation exists is a dirty little secret kept by the software engineers. Oh the shame of it all!

Yes, We Have No Bananas

First off, the claim that:

In other disciplines, engineering in particular, there exist treatises on architecture. This is not the current case in software, which has evolved organically over only the past few decades

is absurd. When I Googled “software engineering” I got over 54 million hits! We certainly aren’t wanting for rules, proceduralisms, and methodologies. Every one of them has been touted as the solution, and still it seems some people can’t understand that there is No Silver Bullet.

Perhaps the Booch 1000 is a member of the “this time for sure!” school of thought.

No-One Knows Everything

We don’t actually do so badly compared to other disciplines. With thousands of years of prior art behind them, your local builder can construct staircases with good repeatability. We can do the same thing with graphical user interfaces. After all, when Apple implemented the iPhone there wasn’t much doubt that the GUI could be made to work! The builders can do roofing and we can do database engines. In fact, considering some of the debacles I’ve witnessed I’d say we’re rather better at doing database engines than builders do roofing. Builders do commodity guttering, and we do on-the-fly data compression the same way. The fatuous belief that other disciplines are better at this kind of thing comes in part from the fact that builders have to spend time repeating jobs they have done many times before, while often we just link a library. If we typed in the code every time, we’d quickly see the true comparison. (There are some firms which exploit the “business world’s” limited understanding of this reality by typing in the same billing system over and over again, but they are one of the perversities of the field.)

Linking a library is like your local builder following the building regulations. But the building regs only go so far, covering small, localized issues. Outside the limited contexts where the rules can describe the situation exhaustively, the builders can easily get into trouble.

The London Millennium Bridge (image from the Wikipedia article) was a high visibility show-off project using a novel design. As soon as it was opened to the public it revealed a design flaw leading to unpleasant horizontal swaying. Significant modifications were required before people could cross in comfort.

We know about resonant effects. Military formations know they must break step when crossing bridges or they might shake them to bits. Everyone has seen the film of the Tacoma Narrows Bridge:

Beyond bridges, advanced work has been done swinging massive weights around to protect tall buildings from resonant effects during earthquakes. Even so, no-one anticipated the high visibility cock-up in London. As software engineers often do in our young discipline, the designers were trying something just a little new. There was no prior art to guide them, no simple procedures, scripts, building regs for them to follow. And in this case, they ran into trouble. As we often do. The proposition that if only we had a book of rules like the builders do, we’d be perfect like them, is thus shown to be doubly bogus.

Everyone Suffers Feature Interaction

But it doesn’t stop there. Even doing simple things that builders have been doing over and over again for thousands of years, and following the building regs at every point, builders can still be caught out by something we often see too - feature interaction.

These photos below show two reciprocal diagonal views across a pleasant, recently built housing estate that I know. There is a central green space, with houses on two sides:

The other two sides have a stone wall which defines the space and walls the green off from other houses, themselves built around their own greens:

As you can see, the developers went to a lot of effort to create a pleasant and livable space where children can play safely. The houses are well constructed to provide a good quality of life for all. The only problem is a strange acoustic effect. For some reason, the square is like a whispering gallery! The conversations of people walking near to the wall can be clearly heard on the other side of the square, as if they were standing right outside the window. So little energy is lost by whatever focussing effect is in play, that the voices can be heard through the double glazing! At first it’s a very un-nerving experience. It sounds like there’s someone standing there and talking in a muffled way such that their words can’t be discerned, but there’s no-one there! Each bit of the system is in full compliance with building regs, the overall design is a best effort, but this extraordinary interaction of the compliant features happened nevertheless.

Just like what happens when we try new things, this kind of thing cannot be anticipated in building works. In fact if I’d seen a photo of the wall and not known what actually happens, I’d have guessed that it would have anechoic properties:

This is actually an area where software engineers are making better progress - at least there is hope that at some point automated support for CSP theory will reduce our exposure to some kinds of feature interaction in ways that other disciplines cannot hope to achieve.

Then There’s the Cock-Ups Too…

I’ve still not finished my escalation of examples to refute the Booch 1000’s double error, claiming that if we had rules like other disciplines, we’d be predictable and repeatable just like them. The most common cases aren’t like the examples I gave above. Commonly we have rules, just like builders have rules, but at an organizational or regulatory level we choose to ignore them, or follow them in an incredibly stupid, counterproductive way!

The picture below is taken from Google Earth’s view of Stevenage, in south-east England:

There are five roads leading in and out of a roundabout (or traffic circle as amazed Americans call them). If you look closely you will see another network of smaller grey roads threaded adjacent to and flowing under the five wider ones. These are foot and cycle paths. They must have looked wonderful in the artificially constrained context of the pretty watercolours shown to the Planning Department of the local Council. Imagine the atmosphere of self-congratulation as those involved performed their administrative rituals and assured themselves of perfect compliance.

The trouble is, none of those foot and cycle paths are illuminated, and they run like culverts below the level of the road. At dusk they become a muggers’ paradise. Only people with a death wish would venture there. Worse, because of all the wonderful sunken paths, there are no footpaths running by the roadsides. Attempting to walk along the roads exposes people to multiple lanes of high speed traffic. So after dark (which happens by 16:00 hours in winter) there is no pedestrian access to huge areas of the town. It’s like living under a curfew, and as a result Stevenage is a cultural and social desert. And yet… we know all about constructing spaces where people can move about, spontaneously interact, and maintain self-sustaining communities. Spaces where businesses thrive, arts are practiced and a sense of ownership ensures that vandalism doesn’t occur.

The problem is not about having rules. The perpetrators of Stevenage have lots of rules, yet they still accomplished an entirely avoidable cock-up, of the kind which we as software engineers should be interested in, because this is the kind of preventable cock-up which we also commit with monotonous regularity. Even if we allow for the greater call for novelty in software engineering and the amount of feature interaction we are vulnerable to because of the Kolmogorov complexity of our requirements, we still see far too many of these preventable messes.

So unlike the same old, same old proceduralists I deny that we are any worse than other disciplines at achieving repeatable and reliable performance. We have nothing to be ashamed of in this respect, although we are as vulnerable as the others to endemic problems within the culture, and the great promise of our work exposes those problems more.

Don’t Forget - Success is Possible

Architects do sometimes achieve some spectacular successes. Norman Foster’s Millau Viaduct (image from Wikipedia article) is a transcendent blend of form and function that renders further comment superfluous:

I. M. Pei’s Louvre Pyramids (images again from Wikipedia) are an equally masterful blend of an aesthetic which perfectly mirrors the light precision of their context while celebrating the full richness of both eras:

While at the same time magnificently performing the task of filling a vast public area, unimagined when the original Louvre was constructed, with light:

If you haven’t done so yet, go and see it. It’s worth it. As you gaze on these wonders, remember that the vehicular traffic borne by the one, and the mass of wondering humans hosted by the other, must be measured in the megatonnes. These are very hard working systems.

Getting Proactive

Rather than hanging our heads in shame before the other self-deluded cock-up producers in our culture, and humbly accepting whatever fatuous behavioural prescriptions they hand down to us, we should be taking the lead in sorting out our common problem.

Human culture is a complex multi-layered phenomenon which is capable of great things, but is subject to intermittent faults leading to ridiculous failure. It has something wrong with it and needs debugging. I’m not the only person to notice this - the physicist David Bohm said,

… thought is a system. That system not only includes thought and feelings, but it includes the state of the body; it includes the whole of society - as thought is passing back and forth between people in a process by which thought evolved from ancient times.

He discusses his proposed plan to look for the error in his book, Thought as a System.

No rules could have led to the fabulous accomplishments I discussed above, nor yet differentiated between them and the monumental cock-up in Stevenage. Fortunately architecture benefits from its history. Students have to learn plenty of materials science and mechanics, but it is still taught as an Arts subject. The aesthetical judgement - the ability to juxtapose the elements of a problem and composition and see beauty or its absence - of the students is developed. This aesthetical sensibility is shared by great programmers (who use words like elegance to describe it). So I argue that progress in this area will most likely be made by looking at the cognitive state of the practitioners, not the shelfware they execute. It’s about how we perceive and understand the world, and we exercise this kind of paradigmatic flexibility every time we coax a messy set of business requirements into a robust and implementable system design.

We are the systems thinkers, the debuggers, the stars of applied cognitive flexibility. It’s high time we stepped out of our specialization, became fully eclectic in our considerations, and made an effort to sort this stuff out in its own terms - not by just regurgitating the failed excuses already found in the culture.

In the Introduction I propose a picture where psychosocial stress - raised by petty nagging about trivialities - acts to shut down our aesthetical sensibility by pushing us into a kind of focussed attention where working memory is reduced and we lose the juxtapositional ability. We know this happens in lab studies of stress and working memory. As a result we become less able to respond to the circumstances before us and more reliant on heuristics - rules. Beyond that, we know that stress has a similar effect on brain chemistry as addictive drugs, and battery farmed animals subjected to stress become addicted to it. So groups of people can become addicted to nagging each other about petty rules, such that they can only think in terms of rules, and their resulting reduced performance increases the stress. Rule obsession is the problem, not the solution, and it leads to a spiral of decline which ultimately destroys nations and firms.

So we keep on hearing the same old rubbish, always presented as the next big idea, and never mind that it never works. To provide an illustration of this, I’m going to quote a posting to The Joel on Software Discussion Group. The posting describes how things get when groups have free rein to indulge their stress addiction. My proposal is that the behaviours described are attractive not because they are pleasant for the people involved, but because they are stressful. So everyone knows how dreadful it is, but no-one is willing to fix it. Remember that you pay for this twice - once when you pay the victims to behave like this every day, and again when your society suffers the opportunity cost of the kind of maladministration that results. The cognitive limitations have become so widespread that this kind of attitude has become paradigmatic, so that it’s now hard to even question why it happens:

- The head of IT has been in government for 27+ years. She is the one making purchasing decisions and setting strategic direction. She does not own a computer or have an external email address. She does not buy on-line, and she has the web monitor set so tight you can forget about using the Internet. No IM. No webmail. Then to ensure you cannot possibly reach the outside, no non-government equipment permitted in the building without a specific exception. This will be your organizational leader as the rule - not the exception.

- Change is glacial. Two weeks I did nothing because they could not allocate a machine for me to access the network, until I had a badge. To get the machine, a form needed to be signed by my boss, his boss, her boss, and his boss. My boss walked it around, and was told that was inappropriate, and it still took 4 hours. Then it went to the help desk who took 9 business days to process and deliver the equipment. Deliver in this case meant coming to my desk and providing a user id and password to a machine sitting there. To install software requires you to have an exception form filed. Another round of 4 signatures in my department, one from security, one from the helpdesk and one from a person who no one can explain except they need to sign the exception form - six days. Complain or bother anyone about a request and it will disappear. Petty is an understatement.

- It is true that no one gets fired. But worse, making decisions is career limiting. The solution is to ensure you never do anything of risk. What you want is to be tied into a project you can claim some responsibility with, if it goes well, and disavow any relationship if it fails.

- No _one_ makes any decisions because there is safety in numbers - very large numbers. All decisions are by committee and expect it to take weeks. They spent 12 weeks, with a minimum of six people in nearly 20 meetings discussing a database key. One key. It is an extreme example but that it can happen says it all.

- If you take any type of leadership position and do anything that employees do not like - they submit a grievance. Why? Because while one is pending, they cannot do anything to you, including request work, or deny automatic or scheduled promotions in grade. A 25 year developer here has had a grievance on her various leaders, continuously for over 12 years. She is proud of it. “They don’t tell me what to do!”

- Write off leaving government. You can always leave right? Wrong. Would you hire someone from an environment that fosters the behavior above? With very few exceptions, spend more than a year or two as a “gov-e” and you can kiss the private sector good bye. Who would want this behavior and if it took you more than a year to figure out you should quit, that says volumes.

We often see people falling into this swamp. I think it happens to politicians, who often seem sincere when they enter office - and those close to them insist that they are. It doesn’t take long though, before their daily exposure to bureaucrats fixing chronic internally generated psychosocial stress gets to them and they become trapped in the same prescriptive, proceduralist, unimaginative state as the zombie they ousted. And so it goes on.

To conclude this roundup of thoughts triggered by the Grady Booch interview, I was struck by his talking about a dirty little secret. He sounds like a self-hating software engineer!

I have sometimes wondered if there isn’t actually a dirty little secret that is kept by members of groups suffering chronic stress addiction. I suggest that people in this state get a hit off stressful situations and behaviours. They are cognitively impaired and the calibre of debate within the group is poor as a result, but perhaps this provides opportunities to raise stress in a particularly unpleasant way. When a person acts in a manner contradictory to their conscience, this itself is stressful. Might it be that such people are tempted to knowingly engage in wrongful, stupid or damaging behaviours because they know they are performing wrongdoing? The story Pensioner, 81, ordered to remove flat cap as he has a quiet pint - because it’s a security risk is perhaps a mild example of such willful and perverse misinterpretation of guidelines. A kind of willful misinterpretation which is becoming all too common at the moment, as stress addiction becomes deeper and more widespread, and more once bright minds sink into the swamp.

All Postings, Stress Addiction, Programming

The Benefits of Low Level Lighting

Few people choose to fill their homes with bright fluorescent lighting. Perhaps in the kitchen or utility room where bright lighting is helpful for the kinds of things they do in those rooms, but not in the living room, study or bedroom, where they wish to feel comfortable, relaxed and secure.

On the pages Neuroscience and Implications for Software Engineers I describe how feeling comfortable, relaxed and secure - rather than stressed and anxious - enables us to use the faculties we need to be good at programming.

So logically, we should light programming shops in the same way that we light our studies at home. I believe that commercial organizations drift into doing exactly the wrong thing because raising stress and anxiety has an insidious and previously unsuspected allure. The neurochemical response to stress - raised dopamine and norepinephrine - is similar to the response to addictive drugs!

I’ve recently been working in a shop that’s good in several ways, and one of their good practices is low level lighting in several areas where programmers have their desks. Some of the old-timers have told me that the lighting is an accident of history, but I don’t care how it got that way. It’s still worth sharing!

It’s a bit tricky sharing a lighting level - darned modern cameras :-) So in the two photos below, notice how strong the illumination seems where the lighting shines directly on the walls. And remember I took them to make a point, not as works of art. So they’re “warts and all”, and therefore I’ve taken care to avoid catching anything identifiable in them. The areas used for greeting customers are much smarter:

Compare with these two shots of the kitchen area, where natural daylight can be seen coming through the windows:

The low level lighting brings several interacting benefits. The sense of cosiness, similar to my living room at home, is certainly there. Beyond that, there’s an improved sense of privacy. This is more of a perceptual illusion than a real effect, because we are all as visible to each other as in any open plan environment. But it feels more private, and that’s what matters for reducing stress and anxiety and so enabling juxtapositional thinking.

There’s also an effect similar to being in a library. People respect the sense of quiet and cosiness. They keep their voices down, and tend to go elsewhere to use their phones. There’s a nice kitchen they can go in for a chat. Not all noises are the same, and we know that noise containing intelligible data (even if it’s not intelligent) is the worst kind for breaking our concentration and preventing us seeing complex pictures. So if we must suffer open plan, anything which encourages people to keep their voices down is good. The office depicted in the photos above has a babble annoyance level which is nearly as good as a two person office. It’s remarkable. I would never have guessed this effect would be so pronounced if I hadn’t had the chance to observe it happening.

Then there’s eye strain. We all spend our days looking at screens, which emit their own light. They don’t need external illumination like pieces of paper do. In fact, the more environmental light there is, the harder we find it to see the screens. The blatant extreme case of that is glare when the angles are wrong, but flicker has a more serious effect on its victims, wearing them down over time. The flicker from the kind of cheapskate tubes found in many commercial shops is bad enough, but when it beats with displays which are refreshing at nearly the same frequency it can be (almost) visible to most people. There are plenty of people who think that computer displays emit noxious rays which give them headaches when actually it’s that beat frequency flickering away at the edge of their awareness that’s doing them in. Simply explain what’s happening to them and they stop being so annoyed by it - try it! Obviously, the less fluorescent light there is in an office, the less these effects will occur.

The whole flicker thing has another side to it. Many of the best programmers are the sort of people who are classified as being on the “autistic spectrum”. Now I’m very suspicious about such classifications. As I explain on the page Other Applications, I believe we can understand the name-calling as unconscious resentment by those trapped in the psychosocial stress/dopamine economy towards those who don’t participate and exchange fixes. The greater sensitivity of the non-participants can be understood as normal sensitivity, not blunted by excess dopamine constantly sloshing around their brains. But whatever the cause, many of the best programmers can see the 50 or 60 Hz flicker of fluorescent lighting without any monitors beating away nearly in sync, and it’s extremely annoying to them. Reducing it improves their ability to concentrate. Simple.

The final benefit I’ll identify is economic. Many organizations operate acres of floor space, all lit with tubes. Take half of the tubes away and the lighting bill is cut in half.

I’ve given several good reasons for reducing the light levels of programming shops. Some readers may notice there’s an argument I’ve not made. There are plenty of anecdotal accounts of full spectrum lighting improving the performance of schoolchildren. The trouble with this stuff is that it is very anecdotal - the more I look for substantial original research on this, the vaguer and more associated with interested parties it becomes. The remaining research doesn’t do a good job of excluding multiple sources of variance, or “confounds” as they’re called in the jargon. The report Full-spectrum fluorescent lighting: a review of its effects on physiology and health by McColl and Veitch, funded by the National Research Council of Canada, does a good job of exploring the lack of solid evidence in this area. So although you might choose to experiment with full spectrum lighting, bear in mind that it is more expensive, and the evidence for its benefits is not yet strong.

Experimenting with reduced light levels is a different matter. It has a negative cost. So try it. Persuade your boss to take away some of the tubes (don’t make things too dark - make sure illumination is enough to be safe for example), and see if your working environment becomes cosier, more private, quieter and more relaxing. See if your ability to concentrate improves, and over a release cycle see if that has any effect on the accuracy of your estimates, the necessary sufficiency of your code, and your bug reports.

Neuroscience, All Postings, ADHD, Stress Addiction, Programming

Process Degradation and Ontological Drift

In the Introduction I describe how the cognitive effects of stress - particularly as studied in neuroscience labs and military contexts - degrade the very faculties we need to be good at programming. I also propose that since the neurochemical effects of stress - particularly the release of dopamine and norepinephrine - are so similar to the effects of addictive drugs, we might expect whole groups to become addicted to maintaining a locally comfortable level of psychosocial stress. Because the members of the group don’t realize they are stuck in an addictive loop and are experiencing both cognitive impairment and gratification of their addictive need, they have little chance of gaining insight and instead will tend to rationalize their stress-raising behaviours as inevitable or desirable, but won’t be able to say why.

No part of this is terribly radical, but taken together it certainly is. For example, it implies that to really understand what is going on in a workaday engineering situation, we have to look outside the context and think about the whole culture in an unusual way. Like the old puzzle about connecting 9 dots with 4 connected and unbroken lines:

We have to “think outside the box” - but we have to really expand our contextual awareness and not just say we are doing so. The trouble is, we need access to the very faculties that stress takes away to be able to think juxtapositionally and tell the difference!

So in this post I’m going to look at a real programming task, and talk about how people often think about such tasks. I’ll look at the opportunities to do a good and interesting job that we miss all too often, how our perception of work can change to become a mockery of itself and how process can amplify this change. I’ll also discuss how this change distorts our appreciation of our tools, and touch (just a little) on the profound social implications outside engineering work.

The Task

I recently had to apply a bug fix across a large codebase (around 1.1 MLOC in the relevant part) implementing some complex business logic for several large telecoms customers. Some colleagues had successfully tracked down a problem which cropped up rarely, during an even rarer failover event in an underlying Sybase database. (A failover happens when one copy of a database becomes unavailable for any reason, so Sybase transparently switches to a hot backup.) Like so many others, the problem was rooted in the object-relational impedance mismatch. To summarize, the object and relational models don’t play well together. In this case we had some classes to handle the database operations and nicely encapsulate their state. The biggest lump of state was the result sets that some operations could return. Errors were handled by C++ exceptions, but when an exception was thrown, it could leave a result set waiting in the database connection encapsulated by the classes. It was the waiting result sets which blocked the failovers and prevented them from performing as expected. The solution was to make sure that every operation - and every catch - purged any result set that might be waiting. Once understood, the fix was easily stated. The problem simply came from the number of cases which had to be considered.

Fortunately the code was well-layered. Database operations were rigorously contained within a specific set of classes, clearly separated from higher level business logic. Even better, they were all wrapped in SQLAPI++, which provides a (somewhat) platform independent collection of classes to wrap database operations. So every possible cause of problems could be found by looking for uses of the SQLAPI++ method SACommand::Execute(). So as we first understood the fix, I had to make sure that each such call was embedded in a pattern:


try
{
         // Do stuff
}
catch(SAException &e)
{
         // Do stuff
}

Of course, it was important to get it right in every case. The edit would involve checking many cases, each with their own variations, spread over (I later determined) 133 different source files. It had to be done without introducing more errors, including source code corruption caused by simple finger trouble while doing such a large task!

Perceiving The Task

Just as I was getting interested in the challenge of this source manipulation problem, someone made a comment recognizing that it was boring. For a moment I was amazed - it really hadn’t occurred to me to see it as a boring problem rather than a difficult one! I realized that the atypical perception was on my side, and thinking that observation through led to some interesting realizations.

Firstly, I’ve been doing this kind of thing for about 30 years now. I know that there’s no magic fix. No matter what technologies we use to help us work at convenient levels of abstraction and semi-automate the tasks, if we are going to own and maintain large and sophisticated programs, we will always come up against this kind of problem. Dealing with them effectively is an intellectual challenge for grownups.

Secondly, I’ve never been seduced by the rationalization of laziness that makes people speak of “mere code”. This is the idea that highly intelligent people are supposed to occupy themselves with grander things, and find “mere code” to be beneath them. I’ve always been suspicious of this. Code is the product. It doesn’t matter if your code undergoes mechanical translation, perhaps from some IDL or something to C, thence to assembly language and finally to opcodes. In that case the IDL is your product. Dismissing it as “mere code” and wandering off to do something else is like a cook who sneers at “mere food” and prefers to criticize the decor of the kitchen instead. It’s thoroughly misguided. Coping with large codebases requires skill, experience and a challenging mixture of rigour and ability to hold temporary partial hypotheses - and keep in mind which is which! It’s difficult, learned on the job, and cannot be performed in knee-jerk fashion after skimming “Codebases for Dummies” in a lunchbreak. The simple fact is, the presence of at least a proportion of people who have mastered these skills is necessary to the success of shops doing real, grownup work.

Thirdly, I’m well aware of the danger of boredom, and recognize it as one of the things which must be managed in order to do a good job - not just accepted as an unavoidable misery of life, leading to certain and inevitable failure. Effective attention can be maintained in two regimes, with a danger area in the middle:

It’s easy to maintain careful attention on an interesting task, and it’s also easy to perform a strictly mechanical task (like lawn mowing or ironing) particularly with an mp3 player available. It’s the tedious and repetitive tasks which require some attention, such that we cannot disengage, yet neither is there anything to keep us engaged, which are the problem. So faced with a task which initially looks like it sits up on the peak of the boredom curve, I know that the way to do it well is to decompose it into two tasks which are on the ends. That means investigating and thinking about it until I can structure the remaining parts in a very mechanical way. Then I can pick some matching music and get a rhythm going with my fingers!

Performing The Task

It was pretty easy to find all instances of the string Execute in the source tree and examine them. I quickly found there were more instances of Execute than just calls to the method of class SACommand. On the other hand, my exhaustive examination of all Executes quickly showed that all the ones that mattered declared an SACommand in the same file. (I admit I’m simplifying here to keep the posting on topic - I have a point to make.)

It was then very easy to constrain my scans to the relevant files, by using a little shell one-liner:

$ vi `find . -print | while read fname
> do
> if test -f "$fname"
> then
> if grep SACommand "$fname" > /dev/null && grep Execute "$fname" > /dev/null
> then
> echo "$fname"
> fi
> fi
> done`

If you don’t know shell, that says:

Edit a list of files comprising:
    List all files below current working directory. For each file:
        If it’s a regular file (not a directory or anything else):
            If the file contains SACommand and Execute:
                Include it.

The thing is, if we are willing to use little shell one-liners like that, we can very easily customize searches of any complexity we need, as we need them. That’s why many of us stick with the “old fashioned” command line tools, even in this era of eye-candy encrusted Integrated Development Environments. Those tools can do some very clever things with a click of the mouse (and tools like Eclipse are very clever). But they are limited to the functionality imagined by their designers, rather than what the user needs at any moment. The teaching that A Jedi needs only his light saber contains a deep truth, even though it has come in for some ridicule in recent times:

(The two twerps in that video sound like they come from near my own home town. It’s not Nottingham, but maybe Mansfield. English English accents really are that fine-grained.)

You probably see where I’m going with this: I reckon surrendering to the inevitability of boredom (and so making errors) is part of a pattern which includes becoming dependent on (and so constrained by) the functionality available in prerolled tools. It’s about lowered aspirations, increased passivity and reactiveness. These perceptual and behavioural shifts are in keeping with the effects of stress, as I discuss on the page Neuroscience.

While I was writing this posting, I had an idea and tried a small experiment. I used the same one-liner, on the same codebase, but on Apple’s Mac OS X instead of Sun’s Solaris. It gagged, complaining:

VIM - Vi IMproved 6.2 (2003 Jun 1, compiled Mar  1 2006 20:29:11)
Too many edit arguments: "-"
More info with: "vim -h"

I guess that’s why Sun can still sell stuff to grownups!

By the time I’d finished my exhaustive scan I’d divided all use cases into simple cases as above, and a couple of variant patterns where it was appropriate to put the result set clearing snippet elsewhere in an inheritance hierarchy containing the Execute. I got my corrected snippet ready on the clipboard (I was using a Windows XP box running PuTTY as a terminal), and I thought I was ready…


I quickly discovered a problem. There were some formatting problems which had crept into the codebase over time. The indentation wasn’t very rigorous, and even more annoying, there was a mixture of spaces and tabs used to create the indentation. This didn’t matter when editing a few lines, but the unpredictable editor behaviour when scaling to many lines pushed the job back into the error-prone bit of the boredom curve. I had to delay my edit and prepare the source files first.

That was a two stage process. First I fed all the relevant files through bcpp, the classic C++ beautifier. (Another little shell one-liner accomplished that.) Then I went through all the files to neaten them up. The beautifier is good, but it can’t make aesthetical judgements about stuff that should be lined up to improve readability - like this:

buffer << "Name: "    << name
       << " Quest: "  << quest
       << " Colour: " << colour;

Once those problems were out of the way the noise level was lowered, and I was able to crystallize my awareness of another problem that my subconscious had been screeching about without my knowing what it was.

If the Execute() / isResultSet() / FetchNext() sequence threw an SAException, I still had to purge the result set. But doing that involved further calls to isResultSet() and FetchNext(), which could themselves throw SAExceptions - and probably would if any part of the database connection was confused! It was important to wrap the purge inside the catch{} in its own try{} block and catch{} these SAExceptions too, or the uncaught exception could cause the server to crash ungracefully!

This demonstrates the importance of reducing the noise level to the point where we can hear the voice of our own experience - and why Paul Graham says most commercial programmers never reach the state of awareness where they can do good work.

So far the task had taken about 5 working days - and I hadn’t made a single keystroke that actually implemented the edit I needed to perform. But then I did the edit, as an uninterrupted mechanical process which took less than a day - and I got it right.

The Code Review

How do I know I got it right? After all, the problem is usually not so much being correct, as it is being able to prove we are correct. In this case the biggest potential source of error was the manual straightening up process after the automated beautification. There were plenty of opportunities for mis-edits - unintended modifications - in there. Usually the CVS diff command will find the edits, but in this case the beautification made the diff too big to handle.

Fortunately I’d pulled a second copy of the source tree before I started. (CVS diff compares with the version the local copies may have diverged from, but pulling a separate copy of the right version of everything would be tricky.) The CVS diff could at least give me a list of the files I’d touched, if I used grep to find all the lines that started with the string RCS, and edited them to produce a simple list of files.

I reckoned it would be much easier if I could consolidate all the files into a single pair of directories - before and after. That would be possible if the file names were all different (ignoring the directory names). I could check that easily by using each line of the file list as an argument of basename and doing a unique sort of the list:

$ cat CvsDiffFileList.txt | while read fname
> do
> basename "$fname"
> done > BaseNames.txt
$ wc -l BaseNames.txt
$ sort -u BaseNames.txt > UniqueNames.txt
$ wc -l UniqueNames.txt

Oh dear. There was a name collision. Let’s find out about it:

$ diff BaseNames.txt UniqueNames.txt
< main.cpp
$ grep main CvsDiffFileList.txt

OK. The name collisions were just mains for running test stubs. I didn’t care about them, so I deleted them from the file list, and knew I could copy the before and after versions of all the touched files into a pair of directories. So I did that with another little one-liner:

$ mkdir Before
$ mkdir After
$ cat CvsDiffFileList.txt | while read fname
> do
> cp src/"$fname" Before
> cp src/"$fname" After
> done

Then I removed all the tabs and spaces from both sets of copies. That removed any differences caused by changing tabs to spaces, the indentation, or differences in if(x) compared to if ( x ) style, so the remaining differences would be my substantial edits or finger trouble:

$ cd StrippedBefore
$ for fname in *
> do
> cat "$fname" | tr -d ' \t' > fred
> mv fred "$fname"
> done

The result was 15,000 lines of diff - much better than the 100,000 I started with, and mainly consisting of blank lines and easily recognizable patterns of intentional edits. I then looked through the diff to make sure it was plausible, checking for unmatched business-logic-type lines, before asking two colleagues to do likewise. A much simpler process also enabled us to review the intentional changes by looking for the Executes, like I had done originally.

Process Degradation

There’s a pattern to what I did, and we’ve all learned about it ad nauseam. It’s W. Edwards Deming’s Plan-Do-Check-Act Cycle. However, there’s a major difference between what I just described and the way “process” is normally represented. The difference is ownership.

I believe in the Deming Cycle. It works. Through reflection we become aware of what we are trying to achieve. We ask how we can do this as reliably and efficiently as possible. We form a plan, and when we do what the plan indicates we ask if it’s working or not, and correct as necessary. We find a way to check our work as an integral part of the cycle (like I pulled that second copy before I started), and when we’ve finished we act on what we’ve learned. In this case I realized the whole process formed a wonderful model of something deeper that I wanted to discuss.

Unfortunately the Deming Cycle is rarely applied as I did in the tale above. There, I was responsible for each step, I corrected my process as necessary, and I even discovered that I had to recurse, performing a sub-process within my outer Do stage.

Imagine how such a process would degrade under the cognitive distortions due to stress, described on the page Implications for Software Engineers. People under stress:

  • Screen out peripheral stimuli.
  • Make decisions based on heuristics.
  • Suffer from performance rigidity or narrow thinking.
  • Lose their ability to analyze complicated situations and manipulate information.

I was analyzing (reasonably) complicated situations, and using my shell like a kind of mental exoskeleton to manipulate information all the time! If I couldn’t do that, how could I know what to do? I’d have to become a robot myself, and execute a program that I’d got from somewhere else. If that’s not making decisions based on heuristics I don’t know what is! Of course, the motions I’d be going through wouldn’t be very effective, but I wouldn’t be able to compare my performance with a gold standard that I might imagine for myself based on the question, “How do I know?”, because of my narrow thinking. I wouldn’t even notice my mistakes, because I’d be screening out peripheral stimuli.

I’d be lost. Instead of being able to rely on my own capacity to operate a Deming Cycle and move forward with confidence, I’d be “piggy in the middle” - responsible for my failures, yet ill-equipped to avoid them. It would all be very stressful. I would obviously be keen to embrace a culture which says that results don’t matter, only compliance. Then when the failure occurred, I could pass the buck. Of course, I would always have to be on the lookout for others trying to pass their bucks to me, and I would have to be clever in my evasiveness and avoidance of responsibility.

The processes I could use would not be ones I cooked up myself, always following the meta-process of the Deming Cycle. Instead they would be behavioural prescriptions collected into shelfware, and micropoliced in a petty and point-missing stressful fashion, as my non-compliances were smelled out. All the stress would lock me into a cognitively reduced state, and if I got hooked on the stress chemistry, the raised dopamine and norepinephrine, the whole business would start to develop a self-evident rightness. If I suffered this decline in the company of others, we’d develop language and a local mythology to celebrate our dysfunctional state!

All this, of course, is exactly what we see as Quality programs degrade into reactive pettiness and individuals lose their problem solving abilities. Deming’s big thing was devolving process improvement to the people nearest to the work, but in the degraded version process is handed down from the organization. We have an entire belief system predicated on the belief that:

The quality of a system is highly influenced by the quality of the process used to acquire, develop, and maintain it.

Rather than the skills, imagination and dedication of the people applying it!

So we must always be very careful of process. When applied with knowledge and understanding, we get the growth of awareness, efficiency and personal fulfillment that some practitioners of the Deming method report. But with a supervening cultural tendency to stress addiction always in play, it can easily degrade into the exact opposite of what Deming intended. A spiral of decline where individuals become progressively less capable and empowered while the process becomes more externalized and disconnected from the work at hand.

Ontological Drift

After a while, groups suffering from a spiral of psychosocial stress start to behave in very peculiar ways. Being unable to trust their own good senses to even select a useful script, but unable to proceed without one, they resort to grabbing any old script and following it, no matter how silly the result. The deeper they get into trouble, the worse their problems become.

This way of looking at things provides a simple explanation for some very odd things. Consider the hilarious BBC story of the man who was told to remove his shirt at Heathrow because it bore a cartoon image of a Transformer which has a stylized gun for an arm - Gun T-shirt ‘was a security risk’:

A man wearing a T-shirt depicting a cartoon character holding a gun was stopped from boarding a flight by the security at Heathrow’s Terminal 5.

Brad Jayakody, from Bayswater, central London, said he was “stumped” at the objection to his Transformers T-shirt.

Mr Jayakody said he had to change before boarding as security officers objected to the gun, held by the cartoon character.

Airport operator BAA said it was investigating the incident.

Mr Jayakody said the incident happened a few weeks ago, when he was challenged by an official during a pre-flight security check.

“He says, ‘we won’t be able to let you through because your T-shirt has got a gun on it’,” Mr Jayakody said.

“I was like, ‘What are you talking about?’.

“[The official’s] supervisor comes over and goes ’sorry we can’t let you through and you’ve a gun on your T-shirt’,” he said.

Mr Jayakody said he had to strip and change his T-shirt there before he was allowed to board his flight.

“I was just looking for someone with a bit of common sense,” he said.

That’s stress addiction induced ontological drift for you. Of course, it isn’t so funny when whole cultures go that way. “Mr. K, the committee will see you now…”

Neuroscience, All Postings, Stress Addiction, Programming

A First Haskell Experience

I’m looking at Haskell as a production language, because for the foreseeable future we can expect processors to get more cores rather than more GHz, and this means we have to be able to parallelize our programs much more easily than we’ve done up to now. It was the sight of a Teraflops the size and shape of a pizza that set me off, but since I’ve been playing, Intel has announced a 6-core Xeon. Great! I’ll be able to have 5 idle cores instead of 1!

I don’t see the practicalities of production programming as either equivalent or orthogonal to the computer science virtues of a language - instead they are “the next question”. I don’t even believe the production practicalities are necessarily inherent to the properties of a language, although obviously designers like Thompson, Ritchie and Stroustrup have achieved great things by being production programmers themselves, and bearing the practicalities in mind. I do think we tend to forget how much our habits and conventional practices bring order to chaos, at least until we have to move to a new language and have to rebuild that cultural strength-in-depth. That’s why I decided the thing to do is dive in and see what happens.

I’m a big fan of wxWidgets, a cross-platform GUI toolkit which is a pleasure to program, produces native look and feel on Windows, Linux and Mac OS, and has a liberal licence permitting use in commercial code while being completely free itself. So I was delighted to discover wxHaskell, which wraps wxWidgets for Haskell. This gave me the possibility of translating the EftAnalyzer.cpp program (for analyzing the EFT results) into Haskell. You can see the output of the program in the posts More EFT Results and This Is Your EFT On Drugs. The original EftAnalyzer.cpp isn’t anything special - it was really just a lash-up for doing one job, but it’s complex enough to make a non-trivial comparison.

This post records my first thoughts, and offers a C++ to Haskell translation which (hopefully) will be of practical help to others who want to get into Haskell.

Code and Data

The C++ and Haskell code, plus some test data are available. In case I enhance EftAnalyzer.cpp in the future, I copied it to EftExperiment.cpp before I started. The equivalent Haskell program is EftExperiment.hs. A suitable data file, exactly as I downloaded it from the WordPress MySql database behind this blog, is eft27jan2008.txt.

The Newbie’s Lament

As I described in post Haskell Needs A Four Calendar Cafe, there is a barrier to entering the Haskell world caused by a lack of practical examples of simple, working programs. By working programs I mean ones which do some basic error handling and demonstrate the basic programming steps in a practical way. I spent several days trying to open and read a file, and also had grave difficulty seeing how to step through an input file which described a single record using multiple text lines, before Cale Gibbard on the haskell-cafe mailing list gave me a wonderfully clear and concise couple of hints. Here are links to Cale’s postings describing how to open a file with error checking and merge records. I used both techniques described by Cale.
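Since this is exactly the kind of basic, working example I was complaining about the lack of, here is a condensed sketch of the two techniques. It is my own reconstruction, not Cale’s code verbatim, and the blank-line-separated record format is purely illustrative - the real EFT data file looks nothing like it:

```haskell
import Control.Exception (IOException, try)
import System.IO (hPutStrLn, stderr)

-- Group the input lines into records. Purely for illustration I assume
-- a format where records are separated by blank lines.
mergeRecords :: String -> [[String]]
mergeRecords = foldr step [[]] . lines
  where
    step "" acc    = [] : acc        -- a blank line starts a new record
    step ln (r:rs) = (ln : r) : rs
    step ln []     = [[ln]]

-- Open and read a file with explicit error handling, instead of
-- letting the exception propagate and kill the program.
readWithCheck :: FilePath -> IO ()
readWithCheck path = do
  result <- try (readFile path) :: IO (Either IOException String)
  case result of
    Left err       -> hPutStrLn stderr ("Cannot read " ++ path ++ ": " ++ show err)
    Right contents -> print (length (mergeRecords contents))
```

The `try`-with-a-type-annotation idiom is the part that took me days to find unaided.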

Where does this barrier come from? Perhaps in part it’s because Haskell has been a small community of functional enthusiasts until recently, with newbies arriving gradually and perhaps having physical proximity to old hands so that the basics don’t tend to get spelled out. And the problem does seem to involve basics rather than more sophisticated stuff. Once I was over those initial hurdles I didn’t need any more help. Google and grep were all I needed.

There also seems to be a noise problem. This may have been caused by people who haven’t encountered the functional paradigm before trying to get their heads round it by writing not-very-helpful on-line tutorials, which unfortunately confuse more than they illuminate. That’s why I haven’t done this, but instead simply link to a commented Haskell program and a C++ equivalent, with the same names for the same things where possible! For example, it is certainly true that in functional programming recursion takes the place of iteration in imperative languages, and we can find many examples of this in the dreaded tutorials. But as Cale explained,

Using explicit recursion is generally a means of last resort.
Higher order functions on data structures are our control structures.

Where in C++ we might say:

for(int x = 0; x < 10; x++) func(x);

in Haskell we would say:

map func (take 10 [0..])

Because we don’t see that kind of thing, we can’t play “monkey see, monkey do” until we start to get it. Now there are an awful lot of teams out there that have a mix of some theory, some practice and some enthusiasm, but not in the depth we might once have found - more like the crew of the Black Pearl after a summer of JavaSchool than the Knights of the Lambda Calculus.

I don’t think I’m the only person who has noticed this barrier to entry. If Haskell is going to triumph, it needs to be lowered. Perhaps the forthcoming Real World Haskell book will help. Ohh arrgh me hearties.

Culture, Community and Scholarship

On the other hand, there doesn’t seem to be anything deliberate about the barrier. On the contrary, the Haskell community seems to be particularly friendly, helpful and knowledgeable. There are a lot of things about Haskell which are interesting and exciting and the mood of the culture is one of them. It reminds me of the Unix User Groups of the early 1980s. UNIX culture was - and is - hugely important. A bottom up community with strength in depth (despite the Death Star vs. Berzerkely Great Schism, ps -ef vs. ps -aux) that corporately sponsored cultures have never been able to duplicate. I’m thinking of JavaLand as much as anything .NET. See Eric S. Raymond’s The Art of UNIX Programming for a celebration.

There’s certainly something there for newbies to be proud of joining.


I’m still having problems with the active whitespace, indentation thing. Again, I’m not the only person to have this problem, as this message indicates. I was stunned to discover that one compiler error was coming from a comment that wasn’t indented correctly with respect to the function it was within! This may be a part of the barrier to entry, an effect of most newbies to date having good access to old hands, and picking up an oral tradition that isn’t recorded.

An interesting comparison occurred to me as I was adding and removing whitespace, often at random in my desperation. If someone took a file of Java or C++ laid out in rigorous K&R or Allman brace style, removed the braces and gave it to me, I could probably put the braces back. So why is Haskell indentation so darned hard? The existence of an oral tradition can explain why I can’t find a simple description in words of what GHC actually does, but why does it do complicated things in the first place? Certainly “maintain the same level of indentation” is not a complete theory.
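For what it’s worth, the nearest thing I have to a theory fits in one toy example of my own:

```haskell
-- The layout rule as I currently understand it: the first definition in
-- a `where` (or `let`, or `do`) block fixes a column, and every further
-- item in the block must start in exactly that column. Nudge `scale`
-- one space left or right and GHC rejects the file - which also seems
-- to be what bit me with the badly indented comment.
area :: Double -> Double
area r = base * scale
  where
    base  = r * r     -- this definition fixes the column...
    scale = pi        -- ...so this one must line up with it
```

That still isn’t a complete theory, but it covers most of the cases I stumbled over.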

Program Length and Width

Functional languages including Haskell are famously more terse than imperative languages, and in my statistical processing this held, but only mildly: the C++ version is 1551 lines, the Haskell version 1296 lines. This is not because the Haskell version is more heavily commented. I removed the comments from both, and found that the C++ version was 1152 lines, Haskell 956 lines. The reason for the similarity of length is that I’ve got a lot of marching legions of code - phalanxes of lines that do the same thing for many text controls, strings to match, windows and so on. These are the same, and ultimately have to be mentioned the same number of times, in both programs. Marching legions are common in production commercial code, so we can expect total line counts to be similar in such use cases, even if xmonad implements an X window manager in 500 lines of code.

That said, the syntax of Haskell makes the marching legions more parade ground than is possible in C++, with less boilerplate in the way, which means that the complexity is still significantly reduced, and we are more likely to be able to spot the bugs from 6 feet away. Pattern matching arguments are a huge bonus, eliminating much horrid nesting with tricksey curly braces scattered everywhere. I really like this and I’m sure it will be of enormous benefit in production shops.
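A toy example of my own (none of these names appear in EftExperiment.hs) shows the flattening effect - each case is one line, where C++ would want a nested if/else ladder with braces:

```haskell
-- Pattern matching in the argument list replaces nested conditionals:
-- the shape of the data selects the equation directly.
describe :: [Double] -> String
describe []      = "no results"
describe [x]     = "single result: " ++ show x
describe (x:y:_) = "first two: " ++ show x ++ ", " ++ show y
```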

I’ve finally been obliged to abandon my decades long love of the 80 character line. As displays have gone graphical and gained in resolution and physical width, as printers have disappeared from my working life, I’ve always stuck to the 80 character rule. Perhaps it’s because of my failure to understand Haskell indentation, but I find my code flying off to the right all too often. I increased my limit to 100 characters but it still looks like a broadside from the aforesaid Black Pearl, flying rightwards and then dropping. This doesn’t impact readability because layout does convey useful information when parsing by eyeball, but it still feels sinful. Maybe I’m just being silly about this. GNU a2ps(1) has a style sheet for Haskell so I can always pretty-print it in landscape if I must.

Functional Paradigm

While I had an initial problem with idioms, shifting to the functional paradigm was not a problem for me. I’ve played with Scheme before, but not for many years, and I enjoy using m4(1) for generating documentation, which also might have helped a little but is hardly an exhaustive functional primer. In particular I was struck by how easy it was to convert the inheritance based C++ program to function calls. Unless a person has never used anything except an OO/imperative language I don’t see why they should have a problem - and everyone has to write shell scripts and stuff don’t they? They do in production shops anyway!

The trickiest bit in the translation was interesting in itself. On some of the graphs I look at the maximum value to be plotted on the Y axis and select a suitable scale and distance between the marks from a list of options that I reckon are aesthetically pleasing. In the C++ version I do a simple search and just return silently if I can’t find an option that’s big enough. Haskell does not allow such fudging. The syntax is fudge unfriendly. So in the Haskell version I had to think carefully about what I really wanted. Which is good.
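A hypothetical reconstruction of that search - the names pleasingScales and axisScale are mine, not from the real program - shows the fudge-unfriendliness. The search returns a Maybe, so the no-option-big-enough case must be written down explicitly rather than falling silently off the end as the C++ did:

```haskell
import Data.List (find)

-- Candidate Y-axis scales I might consider aesthetically pleasing.
pleasingScales :: [Double]
pleasingScales = [1, 2, 5, 10, 20, 50, 100]

-- Pick the first scale that covers the maximum plotted value.
-- Haskell makes us say what happens when none does.
axisScale :: Double -> Double
axisScale yMax =
  case find (>= yMax) pleasingScales of
    Just s  -> s
    Nothing -> yMax        -- explicit fallback instead of silence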

The higher level of Haskell actually made the statistical processing simpler. When I calculated the standard deviations for some graphs I used C++ arrays, but later I found that in other calculations arrays wouldn’t do and I had to use STL vectors instead. Then it was a hassle to go back and convert the arrays to vectors so I didn’t get around to it, and this prevented me easily refactoring to remove redundancy. In Haskell a list is a list so I never got into a situation with two representations of the same concept. The use of higher order map also simplified the statistical calculations - I could dispense with all the outer loops.
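A sketch in the same spirit (my toy again, not the actual EftExperiment.hs code) shows what dispensing with the outer loops looks like - one standard deviation function over a list, then mapped over every series at once:

```haskell
mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)

-- Population standard deviation of one series.
stdDev :: [Double] -> Double
stdDev xs = sqrt (mean [(x - m) * (x - m) | x <- xs])
  where m = mean xs

-- The "outer loop" over all the series is just map.
allDeviations :: [[Double]] -> [Double]
allDeviations = map stdDev
```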

There may be a problem with the perception of productivity in programming shops. There are an awful lot of cases where people don’t really get to the essence of the problems they must solve (because they can’t access juxtaposition thinking as I describe in the Introduction), and to avoid worrying about it they busy themselves with displacement activities instead. In Haskell the need for busy-work is clearly reduced which is very good for real productivity, but may cause problems for organizations which emphasize looking busy. Hmmm… Perhaps accessing juxtapositional thinking is a precondition for functional programming. If so, the development of multi-core processors is going to have even more interesting effects than we might otherwise suppose.


Laziness both fascinates and disturbs me. To date I’ve been really bad at writing tail recursions correctly and find that with sufficient inputs or processing I easily get stack overflows. I understand the benefits for simplification of algorithms and potential for optimizations - that’s the fascinating bit. I’m not troubled by relinquishing power over operations to the runtime - after all that’s something I always try to maximize when using SQL. The more I can do in one query, inside the engine, the better. If I have to slosh result sets around, filtering them client side, I know I’m squandering cycles like crazy.

The difference is that when I give some SQL to the engine I know the optimizer will do its best for me. I’m not in danger of writing SQL which will become pathological without realizing it. Now in practice I suppose I can detect pathological cases by using lots of test data, such that I’m confident I’ll get a stack overflow if I’ve got it wrong, but I still have to correct the problem. And that’s something I still find hard, even with trivial examples. Also, I might find that I’ve designed an algorithm that overall isn’t tail recursive, but only make this discovery late on, so wasting a lot of work.

Are there working practices that stop us growing the stack when we intend to tail recurse (1), like the golden rule for avoiding deadlock, which says we must always acquire locks in a given order, even if it means relinquishing some already held locks, only to re-acquire them in due course? Is it just practice? The barrier to entry I discuss above is an issue, but I’m sure it’s fixable. I’m not so certain that avoiding pathological cases involving laziness is as easy.

(1) I know I know. Good English would be “tail recur”. Let’s say it’s like “computer program” and “theatre programme”. Which itself has the additional provision that if you can’t spell “theatre” you aren’t expected to be able to spell “programme” either :-)
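One partial working practice I can offer (a habit I’m trying out, not a proven rule): make the accumulation strict from the start, using foldl' instead of foldl, so lazy thunks never pile up in the first place:

```haskell
import Data.List (foldl')

sumLazy, sumStrict :: [Double] -> Double
sumLazy   = foldl  (+) 0   -- builds a chain of unevaluated thunks;
                           -- big enough inputs blow the stack
sumStrict = foldl' (+) 0   -- forces the accumulator at every step
```

Both give the same answer on small inputs, which is exactly why the pathological case is so easy to ship by accident.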

Code Organization

Java code organization irritates me. A file for every class. Programming which requires IDE support. The horrors that can result. I once saw nearly 2 million lines of code in one great, entangled wodge. No UML, no categorization, nothing. I had to use genetic annealing to even propose a division of classes into coherent natural categories which we could aspire to move towards. Eclipse is an amazing thing, itself a testament to what’s possible in Java, but I could not help but be aware that without it, the team would never have been able to get so deeply into trouble.

What does good code organization look like in Haskell? I don’t know. I’ve not yet seen any obvious conventional practices, so perhaps they are yet to evolve.

I have been playing with it, and in my first non-trivial program I discovered a few things which seem to be good ideas. Pure computation involves small functions, usually < 10 lines. Several of these with close coupling can be gathered together, with a commentary describing the group at the top. Building from lower level or early (perhaps input parsing) towards higher level or late (processing or output) is good. Do-ey, sequential stuff is better commented in-line.
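Here is a toy module shaped the way I describe - all the names are invented for illustration, not taken from EftExperiment.hs:

```haskell
import Data.Maybe (mapMaybe)

-- Parsing first: small pure helpers, under 10 lines each, grouped
-- under one commentary block like this one.
parseScore :: String -> Maybe Double
parseScore s = case reads s of
                 [(x, "")] -> Just x
                 _         -> Nothing

-- Processing next, built from the lower-level pieces above.
validScores :: [String] -> [Double]
validScores = mapMaybe parseScore

-- Do-ey, sequential stuff last, commented in-line.
report :: [String] -> IO ()
report raw = do
  let scores = validScores raw       -- pure work up front
  print (sum scores)                 -- then the single effect
```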


They say that if it compiles, it will run correctly. It’s nearly true! I’m amazed. One bug I had involved the positioning of the “Chill Points” calibration on the “Stressors By Average Score” graph. I had one line specifying [-3..], where another line assumed it said [0..] and subtracted 3. So the calibration was shifted leftwards past the origin. That wasn’t hard to find.

Such buglessness will remove a huge source of indeterminism in production environments where the work of many teams is co-ordinated by schedules.



Yes. It’s fun. At least, it was once I got going. Until then, it was an immensely frustrating experience and I’d have given up if I wasn’t conducting an experiment to see how accessible and useful Haskell really is. Because of my objective, frustration was a result in itself so I was able to keep my morale up enough to make progress. Had I been experimenting on the weekend on general principles, which is how we usually do these things, I’m sure I’d have given up in annoyance and frustration.

Functional programming is harder work because there’s less blathering, more focus on what needs to be done, and the language itself discourages fudging. Anyone who enjoys programming will find themselves enjoying Haskell, except for the qualification given above. Anyone who’s been using Java will probably realize they haven’t had fun for quite some time. (People who enjoy Snoopy talking dirty have a different definition of %#@*&^% fun.)


It’s too early to have Conclusions, so I can only have Initials. Haskell will probably be a super production language, offering significant improvements in robustness, schedule predictability, readability (and so maintainability) and recruitment over the imperative languages available today. Plus a better chance of natural parallelization predicted by theory, although I’ve not got far enough to try that yet.

However, the barrier to entry issue in particular means that today, Haskell will be a hard sell on most production sites. Real examples - toys but with all the indentation, error handling, input and output and so on in place and commented - would go a long way towards solving this problem.

Having got past the barrier enough to write a non-trivial program, I’m certainly going to press on.

Cautionary Tale: The BBC B ULA

As I’ve been looking at this stuff, a worry has been nagging away at the back of my mind. It’s not about Haskell, but about those multi-core processors. They all incorporate switching logic to stop the processors when they are not in use. This is particularly emphasized for the amazing Intel 80 core Teraflops processor, and it’s phrased in terms of energy saving. The thing is, if there are 80 processors going on one chip (even if it’s a pizza sized chip) it’s going to get very hot. Perhaps the motivation for the power switching is more to do with controlling heat than saving power, which raises the question: How much of the time do their designers assume these chips will be working? If we beat the parallelization problem, will we cook the chips?

It’s happened before. 25 years ago Acorn made the BBC B, a 6502 based 8 bit machine. The video was handled by a ULA which was made (I think) by Ferranti. The ULA designers had assumed that no-one would ever use more than half the gates in the ULA, with the other half being blown away to create the logic that was required. The clever Acorn people had managed to use 90% of the gates in the ULA, squeezing astonishing performance out of it. As a result of all the switching, early releases of the machine tended to overheat and the video (which went through a UHF modulator and worked a normal TV) went snowy. It’s incredible to think of it today, but to diagnose the problem we used a handy product found on all electronics benches. An aerosol can which contained CFC propellant and nothing else. A quick squirt would cool the chip, the video would lose the snow, and we’d have confirmed the problem!

Let’s hope the clever Intel people don’t make the same mistake as the Ferranti people did all those years ago. If you build the gates, we will find a way to use them!

All Postings, Programming

Haskell Needs A Four Calendar Cafe

There’s trouble on the way, concerning the commercial application of massive parallelism. I first became aware of this in the early to mid 1990s, when firms like Sequent were making SMP machines with tens of “cores” (as we’d call them today) and people were also getting interested in making use of all the cycles going to waste in networked workstations, especially at night. Today we have Beowulfs, BOINC and Sun’s Grid.

The trouble with parallelism is that it’s hard. There are some problems that are embarrassingly parallel - they fall apart into readily parallelizable subproblems which can easily be farmed out. But not all problems are like this. Some progress is possible by being very, very clever at the OS level of SMP machines - much of the SCO FUD antics in the recent legal cases revolved around this. To an extent it’s possible to conceal parallelism from the user (a programmer in this case) while still getting the benefit.

In general though, to get the benefit of parallelism we have to explicitly think about how we can make our algorithms parallel and code them accordingly. That’s well tricky, requiring in-depth contemplation of the structure of the problem. The results are also unstable. We can design efficient parallel algorithms with plenty of skull-sweat on the part of a dedicated and cunning person, but then a small change to the requirement can invalidate the entire design, meaning that the skull-sweating has to start all over again.

Notice that the presence of a parallelizing framework (like BOINC) does nothing to solve this problem. A framework can help us to dispatch and harvest jobs once they have been designed, but it cannot help us design them.

About 10 years ago these problems combined with the stress based descent into busy-work displacement activities that I have described to produce a truly bizarre result. At the time massively parallel applications were considered “big iron” stuff, suitable for farming out to Facilities Management or body shop outfits, which didn’t have a clue how to go about solving research problems. Because they were unable to address the real problems, they started “following the procedure”. Each element of a simulation (a car, a soldier, sometimes even a molecule) was represented by an object. These objects sent messages to other objects, usually via CORBA or some other unknowable and bloated horror. The “requirement” was then transliterated into sets of messages, and the whole thing was set to run.

These parallel applications usually ran two or three orders of magnitude too slowly to be any use to anyone. The lifecycle problem never emerged, because they took so long to write that they rarely completed one cycle (usually at a cost of millions). Wretched, miserable failure.

More and more bloated “middleware” can dispose of the available cycles for us, but not in a productive way. It’s sobering to realize that modern multi-core servers running Enterprise Java Beans to deliver web apps to browsers often underperform early CICS talking to IBM 3270s 30 years ago.

Until recently, this problem - that we do not know how to write parallel apps in an efficient and repeatable way - has been a sleeping dragon. Some of us have seen how bad it gets, but the problem hasn’t come up so often so we don’t worry about it. But there are good physical reasons why processors are getting more cores rather than more GHz these days, so the problem will soon become acute. Here’s a video clip that had me shouting “It is later than you think!” and similar Toynbeeisms. An Intel employee is holding a Teraflops, spread over 80 cores, in his hands:

As David Bowie sang, “Five years, that’s all we’ve got.” Then we’re going to look like total plonkers who can’t make use of massively parallel resources. We urgently need a new departure in production programming techniques, and I reckon functional programming languages have to be the starting point. Functional programming helps because it encourages us to naturally decompose problems into chunks that can be parallelized: evaluation is deferred, retaining partial computations until their results are needed. Such partial computations, or “thunks”, can in principle be evaluated in parallel.
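To make that concrete, here’s a minimal sketch of sparking thunks in parallel using par and pseq (these live in GHC.Conc in base, and are conventionally re-exported by the parallel package’s Control.Parallel). The deliberately naive Fibonacci is just a stand-in for any two independent subcomputations:

```haskell
import GHC.Conc (par, pseq)

-- Deliberately naive Fibonacci: the two recursive calls are
-- independent thunks, so one can be sparked off for parallel
-- evaluation while the current thread works on the other.
nfib :: Int -> Integer
nfib n
  | n < 2     = 1
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = nfib (n - 1)
    y = nfib (n - 2)

main :: IO ()
main = print (nfib 20)  -- 10946
```

Compile with ghc -threaded and run with +RTS -N to let the sparks land on real cores; without those flags the program still runs correctly, just sequentially. That graceful degradation is exactly the property that explicit locking code doesn’t have.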

Now I’m not saying that functional languages are a panacea which will make the problems go away. Rather, they are a place to start learning how to make use of such amazing resources, which offer a snowflake’s chance of success. And in production environments, today we have nothing. Having a worker thread in your MFC app is nothing to be proud of - think how hard it was to get it right, the locking, race condition and harvesting issues you had to deal with. There’s a core on my MacBook that hardly gets used at all. So this is an exciting time - there’s work to be done, problems to solve!

So I started looking at the options. Lisp and Scheme - which Lisp and Scheme? There are so many… OCaml is popular today, but to me (and I’m not an expert) it seemed too Java-ish. I’m looking for something that will kick dirt in the face of C, running on SMP machines. As a language for preparing for the future F# is dripping with proprietary evil. So Haskell. GHC is awesome, runs everywhere, and seems to produce compiled programs that take about twice as long as the same thing written in C. (That’s OK - it’s less than one Moore cycle and it takes most organizations over a year to decide to do something.) It’s obviously very powerful, has an intelligent and active community, and issues of parallelization and concurrency are under active investigation by said community. Chapter 24 of Beautiful Code left me suitably impressed. So I decided I had to learn Haskell.

Oh dear. I’ve not blogged for a while, have I? There’s a problem learning Haskell. In part I think it comes from the very richness and flexibility of the language. In Surely You’re Joking, Mr. Feynman! Richard Feynman describes a difficulty he had when he was first learning to draw:

I noticed that the teacher didn’t tell people much (the only thing he told me was my picture was too small on the page). Instead, he tried to inspire us to experiment with new approaches. I thought of how we teach physics: We have so many techniques — so many mathematical methods — that we never stop telling the students how to do things. On the other hand, the drawing teacher is afraid to tell you anything. If your lines are very heavy, the teacher can’t say, “Your lines are too heavy,” because some artist has figured out a way of making great pictures using heavy lines. The teacher doesn’t want to push you in some particular direction.

People who already know Haskell seem to be in the same state as Feynman’s art teacher. Rather than giving an example of how to open a file, they prefer to enthuse about what they call their “monad complexity”. The innocent newcomer can trawl through reams of this stuff, never discover how to open a file, but come away convinced that it is very, very hard. I’m not the only person who has formed this wrong impression. A recent thread on Reddit Why don’t you use Haskell reflected this:

I think this exactly hits the nail on the head. Haskell is a beautiful language, and I enjoy working with it just for the mind stretch that it gives, but the conceptual overhead and cost of entry is high — it’s got a very steep learning curve at the beginning. By comparison, the “hot” languages right now (Python, Ruby) are all much friendlier to someone new.


Haskell itself appears to be overly complicated in many ways, though I admit I have no real experience with the language.


Because after I learned quite a bit, enjoying the mind-bending in monads and the type system, I looked at the code for a real-world but very simple problem (a front-end for a shell mp3 player), and it was far harder to read and far less elegant than all the beautiful example code.

I should say that I’m quite confident that the above comment is not correct. I just wrote a little real world program in Haskell, and it’s way clearer than the C++ equivalent. However, I based it on a very, very helpful insight I got from the haskell-cafe mailing list. Until then, my attempt to write a real world program based on all the “monad complexity” stuff stalled on trying to open the input file.
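For the record, once you’ve had the insight, reading a file turns out to be nothing special. A minimal sketch (the filename input.txt is just a placeholder):

```haskell
-- Count the lines in a file. readFile is plain Prelude - no
-- "monad complexity" required beyond a couple of lines of
-- do-notation.
countLines :: String -> Int
countLines = length . lines

main :: IO ()
main = do
  contents <- readFile "input.txt"  -- placeholder filename
  print (countLines contents)
```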

Because it’s a hard language to work in. It’s by no means as hard as people make it out to be, and many times it’s easier than the competition, but I find that more often than not I have to work very hard to do basic things in Haskell.

There are plenty of other reasons on that thread, but most of them I’m dismissing for my purposes. Of course other people at work don’t use it at the moment. Other people at work won’t be able to make use of a Teraflops the size and shape of a pizza either. Shifting to functional thinking is an intellectual challenge, but so is trying to do massively parallel complex non-linear ray tracing work in C++. In fact it’s so complex it’s doomed. It’s the barrier to entry caused by the perceived complexity of basic tasks that is the big problem. What’s missing is the Haskell equivalent of:

#include <stdio.h>

int main(int argc, char **argv)
{
   int c;

   while ((c = getchar()) != EOF)
      putchar(c);

   return 0;
}
I could bang on about that for hours, and make you think you have to be Alan Turing to understand it. Or I could just show the code.
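So, just showing the code: as far as I can tell, the Haskell analogue of that C loop is essentially a one-liner. interact lazily feeds standard input through a String-to-String function and writes the result to standard output:

```haskell
-- Copy stdin to stdout; the identity function is our "filter".
-- Swap in any String -> String function for a complete Unix
-- style text filter.
copy :: String -> String
copy = id

main :: IO ()
main = interact copy
```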

Another metaphor for the current barrier to entry and its opportunity cost occurred to me as I was struggling with my program. Perhaps you will doubt my interpretation of history, but if so, please try to see my point!

In the 1980s there was a Scottish band with a cult following called the Cocteau Twins. (They weren’t twins and they weren’t called Cocteau.) Their music was a unique conception of sound, and while it worked for some people, most couldn’t get their heads round it. There was a whole secondary poetry that grew up around people guessing what the words might be! Here’s Heaven or Las Vegas - mature Cocteaus from about 1990:

Then in 1993 they released an album called Four Calendar Cafe. I reckon it will be years before the album’s importance is fully appreciated. It did for the audience what The Velvet Underground & Nico did for other musicians in its day. It educated us. Here’s Evangeline, from Four Calendar Cafe:

Once we had been educated, their whole back catalogue became accessible, unleashing a tsunami of richness and subtlety on mid-1990s Britain. The old cult fans weren’t entirely impressed. The Wikipedia entry remembers,

The band’s seventh LP, Four-Calendar Café, was released in late 1993. It was a departure from the heavily-processed, complex and layered sounds of Blue Bell Knoll and Heaven or Las Vegas, featuring clearer and more minimalistic arrangements. This, along with the record’s unusually comprehensible lyrics, led to mixed reviews for the album: Some critics accused the group of selling out and producing an ‘accessible album,’ while others praised the new direction as a felicitous development worthy of comparison with Heaven or Las Vegas.

The Cocteaus suddenly raised the bar, and in doing so they created a space where others could do equally interesting and challenging things, and also be understood. There was a new energy building in Bristol, and we got a new, complex inner-city sound. A strange sound, where a line between Portishead and William Blake passes through Lee Marvin:

Massive Attack also came from Bristol. Time was, whenever you had half a dozen people and a surface you’d get a Karmacoma:

Sarah McLachlan suddenly went big too, after years of solid work, and unlike Tori Amos she never needed housing up. Remember this?

Is it really fair to say the Cocteaus made a space where these things were possible, given that the people collectively tagged the Bristol Sound denied there was any such movement? To support my argument, consider the simpatico when the Cocteaus’ Elizabeth Fraser guested on Massive Attack’s Teardrop:

None of this in any way detracted from the high aspirations that the Cocteaus had always followed. In fact it became more powerful - the enabling technology which allows Elf song to run without a hypervisor on Klingon speaking brains (don’t worry - this is not a spoiler. As Feynman said, knowledge only adds, never subtracts):

I do like YouTube. It’s ever so handy for those of us who tend to think in tunes!

My point is that the functional programming world, and Haskell in particular, is ready to become - and must become - the dominant force in programming, like OO helped us tackle the complexity problems of 20 years ago. Everything is ready to go. But the current barrier to entry is preventing the necessary growth. The barrier is nothing to do with “monad complexity”. Most people who use the C++ STL don’t really understand the complex template metaprogramming cleverness that’s going on behind the scenes, but they can look at some examples and see how to do it. We don’t need more tutorials that bang on about “monad complexity”. We need a small corpus of simple example programs to show us how the various features and idioms fit together. Not every possible way, just the ways that working programmers might use the language to address common real world problems.

In my next post I’ll describe my attempts to translate the EftAnalyzer.cpp program, which uses wxWidgets, to Haskell and wxHaskell, to produce one such program.

All Postings, Programming

Ah… Love ‘Em!

On the pages Why No-One’s Noticed This Before, The Dreaded Jungian Backlash and Logical Effects I describe how the effects of spending too much time in focussed attention conspire to conceal themselves. This includes the simple truth that we don’t notice what we don’t notice, our culture’s accommodation of focussed attention by leading us to describe our work in reactive, prescriptive terms, and reduced expectations of - and reliance on - self-consistency. Most strangely I argue that groups of people trapped in focussed attention are liable to escape contradiction when things go wrong by deluding themselves. All of this fits with studies of groups under stress. We know that stress makes groups more reliant on heuristics, less analytic, less aware of context. There are elegant “choice blindness” and “conformity” experiments which reveal the surprising way that ordinary people will backfill their own memories with fantasy.

It is not my purpose to dwell on these issues. What matters is that it can happen, it’s a spiral of decline as each fantasy leads to another, and it always reduces efficiency and quality. We need to be aware this kind of thing can happen, and ensure we maintain conditions to prevent it. It’s like finding rotten timbers - the exact growth pattern doesn’t matter, we just need to get rid of it. Confusion can reach a point where we need to rule off, take a deep breath and start again. Over-analyzing each mess is of little value - unless we are studying messes!

To demonstrate how crazy things can get and provide a little entertainment, I’m going to analyze a story that’s been in the news this week. It makes no sense at all until we think about it in these terms, and then what is happening is crystal clear. Do remember that this is a good textbook example - and textbook examples are inherently misleading. Real life is rarely this clear cut. Anyway, on with the yarn…

It comes from the Chicago Sun-Times, a gem entitled Voters are told pen had ‘invisible ink’. Ahem…

Voters are told pen had ‘invisible ink’

BY ANNIE SWEENEY Staff Reporter/asweeney@suntimes.com

When it comes to election shenanigans, Chicago has been accused of just about everything.

But invisible ink?

Twenty voters at a Far North Side precinct who found their ink pens not working were told by election judges not to worry.

It’s invisible ink, officials said. The scanner will count it.

But their votes weren’t recorded after all.

“Part of me was thinking it does sound stupid enough to be true,” said Amy Carlton, who had serious doubts but went ahead and voted anyway.

As it turns out, Carlton was one of 20 voters at the precinct who were given the wrong pen to use. They were also then told, apparently by a misinformed judge, that the pens have invisible ink, elections officials said.

As a result, the votes were not counted. But officials insisted there were no dirty tricks involved.

“This one defies logic,” said Jim Allen, a spokesman for the Chicago Board of Elections. “You try to anticipate everything. But certain things just … they go beyond any kind of planning you can perform.”

By late afternoon, five voters had been contacted and told to come back to the polling place to vote again. And elections staff had left messages at the homes of the rest, Allen said.

Carlton and Angela Burkhardt, another voter who was told the same invisible ink story, spent a good part of the day calling and e-mailing the Board of Elections to get answers.

“I am furious and devastated and I just feel stupid,” Carlton said. “I feel so angry.”

Both women agreed that this election meant a lot. They had spent a good deal of time researching candidates.

“I have been voting since I was 18,” said Carlton, 38. “This is the most important election of my life so far.”

Burkhardt planned to go back to vote late Tuesday. She worried about those who might not be able to return.

“I worry about the other people who were there,” she said. “Maybe [they] can’t get off work. I am a person of privilege. I can go back. What if you couldn’t?”

OK. First let’s dismiss the conventional kind of explanation. The piece suggests election shenanigans, but that’s just not a good theory. For one thing the disenfranchised voters left the polling station aware that something funny was going on. We need to find another cause.

If Chicago is like most places, the election official will be a local bureaucrat, postal officer or other person renowned for behaving in a predictably focussed attention kind of a way. There will likely be a strong addiction to the kind of low level background stress that such organizations generate within themselves. This will lead to a deeply habituated tendency to defer to heuristics - a fixation on proceduralism. Now elections are important and are taken seriously. The official might even need to be sworn in, and the apparatus of the Voting Procedure will have been explained in great detail. There isn’t much to it, but the Sacred Pen, Sacred Ballots and Sacred Box will each have been considered with due solemnity. There will have been no doubt that the Sacred Pen is an integral part of The Procedure, which is given a strange, ineffable mystery because it gives its devotees a stress hit with chemistry similar to the effects of cocaine.

So with the system boundaries and heuristics thus arranged, we arrive at the moment when a human being deferred to the authority of a broken pen. This caused a broken narrative when others also noticed, which the overly “compliant” official, filled with background fear, stress and defensiveness, subconsciously filled in with confabulation. The pen uses invisible ink! We can forget our pride in our wonderful technological society. The fantasy is funny not because it is alien but because we all recognize the poetic resonance with the Brothers Grimm!

Confabulators have a convincing certainty about them - it’s probably something to do with having suddenly reduced their fear. That, the tendency to groupthink, and likely the deluded one’s Armband of Office would all have pressured all the other workers and the voters into a textbook conformity experiment. The rest, as they say, is history.

It’s important to realize that no white cats were stroked in the making of this mess. The key event - the invention of the invisible ink - occurred without conscious reflection, in a mind squeezed like a foot in a too tight shoe, at a peak of reactive fear. Unfortunately, no conspiracy is necessary for there to be damage. On this occasion the delusion has seen daylight and at least some voters have had their ballots collected, but what tends to happen is that the nonsense kicks around and causes knock-on evasive fear much worse than the threat posed by a broken pen.

I once saw a work-in-progress management system that was working very well, but for some reason scaled really, really badly when it produced reports. The thing was quite well structured, there were about a half dozen people on the team, and a myth that a database server was inevitably compute intensive. The business logic never directly used the database - instead there were higher level calls. There was something odd about those calls though. When I tried to understand them, at first I failed. I just wasn’t getting it. I asked around, and realized that the team members weren’t talking straight to me. There was something very funny going on.

Eventually I got to the bottom of it. The guy in charge of the database stuff had a blind spot. He obviously had no concrete mental model of what a C program does and/or he’d never done the thing of thinking things through, round and round, explaining everything to himself. He’d written some frighteningly good code by applying rote learned incantations, but there are limits to the possible. He didn’t realize that an SQL query could be assembled in a buffer with the things to search for filled in between the SELECT, WHERE, JOIN and so forth. All his queries were in hard coded literal strings. He’d got some quite clever literal strings, which worked well with the somewhat distorted database schema to implement the bizarre programming idiom. These strings would be issued to the database, and then he’d parse and filter the result sets to find the records he wanted, doing memory management like crazy while he was at it, but never cottoning on to the fact that his query strings were just char * pointers that could point to any text he wished to cruft up!

There must have been blood on the carpet at some point (I didn’t find out about that, and didn’t want to), because this whole business was a no-go area. When discussions approached it, otherwise rational people would start logic-chopping and weaseling, as they attempted to make a continuous transition from the truth to Mr. Database’s misunderstanding. The sheer amount of processing needed to support the wriggling must have detracted from their work. My job was supposed to be tweaking the database and UNIX platform to make it go quicker, but the Project Manager was a canny fellow. I eventually figured out the real reason he’d hired me was to walk in, speak the truth and then leave discreetly.

The funny thing was, Mr. Database wasn’t stupid, just fixated on a misunderstanding and without a rich enough model to get him out of it. I wrote a “test program” that prepared SQL with sprintf(3) to INSERT and SELECT some pattern data, and showed it to him along with some “timings” (which I didn’t really care about at all). As soon as he saw what the program was doing he became very excited, and immediately saw how to make things go faster.

Still, I’m looking forward to the Chicago Council coming to order in their splendid new Robes of Office. Will there be webcams? ;-)

All Postings, Stress Addiction, Programming

Insight, Composition and Patterns

Another very interesting paper on the neuroscience of insight has appeared. In Deconstructing Insight: EEG Correlates of Insightful Problem Solving Simone Sandkühler and Joydeep Bhattacharya monitor various brain areas with an EEG while giving subjects the usual kind of cognitive flexibility tests. They amplify the number of insight events by giving the subjects time limits and hints, and look at the EEG outputs when the subjects report insights and correct solutions.

They see the same two modes - relaxed and focussed - with insights occurring when the EEG shows states associated with relaxation:

Interestingly, we found for timeout trials that would lead to a correct solution after hint presentation, a strong alpha ERS. Increase in alpha activity is usually associated with a relaxed, less active brain, because alpha power was largest in states with eyes closed, i.e. states without focused attention.

There are two ways that this paper is an interesting addition to the work I quote on the Neuroscience page. Firstly, this paper talks about “insight”, while the others refer to “cognitive flexibility”. But the tests are the same, the faculties measured are the same. It’s a bit like speaking of breakpoints at different software layers. The call to draw a widget uses calls to draw primitives like lines and rectangles. So by citing this paper we can get to the neural correlates of insight - a term that I’m more comfortable with than cognitive flexibility, because to get good at this stuff we have to see the causes and effects right through the stack from neurotransmitters to effects on social customs.

The second thing is that it discusses various brain areas other than the prefrontal cortex. Clearly the mode shifting between focussed and juxtapositional involves the co-ordinated transition of multiple subsystems. Here’s a lovely example of such a thing - the Gibbs Aquada:

How cool is that?

Now you might say, “This is all very interesting. If I ever need a fundamental conceptual advance to maintain the backoffice system I am cursed with I’ll remember it!”

But in fact insight is not just in play in famous examples like Einstein imagining himself riding on a lightbeam, Newton’s fruit-perturbed brain cells or Archimedes’ famous streak. To demonstrate this, consider a screenful of code. What does it do? It’s a crazy question, because a screenful of code could do many, many things. Well, unless it’s Java or calling Win32. The fanout of possible behaviours for a program as we add syntactically correct lines is much fatter than the fanout of possible chess game states as we add moves. (It doesn’t matter that most of those behaviours are not interesting - they still fatten the state space.)

Writing a program is in essence a synthetic act, where insight suggests that a certain class or data structure can represent the problem domain and make the desired operations possible. This cannot be done by random guessing or exhaustive search. The upper bound on the effectiveness of our programming activities is how readily we can be insightful. The lower bound is fixed by how well we do all the other things that distinguish grown up professionals from script kiddies, but those other things can easily provide busy work to allow us to forget when we lose sight of how to address the upper bound.

This idea of a search space which becomes unmanageable when insight is lost can explain a persistent and specific abuse of design patterns. I once had the interesting experience of being a lab rat for a disciple of Christopher Alexander, the architect who invented the idea. I was renting an old house that had recently been refurbished - in places rebuilt. It was minimal, even more so than is customary for traditional Spanish fincas. Yet it was a warm minimalism, with niches for lights that naturally threw diffuse pools where one wanted them. I kept getting flashes of the rich vegetation on the shady side of the building, along sight lines that extended all the way across the space. It had the quality without a name, and when I eventually met the landlord who had planned the rebuild, I rather diffidently mentioned software engineering and the connection with Alexander.

He roared with laughter and said, “Christopher Alexander is the reason why I do not have a large architectural practice!” He went on to explain that when he’d first moved to the area he’d built a hut and lived on the beach, as recommended by Alexander. For three years he rebuilt his hut, until he understood why the features of the traditional architecture were as they were. “Then I was ready to build in this place!”

He’d done very well for himself, having built the houses for all the local dignitaries including the serving Spanish Foreign Minister, but his approach took the investment of his high quality time. He didn’t have anything for architectural grunt workers to do, although he ended up employing his own team of builders, because he couldn’t communicate the importance of quality in the small to contractors.

When I mentioned Alexander the landlord was happy, because he’d felt embarrassed about spying on me in order to see how I used the space. He said that he’d already noticed I’d stretched a washing line between two trees, and wondered why I’d put it there. I explained that I’d dumped the wet washing inside the kitchen door on a clean surface and then hung it in the sunshine, and the following day the construction crew turned up to install a pair of solid mountings for the line! This went on for several months. Fortunately I had read Report On Probability A at a suitably impressionable age. While the architect was watching me, I was watching the architect…

What your Shao Lin Ninja Alexandrian actually does, practicing in the original field of application, illustrates the approach that is necessary for success. Yes, there are design patterns. They are “out there”, waiting to be discovered, and some of them recur frequently. But the discovery takes years of work, finding the vocabulary of the most common patterns latent in spaces, and even then and even for an expert, further exploration and iteration is needed in each case. The conscious involvement of a devotee is necessary. This doesn’t necessarily invalidate Alexander’s early ambition of democratizing architecture (remember 4GLs?), but it does mean that the worker needs familiarity with the idea of patterns, and the cognitive state necessary for spontaneously recognizing the patterns that best simplify, chunk, make a theory of, or compress the problem domain. However egalitarian the toolkit, there is an essential act of becoming explicitly conscious of what is needed, which must be performed somewhere, in some terms, by someone.

If that cognitive state is lost, the toolkit or pattern language begins to perform a different job. Instead of a bunch of sensitivity building suggestions in a sea of subtlety, they become a massively reduced search space. The Gang of Four become a firm of stone engravers, and we get a kind of system integration:

I believe that both paths are entangled in Alexander’s 1996 OOPSLA talk, The Origins of Pattern Theory, the Future of the Theory, And The Generation of a Living World. For example, at one point he says:

So there began developing, in my mind, a view of structure which, at the same time that it is objective and is about the behavior of material systems in the world, is somehow at the same time coming home more and more and more, all the time, into the person. The life that is actually in the thing is correlated in some peculiar fashion with the condition of wholeness in ourselves when we are in the presence of that thing. The comparable view, in software design, would tell you that a program which is objectively profound (elegant, efficient, effective and good as a program) would be the one which generates the most profound feeling of wholeness in an observer who looks at the code.

I am without reservation quite happy with that. It remains true even if I extend the context of the statement to include every practical observation or speculation I’ve encountered as I’ve asked why there is structure to be perceived, how we perceive it, why we sometimes can’t perceive it - and even what the ancients (who spent much less of their time in focussed attention) had to say about these things.

Yet later in the same talk Alexander says:

Now we come to the crunch. Once we have the view of wholeness and centers, linked by the fifteen deep properties, we have a general view of the type of whole which must occur as the end product of any successful design process. And because we have a view of it as a whole, we are now able to understand what kinds of overall process can generate good structure, and which cannot. This is the most significant aspect of The Nature Of Order, and of the new results I am presenting to you in this Part B.

It means that we can characterize not merely the structure of things which are well-designed, but we can characterize the path that is capable of leading to a good structure. In effect, we can specify the difference between a good path and a bad path, or between a good process and a bad process.

In terms of software, what this means is that it is possible, in principle, to say what kind of step-by-step process can produce good code, and which ones cannot. Or, more dramatically stated, we can, in principle, specify a type of process which will always generate good code.

This sounds to me like a Software Factory. An organization of what the Consciousness Studies people call zombies - “a hypothetical being that is indistinguishable from a normal human being except that it lacks conscious experience, qualia, sentience, or sapience”. The zombies provide hardware to run an Artificial Intelligence, which is stored in at most a few MBs of paper manuals. The AI is a remarkable achievement, since it requires seconds to hours to perform a single fetch or store operation using its shuffling hardware, yet it can miraculously transform ambiguous requirements documents into robust systems elegantly and without error. All hail the process!

I believe the Software Factory is a revealing nonsense. The comparison with the AIs we’ve not yet built using GHz processors and petabytes of store exposes the truth. The only reason anyone ever spent all those work decades on it is that sitting in the meeting rooms, listening to the endless drone of corporate doublespeak provides an excellent source of low level background stress. To get a sense of it, glaze your eyes with Obtaining the Benefits of Predictable Assembly from Certifiable Components (PACC), an offering from the Software Engineering Institute. That’s alright then - problems solved, who’s for a beer! And when they wake up a bit for the conclusion, the celebration of process fits with the stress addicted tendency to “default to heuristics”. We’re all “a tiny, tiny piece of it all”, so no-one needs to exercise the personal awareness that Alexander starts by recognizing as essential.

The idea that our thinking about pattern languages has been trying to pull in two opposing directions, trying to be useful both to people with and without access to their juxtapositional faculty at the time, leads to the question of what would happen if we tried to do one job well. Those without juxtapositional awareness at the time aren’t going to be able to make the inductive jump to insightful solutions however they are tooled up. So what if we abandon that objective and ask how we might make pattern languages more useful as cognitive aids for those who are juxtapositionally aware. Such developments would quickly seem pointless or incomprehensible to those without juxtapositional awareness.

Could that be something to do with the growing interest in functional programming? All of the form with none of the mechanizable boilerplate, and disconnected from the busy-work that addresses the usual problems caused by the boilerplate? Perhaps the net has enabled a critical mass of people interested in having such a conversation to connect, as it has already enabled groups to leverage shared, rich models instead of managerial and process overhead in open source development. If we remove the objective which contains an internal contradiction, might we find that Scheme has been our pattern language all along?

Neuroscience, All Postings, Programming