Lewis, Horabin and Gane 1967: an information design classic

With the current interest in visualisation (of data, issues and processes), it's worth reminding ourselves of the pioneering work done in the 1960s by Brian Lewis, Ivan Horabin and others. They developed flow chart versions of complex rules and regulations, which used the same 'if this then that' approach as the algorithms written by computer programmers. They called them 'ordinary language algorithms'.

They published this work in 1967. The full citation is BN Lewis, IS Horabin and CP Gane (1967), Flow charts, logical trees and algorithms for rules and regulations. CAS Occasional Papers 2. London: HMSO.

I've scanned it and placed it here under the Open Government Licence. Click here to download it.


Here's a before-and-after from a 1966 tax letter sent to several million UK taxpayers:

"If the asset consists of stocks or shares which have values quoted on a stock exchange (see also paragraph G below), or unit trust units whose values are regularly quoted, the gain or loss (subject to expenses) accruing after 6 April 1965, is the difference between the amount you received on disposal and the market value on 6 April 1965, except that in the case of a gain where the actual cost of the asset was higher than the value at 6 April 1965, the chargeable gain is the excess of the amount you received on disposal over the original cost or acquisition price; and in the case of a loss, where the actual cost of the asset was lower than the value of 6 April 1965, the allowable loss is the excess of the original cost or acquisition price over the amount received on disposal.

If the substitution of original cost for the value at 6 April 1965, turns a gain into a loss, or a loss into a gain, there is, for the purpose of tax, no chargeable gain or allowable loss."

This is the algorithm:
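(Their version is a flow chart. For readers who think in code, the same decision logic can be sketched in Python. This is my own rough rendering, not the authors' chart, and the names proceeds, value_1965 and original_cost are illustrative.)

```python
def gain_or_loss(proceeds, value_1965, original_cost):
    """Signed result: positive = chargeable gain, negative = allowable loss,
    zero = the 'no chargeable gain or allowable loss' case."""
    result = proceeds - value_1965  # basic rule: compare with the 1965 value

    if result > 0 and original_cost > value_1965:
        # A gain, but the asset actually cost more than its 1965 value:
        # substitute original cost for the 1965 value.
        adjusted = proceeds - original_cost
        # If the substitution turns the gain into a loss, there is
        # no chargeable gain and no allowable loss.
        return adjusted if adjusted > 0 else 0

    if result < 0 and original_cost < value_1965:
        # A loss, but the asset actually cost less than its 1965 value:
        # again substitute original cost for the 1965 value.
        adjusted = proceeds - original_cost
        # If the substitution turns the loss into a gain, there is
        # no chargeable gain and no allowable loss.
        return adjusted if adjusted < 0 else 0

    return result
```

For example, proceeds of £1,200 against a 1965 value of £1,000 and an original cost of £1,100 give a chargeable gain of £100, not £200: exactly the substitution the prose describes, but arrived at one simple comparison at a time.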

They list five key characteristics of algorithms, which I think we'd do well to apply to any design approach to complex information:

In essence, algorithms are memory tools. They relieve the stress that complex text places on working memory, because you only have to deal with one simple issue at a time. They take you to a clear decision point. The converse, of course, is that you don't get an overview of the system as a whole, except by following all the threads. So to build expertise, or to make decisions, you may need to work through an algorithm several times, following different paths.
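To make the 'one simple issue at a time' point concrete, here is a hedged sketch of how such a chart might be represented and walked through in Python. The question and outcome wording is my own illustration, not taken from the 1967 paper.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Outcome:
    text: str

@dataclass
class Question:
    text: str
    if_yes: Union["Question", Outcome]
    if_no: Union["Question", Outcome]

# Illustrative wording only; a real chart would carry the full tax rule.
chart = Question(
    "Are the shares quoted on a stock exchange?",
    if_yes=Question(
        "Did you receive more on disposal than the 6 April 1965 value?",
        if_yes=Outcome("You may have a chargeable gain; see paragraph G."),
        if_no=Outcome("You may have an allowable loss; see paragraph G."),
    ),
    if_no=Outcome("This chart does not apply; see the general rules."),
)

def follow(node):
    """Follow one thread through the chart, one question at a time."""
    while isinstance(node, Question):
        answer = input(node.text + " (y/n) ").strip().lower()
        node = node.if_yes if answer == "y" else node.if_no
    print(node.text)
```

Notice that a reader following the chart only ever sees the current question; getting the overview means traversing every path, which is precisely the trade-off described above.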

Another good source is Ivan Horabin and Brian Lewis (1978), Algorithms. Englewood Cliffs, NJ: Educational Technology Publications.

A personal note

Brian Lewis was a Professor in the Open University Institute of Educational Technology when I joined it in 1974 as a very junior research assistant. I remember him as deeply interested in human learning and the structure of knowledge, and as a generous and helpful colleague. His lectures and conversations left an impression on me, and one in particular comes to mind:

As educators (and information designers are that, among other things), how do we see ourselves?

There were others in his list, and he may not be the only person to think this way.

Justified uncertainty vs unjustified certainty

Brian Lewis was also interested in errors. Instructional designers at the time were very focused on the role of feedback in reinforcing learning. Brian pointed out that erroneous understanding is not always evidenced in test results, and that 'for much too long, the study of error has trailed like a shadow behind studies concerned with the problem of truth'.

Information designers need to be concerned about this. In his 1981 paper entitled 'An essay on error', Lewis points out that when trying to distinguish between reality and an illusion, at some point we have to stop deciding: 'the truncation in our decision-making... shifts from a state of justified uncertainty to a state of possibly-unjustified certainty.'

Users of information would like certainty, but it's not always justified. How information designers handle this is pretty important. Trying to present complex content algorithmically can be a good test of the extent to which certainty is possible.

Rob Waller. November 2019