Creation: line & form

Take a wander through my creative thinking as I have tried to complete at least one charcoal drawing on paper each day over the last fortnight or so – and have a look at the results.


This blog post is a little delayed. I enjoy writing, and so I have been enthusiastic about keeping my commitment to a Charles 6.0 post every week – doing so has engaged my creative energy more than adequately. But (and there is always a ‘but’, isn’t there?) I have found that this work has to some extent been absorbing creative energy from other things I have wanted to pursue.

A specific example is my wish to engage further with drawing – as I noted in an earlier blog, ‘Back to basics’, I have been going along to a U3A drawing group every fortnight for the last couple of months, revisiting the fundamental skill of free-hand drawing. I noted that I found it a bit challenging – my engagement with this skill was longer ago than I first thought – but that I would persist, strongly feeling the need to relearn the basic skills of seeing and constructing images by hand. While I have engaged happily with the group, to improve I really needed to do more.

So a couple of weeks ago I decided to prioritise drawing over blog writing, to try to make time to do at least one drawing each day, with a view to sharing the results in a blog somehow. I have focused on the most basic drawing technique, using black charcoal on paper, and on objects scattered about the house, some of which I have used for creative inspiration in the past. For a number of them I have gone on to use the drawn image in creating digital collage with photographs from the streets, adding a layer of interest and complexity. Other events have conspired to divert my focus over the last fortnight, but on as many days as possible I have attempted a drawing. These are what I will focus on sharing in this blog, interspersed with digital play with some of them, and also a few random notes or jottings accumulated on my travels during the couple of weeks – apologies for more or less adequate segues 😉

So to my first drawing, of a retro china ballerina figurine which offered the opportunity to explore solid object modelling – a black figure in a pink dress, garnered from an op shop a couple of decades ago.

The next drawing was of a less substantial object, being one of a pair of brass butterflies, with filigreed detail best gestured at rather than precisely rendered.  I like the idea of something as delicate and insubstantial as a butterfly being expressed in a material as solid as brass and I had captured a rather interesting light effect  on the pair a couple of months previously that echoed that thought on transience…


On the subject of ephemera, I saw on Facebook an “occasional address” from comedian Tim Minchin for a graduation ceremony at his old uni, the University of Western Australia – gosh, I just noticed that his robe echoes a butterfly!

While witty and amusing, his nine life lessons resonate enormously with me – in summary:

  1. You don’t have to have a dream
  2. Don’t seek happiness (keep busy and make someone else happy, and you may get some as side effect)
  3. Remember, it’s all luck
  4. Exercise
  5. Be hard on your opinions (Identify your biases, your prejudices, your privileges)
  6. Be a teacher (Even if you are not a teacher, be a teacher)
  7. Define yourself by what you love (be demonstrative and generous in your praise of those you admire. Be pro-stuff not just anti stuff)
  8. Respect people with less power than you.
  9. Don’t rush (there is only one sensible thing to do with this empty existence, and that is, fill it).

But back to the drawings – next I attempted a rendition of a china Buddha figurine which once again I picked up in an op shop a long while ago.

While not formally an adherent, from what little I understand I am sympathetic to many Buddhist tenets, framing the possibility of a joyful embrace of the emergent universe – echoed in Tim Minchin’s riff on our ‘empty existence’. Such was the intent of my subsequent digital employment of the sketch …



One lunchtime I went along to a presentation by Mark Deuze, Professor of Journalism Studies at the University of Amsterdam. It was an engrossing hour or so, well spent. He spoke about journalism start-ups and his forthcoming book, Beyond Journalism, which was the basic theme of his talk.

He described the operating environment for journalists as being ‘liquid’, meaning that conditions are changing faster than ways of acting can consolidate into habits and routines. He was essentially describing the hollowing out of institutional journalism. He described the situation of media workers in terms of precariousness or precarity; that is, not knowing and not having control over what will happen next.

There was discussion of the journalist as a DJ, able to mix and match roles and value systems, of portfolio careers and cross-subsidised work styles, of journalists as individual brands, of how contemporary media workers are never ‘not at work’. Coincidentally, the next day the Productivity Commission published a report called “Shifting the Dial”, which looked at a range of public services in Australia but had some damning observations about higher education. While acknowledging the important role universities play in society, it also pointed out that the way they are funded and operated is leading to less than ideal outcomes for students. The report I read noted that:

Only 70 percent of graduates are employed in full-time work. That’s the lowest level since records began in 1982, and we’ve been on a steady decrease for the past decade. Also, nearly one-third of the graduates who are employed are working in jobs that don’t require their degree.

But wait, there’s even more! Graduate starting salaries have been declining when compared to average earnings, which means degrees are becoming less valuable at the same time the cost of study is increasing.

I found this profoundly interesting in the context of Tim Minchin’s address, and given the observations I had made in my recent blog Thinking about education, work & AI, with the long-story-short view that the world of information (how it is stored, distributed, navigated and utilised) has changed immeasurably over the last couple of decades, and the tertiary sector has struggled, and is struggling, to keep pace and stay relevant.

I guess we might just have to armour-up … I enjoyed sketching the suit of armour figure that I have treasured as a birthday gift from my eldest son many years ago. If it looks a little robotic, I guess that guys in armour actually did look a bit like robots.


My next subject perhaps also holds a shield – this was a kitsch china figurine I found by the roadside, which I painted into more basic colours and augmented with other found objects to create a sculptural assemblage that engages enigmatically with time and change.


I blended the drawn image with a recent photograph of weathered street posters to good emergent effect, while I have used the assemblage in photographic studies from time to time as well …


Nearing the end of the report into my adventures with charcoal, I drew this small wooden Balinese bust that I  picked up somewhere on my travels.  Perhaps an earlier photographic study captured something of the calm and reflective air it conveys …


… while the experimental combination of the sketch with an image of a dilapidated street poster produced an image that, for me, somehow provides a powerful abstract summary of what this blog is trying to do and say.






The specialist generalist


Since ceasing full-time work, and starting my Charles 6.0 transformation, I have met a number of people (some significantly more advanced in years than I am) who continue employment on a consulting or contracting basis. It has sometimes been suggested that perhaps that’s something I would like to do. Quite apart from being pretty fully occupied without having any work-like obligations, one reflection on this has been that these people normally have a highly specific and singular expertise that is valued in the marketplace, such as database programming, construction engineering or town planning. The world of the consultant contractor is the world of the dedicated specialist. Quite reasonably, most clients are looking for someone to undertake a specific task with well-defined outcomes – that way they know they will get at least an approximation of what they are paying for.

At various times during my working life I have worked as a self-employed consultant/contractor. To be completely frank, I’ve never really been all that good at it. While I easily discharged the usually IT-related tasks (such as application coding or database design) entrusted to me by various clients, I generally wanted to do more, and often found the focused, repetitive aspect of the work they wanted me to specialise in frustrating and somewhat unfulfilling – when I’ve done something once I generally want to solve a different problem or acquire a fresh skill. The basic problem is that I’m interested in too many things.

Over the years I have come to understand that my real aptitude lies in helping to explore and define problems rather than to craft specific and technical solutions.  However, usually people either feel they have a good handle on what needs doing or they lack the trust necessary to commission someone else to explore the problem space.

By and large there is also an inclination to ‘rush to solution’ – there is little appreciation of the art and skill of sitting with a problem long enough to understand its true demands and dimensions, which quite frequently are more or less different from the immediately presenting issues. Newsflash: that is not a proposition easily sold into a competitive marketplace – there are not many clients willing to pay someone to sit with a problem. They want problems solved – ASAP!

I actually found the best place to practise that particular art is as a full-time employee – oftentimes you can layer the necessary time spent sitting and thinking in among all the busy work that employers seem delighted to visit upon their workers. It is here you can cultivate the position of the generalist employee, easily deployed to various tasks but sometimes lampooned as the ‘jack of all trades that is master of none’. However, as the Wikipedia entry about that saying notes, such an individual may be a master of integration, knowing enough from many learned trades and skills to be able to bring the disciplines together in a practical manner – what I would call a practical generalist.

Do this well, as I have at various stages, and you’ll find yourself in positions where the latitude to sit and think is extended significantly and can in fact become the accepted reason for your continued employment and contribution. At that point you have actually moved beyond simply being a simple, practical generalist and have started to engage with the role of the specialist generalist. This is someone who, rather than simply bringing together a variety of specialties, works in the world of the complex and the unknown, to define and appreciate problems and then to architect the shape of possible approaches and solutions.

One conceptual tool I have found useful to frame complexity in this context is what is commonly known as the ‘Stacey diagram’, so named after the British organizational theorist and Professor of Management Ralph Douglas Stacey. It has apparently been frequently adapted by other writers, as noted by Wikipedia, often in ways not consistent with Stacey’s own thinking – to the point that apparently ‘he dropped the diagram and now argues against its use’. I am as guilty of appropriating and extending his original thinking as anyone! But I find it incredibly useful as a framework for analysis and thought, and so I have sketched my own take on it, as illustrated here.


There are two axes to the diagram – Uncertainty and Disagreement:

  • The horizontal x-axis is Uncertainty. When an issue or decision is close to certainty, it is because cause-and-effect linkages can be determined. This is usually the case when a very similar issue or decision has been made in the past; you can then use past experience to predict the outcome with a good degree of certainty. The other end of the continuum is ‘far from certainty’. This is when the situation is unique, or at least new to the decision makers. The cause-and-effect linkages are not clear, and using past experience is not a good way to predict outcomes in the far-from-certainty range.
  • The vertical y-axis is Disagreement. This measures the level of agreement about an issue or decision within the group, team or organisation.  The degree of agreement on what should be done is an important factor in determining success.

I have found a very useful and succinct exploration of the Stacey matrix in relation to the art of management and leadership on this GP training resource archive. It maps various forms of decision making onto the matrix: Technical rational in the ‘simple’ region, which is close to certainty and close to agreement – in terms of this blog, the place for the specialist; Political for the area having a great deal of certainty about how outcomes are created but high levels of disagreement about which outcomes are desirable; Judgmental for the opposite set of issues, with a high level of agreement but not much certainty as to the cause-and-effect linkages to create the desired outcomes.

Political and Judgmental for my purposes here are the realm of the ‘practical generalist’.

And then there is the Complexity zone which lies between these regions of traditional management approaches and chaos and is the natural home of the specialist generalist.
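The two axes and the regions they bound can be made concrete in a few lines of code. The sketch below is purely illustrative: the 0–10 scales and the zone thresholds are my own arbitrary assumptions, not anything drawn from Stacey.

```python
# Illustrative sketch only: map a position on the Stacey diagram's two
# axes to a named region. The 0-10 scales and the thresholds are my own
# assumptions, chosen just to make the regions concrete.

def stacey_zone(uncertainty: float, disagreement: float) -> str:
    """Classify an issue by its position on the two axes (each 0-10)."""
    if uncertainty > 8 or disagreement > 8:
        return "chaos"          # far from both certainty and agreement
    if uncertainty < 3 and disagreement < 3:
        return "simple"         # technical-rational decision making
    if uncertainty < 3:
        return "political"      # certain means, contested ends
    if disagreement < 3:
        return "judgmental"     # agreed ends, uncertain means
    return "complexity"         # the specialist generalist's home ground

print(stacey_zone(1, 1))   # simple
print(stacey_zone(5, 5))   # complexity
```

Crude as it is, the exercise makes one point of the diagram plain: the ‘complexity’ zone is simply everything left over once the regions amenable to traditional management approaches have been carved off.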


A few observations on what is required to work this close to the edge of chaos – for it to be a ‘zone of opportunity’ …

  • Be prepared to have absolutely no idea what you’re doing much of the time! The qualification is to be able to ascertain rapidly what needs to be known, and to acquire that knowledge quickly, rather than to have a stored repertoire of specialist knowledge to hand.
  • Work on the basis of principles rather than rules. I like this recent post I found on LinkedIn – ‘Burn Your Rule Book and Unlock the Power of Principles’, which observed: “Principles, unlike rules, give people something unshakable to hold onto yet also the freedom to take independent decisions and actions to move toward a shared objective. Principles are directional, whereas rules are directive.” But a specialist generalist needs to be prepared for uncertainty even here: for paradigm shifts in the set of principles to be applied in a given space, and to find space for innovation and novel principles to emerge.
  • Be a systems thinker – I like the following illustration of the Tools of a System Thinker (attached to a tweet by @SYDIC_ITALIA Chapter Italiano della System Dynamics Society Internazionale – no further reference to acknowledge). However a specialist generalist must be an open-ended systems thinker, sensitive to emergent systems and to proto-systems at the edge of chaos.  You cannot insist on systems at all costs, but need to utilise the insights systems thinking can generate.  Be a network systems thinker, value the connections in the models you will perceive and generate as well as utilizing networks of skill and knowledge around the problem space.


It took me a long time to recognize and name myself as a ‘specialist’ generalist. It is a very demanding role, one that is difficult to sell and articulate, but one which can deliver dividends with multiplier effects well beyond the contributions of specialists and practical generalists, since it is the role that seeks innovation, requires agility and rewards resilience. That said, in the end, with respect to my specialist computer-related skills, I decided to employ my abilities to my own ends rather than try to meet the often poorly articulated and contradictory needs of clients, be they internal or external. I must say I am similarly enjoying the freedom of Charles 6.0 to suit myself as to those problem domains and chaotic edges in which, as a specialist generalist, I choose to dwell.


AI – where might we be going?


I have been interested in the interaction of humans with technology since my initial university studies in anthropology and sociology.  We are not defined by our tools, but our tools have functioned as an extension of ourselves and as facilitators of interaction with our environment. Over the past millennia they have produced an accelerating transformation of the shape and pace of human society.

This post is shaping up to be a two-part blog: the first exploring my thoughts on and experience with Artificial Intelligence (AI) technology – broadly defined – and the second, next week, looking at the discussion about the implications of this technology for the world and future of work.

I first came across AI when I was working at the Royal Blind Society (RBS) in the early 80s, although what AI was then and what it is now are very different beasts.  At the time I was completing my Master of Commerce, in which I had pursued my interest in technology in various topics ranging from Information Systems Design to Industrial Relations.  It was in the latter that I engaged with the ideas of sociologist Daniel Bell and the notion of the ‘post-industrial society’.


It is interesting that the term ‘convergence’ had been much discussed in the seventies. It was a product of the Cold War – the idea that industrial economies would converge in their structure and organisation and that essentially Russia (then the USSR) would come to resemble the US and Europe. There is a whole thesis waiting to be explored in figuring out what happened to that idea in the vortex of history – too much to go into here.

The key learning for me was the complexity and interrelatedness of technological innovation and change with economic and social factors such as:


  • Factors of production
  • Technological interdependencies and linkages
  • Organizational structures
  • Sectional, regional and individual distribution of income and wealth
  • International interdependencies
  • Public and private demand


I applied this learning at the RBS when researching the impact of technological change on the employment context and prospects for visually impaired people. It was a time of considerable technological excitement as the personal computer began to penetrate the mass market – a signature moment was when TIME Magazine nominated the computer as ‘Machine of the Year’ for 1982.


One element of technological change which played into these investigations was machine vision: while wildly futuristic at the time, it was also becoming almost imaginable, and at the apparent rate of change and innovation it seemed possible within a foreseeable future. It turned out that machine vision, particularly ‘in the wild’, was actually much harder than might have been apparent, and is something that is only now (2017) starting to find widespread use in things like self-driving vehicles – and so far as I know it has yet to find practical application in the everyday lives of visually impaired people.

However, this sparked my general interest in the whole field of computers mimicking or emulating human cognition or perception. Expert systems were an area of market enthusiasm, one that I found particularly interesting. I actually crafted and experimented with my own primitive expert system shell, written from scratch in Turbo Pascal, which involved delving into the arcane and technical worlds of generative grammars and token parsing, as well as algorithmic inference processing.
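That shell is long gone, but the core mechanism of such systems, forward-chaining inference over hand-encoded rules, can be sketched in a few lines. The Python toy below is a hedged illustration in the spirit of an eighties expert-system shell (the rules and names are my own invention), not a reconstruction of the original:

```python
# A minimal forward-chaining inference engine: fire any rule whose
# premises are all known facts, and repeat until nothing new emerges.
# The rule base here is an invented toy, purely for illustration.

# Each rule: (set of premise facts, fact to conclude).
RULES = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_swim", "cannot_fly"}, "is_penguin"),
    ({"is_penguin"}, "lives_in_cold_climate"),
]

def infer(facts):
    """Derive all facts reachable from the starting set (a fixed point)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

result = infer({"has_feathers", "can_swim", "cannot_fly"})
print(sorted(result))
```

The ‘knowledge capture’ problem is visible even at this scale: every scrap of expertise has to be hand-translated into rules like these, and real domains need thousands of them.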

But ultimately, both for myself and for the world at large, expert systems proved to be a dead end. This was primarily due to the issue of knowledge capture – that is, the sheer effort required to manually encode knowledge into decision-tree-type languages. The other limiting factor was the limited processing power and memory storage available on the computers of the time.

The general interest in AI had peaked by the end of the decade – the cover of this 1992 Economist special feature would not be out of place today, but the discussion inside is much more about the limitations and cumbersome nature of the technology than about grand horizons.


Increasingly, the view came to be that any particularly advanced or clever piece of coding was seen as intelligent while it was a novelty, but it rapidly became ‘part of the furniture’ and thence part of the ‘dumb’ and rather pedestrian reality of IT which came to dominate our working lives.

During the course of the 90s the word ‘convergence’ at least in tech circles changed and came to be much-discussed in terms of the coming together of the traditional silo platforms of broadcasting, telecommunications and print. Pervasive digitalisation  broke the legacy nexus between the shape of content and the container which carried it – a voice call was no longer solely defined by being carried on a plain old Bakelite telephone network; a TV show no longer solely by arriving via a transmission tower and home receiver (the same for radio shows); music spread rapidly beyond the domains of the vinyl record, compact cassette and CD – it got ‘shared’ online; and the Internet carried news much further and faster than a newspaper.  This meant that commerce and regulation constructed on the premise that content could be priced and controlled by how it was delivered increasingly lost its force, both in logic and in practice.

Then over the first decade of the 21st century (the ‘noughties’), IP-based networks and then social networks came to play an ever more important role.  This has meant content became non-linear, interlinked and ‘uncontained’ while people increasingly expected to connect and communicate seamlessly – anywhere, anyhow, anytime. Entire new and massively successful network businesses emerged in the second half of the decade – Google and Facebook to name the most obvious.

‘Silos’ was the convenient way to describe the pre-convergence arrangements, and ‘Layers’ was an important alternative way to look at how the technological environment was changing – a way to describe the actuality of what was called convergence. Layers had been in common technical use for a decade or two before this, but it was around this time that the broader utility of the concept became apparent, since it is native to the way in which networks are constructed and the Internet works.

As the noughties wore on, it also became apparent that ‘layers’ as such could not ultimately and successfully grapple with all the developments in the marketplace. The ‘bright lines’ between layers are blurring under the impact of virtualisation and software emulation. An example of virtualisation is the way in which several physical computer servers can emulate a single large (virtual) computer, or a single large physical computer can emulate several (virtual) computer servers. This has been extended beyond the enterprise and is essentially the basis for cloud computing – the customer buys the computing and storage they need as virtual resources from the supplier, who takes care of the physical requirements. Multiple, inter-networked scale-free networks, which can configure to emulate many other network forms, better explain the complexity and rapid adaptability of the market in the current decade, whatever we decide to call it (the ‘tweenies’?).

So the term ‘silo’ was useful shorthand to describe the pre-nineties technological environment, ‘convergence’ summarized the nineties, ‘layers’ was useful for the noughties and ‘networks’ is perhaps most apt for our current decade, which as you may have noticed, is drawing to a close.

This ‘progression’ is reflected in the movement in the discussion of the technological environment from ‘convergence’ to the ‘networked society’ and ‘connected life’.  This shift does not suggest that the transition to the ‘networked society’ is complete, but rather that the concept of the ‘network’ better describes and encapsulates the current dominant movement and theme at work and influencing society during the decade.  Having remarked on this progression, the obvious question is to ask: what is likely to be the concept that fulfils this role in another decade’s time?

My stab at it a few years ago was ‘Sentience’.  I was deliberately avoiding the term AI (although as a loose umbrella term it can be read in) and chose a word that reflected a more modest level of machine capability. This ‘sub-intelligence’ if you like is conceivable from the intersection of a number of network and ICT trends / technologies emerging by the start of the decade (i.e. 2020).  It seems to me that ‘sentience’ might best be described in terms of being surrounded by, embedded in, environments that are in some way aware of the individual and their context – various relationships  to things, information and other people.  It will not be complete but its emergence seems likely to be a dominant theme for the next decade, a logical inheritor of the consequences from digitalisation, convergence and then network developments, with the same kind of wide ramifications for culture and society.

What is different now, in the contemporary explosion of interest in and practical utilisation of AI, is both the remorseless contribution of Moore’s law and the breakthrough in the algorithmic understanding of machine learning and its application in what is called deep learning. This is the technique which led to the victory by AlphaGo (the Google DeepMind app) when it played the human Go master Lee Sedol. It is also evident in everyday examples ranging from face recognition and language translation to predictive text and enhanced search algorithms – things that in the eighties would have been dubbed AI are everywhere!

I am not particularly married to the precise term ‘sentience’ – numerous others exist. For example, Shivon Zilis, an investor at Bloomberg Beta, surveyed every artificial intelligence, machine learning or data-related startup she could find (her list had 2,529 of them, to be exact). She addressed the labelling issue of using “machine intelligence” to describe how “Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have been of a lesser focus) … cutting across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks).”

She noted that:

I would have preferred to avoid a different label but when I tried either “artificial intelligence” or “machine learning” both proved too narrow: when I called it “artificial intelligence” too many people were distracted by whether certain companies were “true AI,” and when I called it “machine learning,” many thought I wasn’t doing justice to the more “AI-esque” like the various flavors of deep learning. People have immediately grasped “machine intelligence” so here we are.

And I landed on ‘sentience’ – it is important to note that it does not indicate an ‘end state’ but rather flags a way to discuss a possible dominant theme of its decade (say 2020-2030). Another theme will arise and it is relevant to consider what sentience would not describe: to establish the boundary conditions for the concept and think about what may remain ‘undone’ by 2030-ish.  Thinking beyond that boundary may in turn give clues about the shape of the decade and those to follow … noting that such a shape is impossible to discern beyond broad conjecture.

One direction for such conjecture might be about the emergence of ‘machine autonomy’. It can be useful (in terms of imagined scenarios) although increasingly dangerous (due to the temptations and risks of predictive hubris) to speculate even beyond the rise of autonomy to further phases, perhaps the realization of fully conscious artificial intelligence, perhaps the emergence of essentially incomprehensible ‘alien’ machine-based intelligence:

  • 2030s – ‘machine autonomy’?
  • 2040s – AI ‘awareness’?
  • 2050s – ‘Alien’ intelligence?

It occurs to me that perhaps this is where the fruits of ‘convergence’ as mentioned in this blog have come full circle.  It seems that the developments which can loosely be pulled together under the umbrella term AI are genuinely flagging the arrival of ‘post-industrial’ society, that the world Daniel Bell conjured with is emerging in front of our eyes, even if we do (can) not accurately perceive it.  However the shapes we can discern are certainly the source of some anxiety, particularly as related to the world of work – and that will be the topic for my next blog post …


Life cycle analysis

The life cycle concept is a useful tool for systems analysis and design thinking, adding the fundamental idea of renewal to what might otherwise be seen as linear processes. This captures a more dynamic view of systems, accepts change, favours flexibility and explicitly accommodates feedback into design. The life cycle idea is also closer to the general dynamic of social relationships and can have a powerful personal resonance, which can help bring design closer to people. However, the pace of change and a rate of innovation that simply will not slow may have implications for the life cycle of humanity itself, connecting the current shape of change with the deep development of human language, technology and society. Maybe we are coming to the inflection point where we might contemplate the historic end of the ‘life cycle’ over the next couple of decades, at least as far as human society is concerned.

This was a tricky blog to get started – the topic opened so many doors to dense topics it was hard to get the sense of a thread to join them. So I went for a run instead and after the quiet reflection that such activity induces I had the beginning and the end … now all I need to do is navigate between them. I hope you can stay with me for the ride.


I first came across the idea of the ‘life cycle’ as a formal tool for analysis and design in the early eighties when I picked up managing the new DEC VAX computer system while working at the Royal Blind Society, and subsequently broadened my Master of Commerce degree studies at UNSW to encompass Information Systems Management (ISM).

I took to the concept immediately, firstly because I am something of a natural systems thinker, and secondly perhaps because of my engagement with biology as a subject at high school – it was my enthusiasm for social biology that drove my initial interest in studying sociology at university. In the early seventies sociology was a new kid on the block at NZ universities – in fact it wasn’t available to first year students when I commenced at Auckland, dictating an initial stint with Anthropology and Psychology instead. But I digress …

I explored the lifecycle topic quite extensively in an ISM essay from around 1986, which I unearthed in that front room filing cabinet. I noted:

A ‘System Lifecycle’ notion is often used to describe the development, planned or unplanned, of information systems based on computing tools. Although “there are many different methods for representing the lifecycle … all contain essentially the same components” (7, P.13), these components can be viewed in a simple linear sequence. But when generalizing to the information systems activities of an organization or sizeable organizational unit, a more sophisticated understanding is necessary.

For organizations in the ‘real world’, the stability in simple linear models of change does not exist. A simple linear model would ignore the fundamental characteristic of information systems that they age and wear out like most other assets, and renewal must be part of the planning process. Therefore, as a first complication, we must accept “the complete life cycle of a system, from its initial conception to its ultimate disposal.”

An even broader sense of discontinuity to further complicate planning scenarios is pointed to by Buss when he suggests that “Complete uniformity across all IS projects is likely to be impossible because organizations will be at different stages in their use of … various technologies.”

I have chosen those paragraphs because they continue to ring true for the task of managing information technology today (the language and technology have changed, the challenges remain almost exactly the same!). They also capture the essence of how the life cycle concept contributes to analysis and design – by adding the fundamental idea of renewal to what might otherwise be seen as linear processes. This then captures a more dynamic view of systems, accepts change, favours flexibility and explicitly accommodates feedback into design.
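As a toy illustration (my sketch, not from the essay), the difference between a linear model and a life cycle is just the renewal feedback that closes the loop – ‘disposal’ feeds back into a fresh ‘conception’. The stage names below are illustrative, not a standard taxonomy:

```python
# Toy state machine for a system life cycle: the usual linear stages
# plus the renewal feedback that turns the line into a cycle.
STAGES = ["conception", "development", "operation", "decline", "disposal"]

def next_stage(stage: str) -> str:
    """Advance one step; 'disposal' renews into a fresh 'conception'."""
    i = STAGES.index(stage)
    return STAGES[(i + 1) % len(STAGES)]  # the modulo is the renewal loop

stage = "conception"
for _ in range(7):
    print(stage)
    stage = next_stage(stage)
# After 'disposal' the cycle renews rather than terminating.
```

Drop the modulo and the sequence simply ends at ‘disposal’ – which is exactly the simple linear model the essay argues against.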

The life cycle idea is also closer to the general dynamic of social relationships and can have a powerful personal resonance, which can help bring design closer to people. This was exemplified for me as I wrote the words to celebrate my eldest son’s marriage a few years ago, where I commented on the family pattern of ‘building up, letting go and welcoming in’ – a pattern that has been continued and confirmed by the most welcome birth of our first grandchild. I have appended the notes for that brief speech, which to my surprise was the subject of a number of favourable remarks, at the end of the blog (lightly edited to protect privacy).


I found the idea of a ‘life cycle’ to be useful well outside the narrow world of IT systems. For example I used it as a consultant in the late nineties to create a model of Neighbour Aid service delivery, identifying possible benchmark events for these Services. That model was built around the various activities and processes of groups of stakeholders (community management committee, service coordination, clients, and volunteers), reflecting what might be called their ‘life cycle’ in the organisation. The model then neatly provided a starting point to detail functions which Neighbour Aid services might usefully benchmark between themselves.

Single Code Customer Life Cycle

The concept then found applicability in my work as a consumer advocate at CHOICE in the noughties. In 2003 I advocated that a Customer Life Cycle perspective would provide a useful structure for a Single Telecommunication Consumer Protection Code. The code surfaced almost a decade later but, rather than explicitly using that framework, assembled various existing codes into chapters – thereby inevitably covering a number of customer life cycle events, but without that overarching logic.

I also used the idea to describe how advocacy worked, which found utility in the work of a consumer focus group convened by the Australian Communications Authority in 2004. It had the objective: “To improve the effectiveness of consumer input and influence to the regulation and governance of the communications industry.”

CDC Representational Cycle

We gave ourselves the label Consumer Driven Communications and we established a powerful logic in drafting our work into the ‘Strategies for Better Representation’ Issues Paper by combining what we dubbed (and rather crudely illustrated) the Representational Cycle with our version of the Regulatory Pyramid (a whole other discussion, probably for a further blog post sometime).

CDC Reg Pyramid

CDC Matrix

This produced a matrix of 36 topics which extensively covered the field of consumer engagement with the telecommunications industry and governance – a little too comprehensive for the appetites of some in the end perhaps. The group produced an extensive discussion document and then generated a final report with numerous recommendations, all of which pretty much disappeared under the tides of history, as the Communications Authority merged with the Broadcasting Authority to produce the ACMA in 2005. Digital archaeology on the project is difficult – Google will only unearth scattered, mostly cached results. Such is life, as many famous advocates have said.

Nevertheless, as I hope I have demonstrated, the ‘life cycle’ is a powerful and persuasive tool – it doesn’t fit every situation, but when it does apply it can deliver coherence and insight – both useful for analysis and design.

There is a distinct sense today of technological acceleration, and certainly in the IT world we seem to be seeing ever shorter life cycles, as we move from products and services dependent on hardware to those defined by software, and now to offerings crafted from data analysis by machine learning and AI.

It certainly seems clear that the pace of change and the rate of innovation will not slow. As one group with a real-world interest in understanding technological change (the Office of the [US] Secretary of Defense’s Rapid Reaction Technology Office NeXTech project) put it: “… in the period the team is actually supposed to be planning for, the strategic horizon of the next 25 years, we will see technologies literally one billion times more powerful than today.”

Some draw a conclusion relevant to the life cycle of humanity itself, connecting the current shape of change with the deep development of human language, technology and society. This line of thinking is illustrated by a reviewer’s summary of the thesis in Sapiens: A Brief History of Humankind by Yuval Noah Harari:

For the first half of our existence we potter along unremarkably; then we undergo a series of revolutions. First, the “cognitive” revolution: about 70,000 years ago, we start to behave in far more ingenious ways than before, for reasons that are still obscure, and we spread rapidly across the planet. About 11,000 years ago we enter on the agricultural revolution, converting in increasing numbers from foraging (hunting and gathering) to farming. The “scientific revolution” begins about 500 years ago. It triggers the industrial revolution, about 250 years ago, which triggers in turn the information revolution, about 50 years ago, which triggers the biotechnological revolution, which is still wet behind the ears. Harari suspects that the biotechnological revolution signals the end of sapiens: we will be replaced by bioengineered post-humans, “amortal” cyborgs, capable of living forever.

Perhaps this is, as the reviewer remarks, “exaggeration and sensationalism”: but maybe we are coming to the inflection point where we can contemplate the historic end of the ‘life cycle’ over the next couple of decades, at least as far as human society is concerned.

Do you agree?


Meanwhile on a more immediate & personal note, celebrating my son’s wedding I said:

I have often said that if I had known how great it was

to have kids, I would have started a lot earlier …

but of course then I wouldn’t have been able to

do it with my wife …

and it wouldn’t have been anywhere near as

great or as much fun.

I have personally learnt heaps,

and grown a lot, from being a parent.

By and large I reckon I have got more out of the

parenting deal than my kids have

… so I wouldn’t have missed it for the world.

As time has gone on, one of the things I have learned

is that the toughest job of parenting is not:

The missed sleep and 24/7 demands of the early years;

Nor the school concerts and soccer matches of the middle years;

Nor the taxi / linen / reserve banker services of later years.

No … those nurturing tasks are comparatively easy … because,

although they can be emotionally demanding

and sometimes physically draining,

you are in control.

I think the truly tough job of parenting is …

the letting go,

acknowledging you are no longer in control.

This was brought home to me when J***** set off on

his first long solo drive.

He was taking himself and his brother T*****

to a St John’s training camp in the Blue Mountains

… all I (we) could do was wave goodbye and then

trust that he (and T*****) would arrive in one piece.

However what we were trusting was not good luck –

we were trusting that J***** had learnt well

and that he would navigate safely and

independently to his destination.

And of course he did.

And he has been travelling well ever since, working hard,

making good choices and getting good results.

And today J***** is continuing his life’s journey

joining with J##### to build what will be the most

important asset they can possess between them

– a happy and productive partnership.

An old friend once remarked that the investment

he most valued was his marriage – something I have

always remembered –

I always say I have 3 super funds to maintain:

Firstly – my marriage;

Secondly – my fitness / health; and

Thirdly and only then – any actual super account dollars

– because without the first two,

money alone has little meaning or usefulness.

Growing value in a partnership is about

the opposite of letting go –

it means letting someone else in,

trusting, sharing and learning how to fit together.

What I have found to be quite miraculous about

creating a family is how that unit can grow

and extend.

I remember that when we were expecting T*****

(two and a half years after J*****)

I expressed some concern to my wife that

I loved J***** so much

I wondered how I could love

another child as much.

What I discovered was that the envelope of

family love extends easily and

T***** immediately had a huge place in our hearts.

So families are about welcoming in

as well as being about letting go,

While today marks for us a continuation

of letting J***** go,

we are also happy and proud

to welcome J##### to our family,

and to the extended family,

who over the last decades

have extended such a warm welcome

to me – one for which I am very grateful.

So, welcome J##### –

there is always room for one more.

We wish you and J***** all the best

on your journey together,

and as you craft your own pattern of:

building up;

letting go; and

welcoming in.

The narrative necessity

We are currently seeing the dark side of the narrative necessity being played out in politics and social media – never mind the experts, just give me the real story with facts I can conveniently believe in …
However, there is an abiding, important and useful role for positive narratives in our lives.  

One thread of commentary about the recently concluded G20 Summit Meeting has been a loss of coherent narrative flowing from the leaders at the event. There is a deep-seated need in humans for explanatory narratives, and ‘sense-making’ in terms of crafting and articulating such narratives is a critical role for leadership. We seem to need a narrative flow to give a sense of momentum and coherence to our lives, as we transition from moment to moment; without that sense of temporal structure we just have a collection of moments.

In the data-driven world of today, discerning and creating narratives to make sense of the myriad data points is more essential than ever. We are surrounded by more and more dots and the effort of joining them can be exhausting and at times overwhelming. While ‘being in the moment’ is great counsel and a source of comfort in the face of life’s pressures, the narrative ‘engine’ is the key to joining the dots, establishing direction and getting stuff done.

But here’s the thing – people just want a narrative that helps make sense, preferably one that helps simplify and streamline their world. It doesn’t necessarily have to be true, but it needs to be believable and consistent with the facts on the ground as we perceive them. And in a circular twist, our preferred narrative then guides our perception and selection of ‘facts’.

This is the stuff of cognitive biases, of which we are becoming more and more aware through fields such as behavioural economics. A recent blog in the Economist reported an interesting reflection on the persistence of beliefs in the face of contrary facts, especially noting “motivated reasoning, [which] is a cognitive bias to which better-educated people are especially prone.”


Being smart is no get-out-of-jail-free card!

If we are not careful, we can simply (or very cleverly) project what we want to see onto the essentially blank world of noisy and jumbled data. This human tendency to perceive meaningful patterns within random data has been termed ‘apophenia’. In the world of data, for example, this manifests itself in ‘overfitting’, where a statistical model emerges to fit noise rather than signal and/or ‘confirmation bias’, where information is sought or interpreted in ways that seek to prove ideas rather than test them.
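Overfitting can be demonstrated in a few lines – a toy sketch of my own (illustrative only, not from any of the studies mentioned): fit polynomials of increasing degree to data that is pure noise, and the ‘pattern’ the flexible model finds evaporates on a fresh, independent draw of the same noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent draws of pure noise: any 'pattern' a model finds
# in the first draw cannot generalise to the second.
x = np.linspace(0, 1, 20)
y_train = rng.normal(size=20)
y_test = rng.normal(size=20)

def rms_error(coeffs, x, y):
    """Root-mean-square error of a fitted polynomial against data."""
    return float(np.sqrt(np.mean((y - np.polyval(coeffs, x)) ** 2)))

for degree in (1, 10):
    coeffs = np.polyfit(x, y_train, degree)
    print(f"degree {degree:2d}: train error {rms_error(coeffs, x, y_train):.3f}, "
          f"test error {rms_error(coeffs, x, y_test):.3f}")
```

The high-degree fit always reports a lower training error – it has simply memorised the noise – while its error on the independent draw stays stubbornly high. That asymmetry is the statistical face of apophenia.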

That’s the down side and I think we are currently seeing the dark side of the narrative necessity being played out in politics and social media – never mind the experts, just give me the real story with facts I can conveniently believe in …

However, there is an abiding, important and useful role for positive narratives in our lives. This was beautifully enunciated by Viktor Frankl in his book Man’s Search for Meaning, first published in 1946. An Austrian psychiatrist before (and after) WW2, he drew on his experiences as an Auschwitz concentration camp inmate to document how, in even the most extreme circumstances, the human urge to seek and create meaning is crucial.

He shows that we have amazing powers of endurance, so long as it somehow makes sense to us to go on living: “He who has a why to live can bear with almost any how”. On this broad basis he worked out what he called ‘logotherapy’, a technique oriented to enable men and women to see meaning in their suffering, aiming to set them free from despair and find new courage to face circumstances which seem beyond them.


Frankl suggests “that mental health is based on a certain degree of tension, the tension between what one has already achieved and what one still ought to accomplish, or the gap between what one is and what one should become. Such a tension is inherent in the human being and therefore is indispensable to mental well-being.” That tension is narrative tension as we work on the story arc of our lives, and Frankl observed in the extreme circumstances of his heinous captivity that “The prisoner who had lost faith in the future [lost that narrative tension] was doomed.”

Beyond the personal, sensible evidence-backed policy is more important than ever, and policy makers need to acknowledge and resist various cognitive biases in their decision making. Those in leadership positions have both a need and an obligation to help people develop and sustain unifying and sustaining stories about what they are doing and why.

Authentic narrative is essential to meaningful existence. I attended the NSW U3A Network 2017 annual conference a couple of weeks ago, and one of the sessions was about Big History – unsurprisingly, space here does not permit a full exposition of the history of the entire universe. However the speaker, Prof David Christian of Macquarie Uni, did a fine job which is replicated in his TED talk on the subject: “The history of our world in 18 minutes” – can I strongly suggest taking a look?

Suffice it to say that his narrative arc from the big bang to the present ‘anthropocene’ provides a very interesting story and perspective – if anyone has the ear of a G20 attendee they might send them the link!

What is risk worth: does assuming greater risk equal greater productivity?

This blog suggests that establishing transparency about who bears what risk must be an integral, non-financial part of evaluating and making policy, as exhausting and inconvenient as that may be!

Well, that was exhausting! Applying for Seniors thingies and squaring away MyGov and the ATO involved phone calls, secret questions, password and mobile number resets, multiple emails, text message codes … but all in a good cause, protecting the security of my information, managing the risk of hacking and data breaches. What’s not to like, especially as we witness major cyber-attacks such as the WannaCry ransomware exploit and the more recent Petya attack?

Most people find the subject of risk management rather dry and boring (not to mention exhausting), but as my example shows, managing risk is something close to home, and we usually feel good if we are in control.

However, as my example also shows, this comes at a cost, in this case time and effort, and sometimes in money, like buying insurance. One way of reducing those costs is to wear the risk: to assume a higher risk profile.

The point I have been pondering is whether this is a genuine productivity gain i.e. efficiency that gets more for less, or is it a re-arrangement of the deckchairs which loads the costs forward into the impact if the risk event occurs? I also wonder about the translation of this question into public policy where governments essentially assume risk on behalf of citizens.

To unpack that thinking a little.

The thread I am pulling has a slightly obscure origin – it is called Baumol’s cost disease (or the Baumol effect), described by the economists William Baumol and William Bowen in the 1960s. William Baumol died very recently aged 95, still working …

The basic idea is that services like health care, education and government public administration are heavily labour-intensive, and show little growth in productivity over time because productivity gains come essentially from better capital technology.

To quote Wikipedia, “… the same number of musicians is needed to play a Beethoven string quartet today as was needed in the 19th century; the productivity of classical music performance has not increased. On the other hand, the real wages of musicians (like in all other professions) have increased greatly since the 19th century.”

The bottom line is you either get less symphony, or much more expensive symphony. This seems to be holding true even as computers and information technology have marched into these sectors. A current conceit is that digital transformation and even artificial intelligence (AI) will deliver the longed-for productivity increase. I think the jury is probably still out on that one.
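The arithmetic behind the effect can be sketched in a few lines. The growth rate and time horizon below are entirely made-up illustrative numbers, not Baumol’s figures: wages economy-wide track productivity growth in the ‘progressive’ sector, while the ‘stagnant’ sector (the string quartet) produces no more output per hour than it did before.

```python
# Illustrative sketch of the Baumol effect with made-up numbers.
annual_growth = 0.02   # assumed productivity growth, progressive sector only
years = 50

# Wages rise in line with economy-wide productivity (relative to year 0).
wage = (1 + annual_growth) ** years

# Unit cost = wage / output-per-hour; both outputs start at 1.0 in year 0.
progressive_unit_cost = wage / (1 + annual_growth) ** years  # output kept pace
stagnant_unit_cost = wage / 1.0                              # output flat

print(f"after {years} years: progressive unit cost {progressive_unit_cost:.2f}x, "
      f"stagnant unit cost {stagnant_unit_cost:.2f}x")
# → after 50 years: progressive unit cost 1.00x, stagnant unit cost 2.69x
```

Nothing in the stagnant sector got worse – its costs ballooned simply because everyone else got more productive.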

But to come back on point: notwithstanding the obscure observations of Messrs Baumol & Bowen, governments have diligently assumed a productivity dividend in their public services, either implicitly or explicitly, demanding that agencies deliver the same (or more) service with fewer resources.

From a taxpayer perspective what’s not to like: less wasteful public servants, lower taxes even perhaps? However, I suspect what we frequently get is actually less public service rather than more efficient public service. Sometimes that is OK, particularly depending on how much government you are inclined to think is a good thing. Deregulation can be a beautiful thing.

But what if some of that enthusiastic deregulation is not so much about reducing costs or producing efficiency and productivity, but rather about assuming a higher risk profile: shifting the deckchairs, crossing your fingers and praying there is no iceberg ahead?

This was the stuff of the GFC back in 2008 – punters were assured the financial engineering had made dubious investment products safer. But instead the ‘reforms’ had stored risk in all sorts of imaginative places, from whence it emerged with a vengeance. In another poignant example, while it is still relatively early days in the aftermath, it seems likely that the London tower fire involved a regulatory, compliance or enforcement failure somewhere. The ongoing program of tower inspections seems to indicate this is a systemic issue. Costs were saved, but these ‘benefits’ were generated not by efficiencies but rather by the imposition of now obviously unacceptable risks on people who were not only unable to control them, but who were simply unaware of them.

One of the great things about money (apart from the fact that it is very handy to let you get the stuff you want) is that it allows you to compare things that are otherwise incomparable – apples with pears, airports with motorways, pensions with superannuation. Hence the term ‘bottom line’ – money lets you sum it all up and make a call – a blessing for policy decision-makers. But I suggest that risk is another common denominator which can and must be used to inform decisions, and critically, it cannot itself be reduced to money. Indeed as the GFC demonstrated, there can be risks to money itself. Figuring out how to compare risk profiles and establishing transparency about who bears what risk must be an integral, non-financial part of evaluating and making policy, as exhausting and inconvenient as that may be!

A note on the featured image:

After arriving in Sydney I lived in a bed-sit in Surry Hills, then a far socio-economic cry from the current hipster paradise. One day, on a walk in the rain, a poster caught my attention, torn in half by the partial collapse of the wall on which it had been pasted. The rain had saturated the paper and the diffuse light lent the scene a soft intensity, amounting to a compelling and slightly disturbing image. Hot-footing it back to the bed-sit I grabbed the Polaroid camera I was experimenting with at the time and persuaded a neighbour to come along and hold an umbrella over me while I captured the shots. Many years later I painted the image as shown, as an element in a larger multi-media piece.

The strategic power of your brand

This blog tells the Charles Six Dot Zero ‘brand story’, and suggests that an emotional ‘brand’ connection can be a key device to achieve strategic alignment, focus strategic energy and direct strategic attention within a group or organisation.

First a story: Once as a young man living in Christchurch (NZ) I was visiting my parents in Auckland.  I was unhappy, directionless and quite possibly depressed for a variety of reasons, including a wallowing Master’s thesis and my father’s poor health – he had recently lost a leg to smokers’ artery disease.  I headed out for a walk one bleak afternoon.

Drifting through the suburban landscape I wandered into a derelict house which lay wide-open close to the footpath.  A ruinous scene seemed to echo my juvenile angst – nothing of interest.  However, as I turned to leave, my attention was caught by a flash of white on a shelf high in the desolate kitchen.

It was the edge of a dirty old tile which, when pulled down and brushed off, revealed a wonderful ceramic depiction of a lotus flower.  It was as if the universe had reached out to offer exactly the reassurance that was sorely needed at that time.  My mood lifted, I walked back to a freshly purposeful engagement with my thesis and a more supportive attitude to my Dad.  And re-invention started to produce the person who would move to Australia in a year or two. I have treasured that tile for over 40 years, and the design made an interesting study as I learnt to paint some 20 years ago.

Casting around for an image to populate the icon space on the WordPress template for my Charles Six Dot Zero blog, that painting was an immediate candidate. A little bit of graphic design magic and the focussed image featured for this post emerged.

For me it captures the deep emotional resonance of personal reinvention.  It reinforces the power of narrative in defining purpose and it encapsulates and expresses my key values – to be calm, centred, creative and connected.  In short it sums up the ‘brand’ charles6dot0, what I want to be and do in this sixth iteration, and provides an ideal visual representation for it.

In my previous blog What does ‘strategy’ mean today? I noted that achieving intellectual agreement with strategic intent is easier said than done, and that forging the necessary emotional engagement is often neglected.

The whole point of developing and articulating a strategy is to create a common direction for a group of people – something that might be termed ‘strategic alignment’, and it is essential if the strategy is to be anything more than ‘shelf-ware’. Obviously this is easiest when constructing a strategy for a group of one!

But strategy itself will probably be insufficient to engage people in groups larger than one or two.  It will need to draw on a deeper narrative about why the group exists, what the group is seeking to achieve and what it values – what you might think of as the ‘reason for being’.

Too easy.  Surely it’s obvious to members why they are there, even for quite large organisations. The temptation is to solve this query quickly; but it can actually be very difficult, because it does pose fundamental questions.

Usually it is indeed obvious to any given individual why they are part of a group and what the group is trying to achieve. It’s just that, surprisingly often, different answers are equally obvious to others in the group. Put different obvious answers in the same room and the ensuing conversations are not always easy, because the obvious is, well, obvious … and people are usually emotionally attached as well as intellectually invested in their views.  Social science tells us discussion is as likely to drive them deeper into their position as to persuade them otherwise.

One useful, but sometimes neglected entry point to discover this deeper narrative is the group’s brand statement – for an established organization this might be clearly and explicitly detailed; for other groups, like community organisations, it may be undocumented and less explicit, more akin to folk-lore.

Brand is much bigger than the visual identity that is the usual public face of a brand. An important element and summation point for brand development is capturing the essence of the organisation into a single and unifying statement—a Single Organising Idea or SOI.

The SOI should describe what the group wants to stand for in people’s lives.  An SOI should, in a single phrase or short sentence distilled from the promise, beliefs, values and core focus of the group, state the unique and distinct purpose of the group – and therefore of the brand.

Critically this statement should communicate and connect at an emotional level.  It is not necessarily stated in terms suitable for public exposure (that can come later when developing a brand ‘tag-line’) but it must resonate with at least the majority of members.

One caution however is to avoid design by committee, something that can sandpaper away the distinctive edge of your brand thinking into blandness.  Leadership is essential in framing, conducting and concluding the necessary conversation. When it’s done, it’s done, and someone has to call it – this is not necessarily the task of the formal office-bearers of the group, who may well contribute best by encouraging and empowering ‘situational leadership’ – probably the topic for another blog.

The behaviour of individuals associated with the group is a critical manifestation of the brand. So the brand should be developed and used to set the behavioural cue for everybody in the organisation.  It can and should be a catalyst for thinking, planning and action, thus becoming a strategic organising device to focus energy, alignment and attention within the organisation on its mission, intent and vision.

In one way or another, a brand for the group will exist, and to succeed any strategy must be consistent with this ‘reason for being’.  It is essential for strategy to link with a group identity and to resonate emotionally as well as intellectually with its promise to the community, what the people behind it believe and aspire to, the values it represents, and its core focus of action.

Alignment with these brand attributes will help communicate a strategy with consistency, integrity and longevity and assist to deliver the outcomes or capabilities it requires. Perhaps think of this as a positive form of ‘group-think’!