Regulating your way into paradise?

We like to think that a sustainable economy will be inclusive and prosperous – and some seem to think that an enhanced regulatory intervention is required to tie economic activity to sustainable practice. However, for me it is difficult to think of a better mechanism to achieve that than marketplace processes. Only the market, subject to reasonable democratic control, can process the information, generate the wealth, encourage innovation and manage the risk necessary to sustain a complex society. You are never going to produce excellence, originality or innovation just because a regulator thinks it’s a good idea. It may well be that there is a clear dynamic, and perhaps necessity, for the rewriting of the social contract between corporations, markets and society. It has to be fervently hoped that our democratic institutions and processes are even halfway up to the task.


Over the last couple of weeks a focus on fitness and networking outings has again distracted me from blog writing. Usefully so, because balance is critical – something confirmed in the pursuit of Tai Chi.

One networking event / learning opportunity I went to recently was a public forum on ‘Building a sustainable economy’ put on by the Centre for Policy Development. I was pleased to catch up with a couple of people whom I had not seen since the early noughties and my time at CHOICE.

High above the city at Level 40 of Governor Macquarie Tower, in the finely appointed offices of law firm Minter Ellison, we listened to serious people discussing serious issues around the questions of socially inclusive, environmentally sustainable and economically viable futures, seriously. There was talk of the need for re-imagining the social contract for corporations and business in the transition to a sustainable economy, in which the roles of social, environmental and economic capital are recognised and balanced.

It was particularly interesting to hear Geoff Summerhayes, an executive board member of the Australian Prudential Regulation Authority (APRA), speaking about how the transition to a low-carbon economy is in motion, and that local companies can opt to float with the transitional current or fight against the rising tide.

APRA is taking seriously the need for insurers and other custodians of funds to factor in the risks of climate change to their deposit holders. Its foray into this category of risk was flagged in a February 2017 speech by Summerhayes: Australia’s new horizon: Climate change challenges and prudential risk. Apparently this did not meet with universal approval, and there was suggestion at the time that APRA may have been moving outside its regulatory remit – a familiar problem for regulators in a difficult space seeking innovative directions.

There was a general sense in the panel discussion of industry and regulators making progress irrespective of, and to a large extent despite, the operation of politics as usual. Some of the questions from the folks assembled on the 40th floor suggested that somehow an enhanced regulatory intervention is required to tie economic activity to sustainable practice. It was not entirely clear what regulatory body might be a suitable vehicle for such intervention. But of course, that is only a preliminary and quite shallow problem in the proposition, which led me to a couple of reflections that I shared in conversation over drinks and canapés (I do love a good law firm hosted seminar!) after the panel discussion and Q&A.

Firstly, regulators in today’s world cannot think of themselves as somehow ‘outside’ the system, looking in – that is the conceit of supervision. In an overwhelmingly networked world, regulators of any domain are very likely to find themselves embedded within the network: a privileged node with special resources and a necessary stance of independence, but nevertheless interlinked and certainly with no monopoly on knowledge.

Secondly, and related to that point, however well-intentioned, you cannot effectively regulate what you do not understand. That takes investment and resources – which tend to be thin on the ground for the average regulator! It also means knowing what you do not know, and many contemporary systems are so complex, and evolving so rapidly, as to be literally beyond comprehension – even by the direct industry participants, let alone supposedly aloof regulators sitting above the fray.

This is not a counsel of despair – there are effective avenues of intervention available that recognise and draw on the premises of complexity theory. These go beyond the simple proposition of unintended or perverse consequences and recognise the essential primacy of self-organisation in complex systems. Hence an emphasis on the importance of self-regulation and the likelihood, or perhaps even inevitability, of sudden shifts (think Butterfly Effect, cascades and tipping points). It is also evident that complex systems are fragile, and the more complex they are, the more fragile they are. Fragility requires increasingly large investments of information and energy to keep the complex system functioning and to reap the rewards of that complexity – a potentially vicious circle for all concerned.

There was further commentary from the floor of the meeting which suggested an even deeper disillusionment with the viability of corporate moves to sustainability. There was a feeling among some that a sustainable economy was beyond the capacity of markets to deliver and that we were witnessing the failing of capitalism. To some extent these views echoed those recently reported of the former Greek finance minister Yanis Varoufakis, who has claimed capitalism is coming to an end – in his view because it is making itself obsolete with the rise of giant technology corporations and artificial intelligence. “And then what happens?” he is reported to have said – “I have no idea”. And of course that’s the thing, isn’t it: none of us do.

It’s fairly easy to conclude that the messy market system is failing because it’s not doing exactly what you want it to do. At that point people’s thoughts turn inexorably to ‘command and control’ – let’s just tell people what’s good for them and what to do. I’m reminded of a timeless remark by one of the first systems analysts I worked with, 35 years ago: he would say “We wouldn’t have all these politics if people would just do what I say”.

We like to think that a sustainable economy will be inclusive and prosperous – however, for me it is difficult to think of a better mechanism to achieve that than marketplace processes, subject to reasonable democratic control. Only the market can process the information, generate the wealth, encourage innovation and manage the risk necessary to sustain a complex society.


One thing I am certain of is that you cannot simply command yourself to be wealthy. That’s a difficult trick for an individual, let alone a complex society in a global context. It is a rule of thumb for regulators that you cannot create good behaviour by regulatory fiat. Markets can be messy, and periodically interventions are required in an attempt to focus the minds of participants on the greater collective good. There need to be effective mechanisms to address complaints, resolve disputes and settle grievances. That can certainly include trying to encourage good behaviour through education, guidelines and best practice – but you are never going to produce excellence, originality or innovation just because a regulator thinks it’s a good idea. Actually, in my experience from time spent working and advocating in various regulatory contexts, regulators are as often moved by markets and the expectations of society and culture in general as the other way around.

Basically, the goal to which regulatory energy is best directed is the mitigation of harms – that is, stopping the bad stuff rather than trying to induce the good stuff.

Those that command usually do just that, and that has the real potential to deliver significant adverse distributional effects, far worse than free market excesses (which is why the prevention of monopoly is one unimpeachable goal for regulatory intervention). In these circumstances, channelling my friend the systems analyst, democracy may well prove inconvenient. Winners have a nasty habit of deciding who prospers, who the losers will be and what penalty they may pay – and indeed probably what counts as ‘sustainable’, and for whom.

This may well be a classic case of being careful what you wish for, unless perhaps you expect to be an unprincipled winner. It may well be that there is a clear dynamic, and perhaps necessity, for the rewriting of the social contract between corporations, markets and society. It has to be fervently hoped that our democratic institutions and processes are even halfway up to the task, since the zombie likely to rise from the grave of capitalism will probably be a pale imitation of that which was joyfully interred, yet with the potential to be far more fearsome.

Creation: line & form

Take a wander through my creative thinking as I have tried to complete at least one charcoal drawing on paper each day over the last fortnight or so – and have a look at the results.

This blog post is a little delayed. I enjoy writing, and so I have been enthusiastic about keeping my commitment to a Charles 6.0 post every week – doing so has engaged my creative energy more than adequately. But (and there is always a ‘but’, isn’t there) I have found that this work has to some extent been absorbing creative energy from other things I have wanted to pursue.

A specific example is my wish to engage further with drawing – as I noted in an earlier blog, ‘Back to basics’, I have been going along to a U3A drawing group every fortnight for the last couple of months, revisiting the fundamental skill of free-hand drawing. I noted that I found it a bit challenging – my last engagement with this skill was longer ago than I first thought – but that I would persist, strongly feeling the need to relearn the basic skills of seeing and constructing images by hand. While I have engaged happily with the group, to improve I really needed to do more.

So a couple of weeks ago I decided to prioritise drawing over blog writing, to try to make time to do at least one drawing each day, with a view to sharing the results in a blog somehow. I have focused on the most basic drawing technique, black charcoal on paper, using objects scattered about the house, some of which I have used for creative inspiration in the past. For a number of them I have gone on to use the drawn image in creating digital collages with photographs from the streets, adding a layer of interest and complexity. Other events have conspired to divert my focus over the last fortnight, but on as many days as possible I have attempted a drawing. These are what I will focus on sharing in this blog, interspersed with digital play with some of them, and also a few random notes or jottings accumulated on my travels during the couple of weeks – apologies for more or less adequate segues 😉

So to my first drawing, of a retro china ballerina figurine which offered the opportunity to explore solid object modelling – a black figure in a pink dress, garnered from an op shop a couple of decades ago.

The next drawing was of a less substantial object, being one of a pair of brass butterflies, with filigreed detail best gestured at rather than precisely rendered. I like the idea of something as delicate and insubstantial as a butterfly being expressed in a material as solid as brass, and I had captured a rather interesting light effect on the pair a couple of months previously that echoed that thought on transience…


On the subject of ephemera, I saw on Facebook an “occasional address” from comedian Tim Minchin for a graduation ceremony at his old uni, The University of Western Australia – gosh, I just noticed that his robe echoes a butterfly!

As well as being witty and amusing, his nine life lessons resonate enormously with me – in summary:

  1. You don’t have to have a dream
  2. Don’t seek happiness (keep busy and make someone else happy, and you may get some as side effect)
  3. Remember, it’s all luck
  4. Exercise
  5. Be hard on your opinions (Identify your biases, your prejudices, your privileges)
  6. Be a teacher (Even if you are not a teacher, be a teacher)
  7. Define yourself by what you love (be demonstrative and generous in your praise of those you admire; be pro-stuff, not just anti-stuff)
  8. Respect people with less power than you.
  9. Don’t rush (there is only one sensible thing to do with this empty existence, and that is, fill it).

But back to the drawings – next I attempted a rendition of a china Buddha figurine, which once again I picked up in an op shop a long while ago.

While not formally an adherent, from what little I understand I am sympathetic to many Buddhist tenets, framing the possibility of a joyful embrace of the emergent universe – echoed in Tim Minchin’s riff on our ’empty existence’. Such was the intent of my subsequent digital employment of the sketch …



One lunchtime I went along to a presentation by Mark Deuze, Professor of Journalism Studies at the University of Amsterdam. It was an engrossing presentation – an hour or so well spent. He spoke about journalism start-ups and his forthcoming book, Beyond Journalism, which was the basic theme of his talk.

He described the operating environment for journalists as ‘liquid’, meaning that conditions are changing faster than ways of acting can consolidate into habits and routines. He was essentially describing the hollowing out of institutional journalism. He described the situation of media workers in terms of precariousness or precarity; that is, not knowing and not having control over what will happen next.

There was discussion of the journalist as a DJ, able to mix and match roles and value systems; of portfolio careers and cross-subsidised work styles; of journalists as individual brands; of how contemporary media workers are never ‘not at work’. Coincidentally, the next day the Productivity Commission published a report called “Shifting the Dial”, which looked at a range of public services in Australia but had some damning observations around higher education. While acknowledging the important role universities play in society, it also pointed out that the way they are funded and operated is leading to less than ideal outcomes for students. The report I read noted that:

Only 70 percent of graduates are employed in full-time work. That’s the lowest level since records began in 1982, and we’ve been on a steady decrease for the past decade. Also, nearly one-third of the graduates who are employed are working in jobs that don’t require their degree.

But wait, there’s even more! Graduate starting salaries have been declining when compared to average earnings, which means degrees are becoming less valuable at the same time the cost of study is increasing.

I found this profoundly interesting in the context of Tim Minchin’s address and the observations I had made in my recent blog Thinking about education, work & AI, with the long-story-short view that the world of information (how it is stored, distributed, navigated and utilised) has changed immeasurably over the last couple of decades, and the tertiary sector has struggled / is struggling to keep pace and stay relevant.

I guess we might just have to armour-up … I enjoyed sketching the suit-of-armour figure that I have treasured as a birthday gift from my eldest son many years ago. If it looks a little robotic, I guess guys in armour actually did look a bit like robots.


My next subject perhaps also holds a shield – this was a kitsch china figurine I found by the roadside, which I repainted in more basic colours and augmented with other found objects to create a sculptural assemblage that engages enigmatically with time and change.


I blended the drawn image with a recent photograph of weathered street posters to good emergent effect, while I have used the assemblage in photographic studies from time to time as well …


Nearing the end of the report into my adventures with charcoal, I drew this small wooden Balinese bust that I picked up somewhere on my travels. Perhaps an earlier photographic study captured something of the calm and reflective air it conveys …


… while the experimental combination of the sketch with an image of a dilapidated street poster produced an image that, for me, somehow provides a powerful abstract summary of what this blog is trying to do and say.



Thinking about education, work & AI

The world of information (how it is stored, distributed, navigated and utilised) has changed immeasurably over the last couple of decades, and the tertiary sector has struggled / is struggling to keep pace and stay relevant. An important challenge is withstanding the commercial pressures of running large institutions, exacerbated by the accelerating end of traditional education institutions’ monopoly on knowledge. A key question is how participating in university education can remain primary, or indeed useful, in equipping students to participate in the workforce. ‘Just-in-time’ education seems like the best contemporary strategy. Perhaps the future utility of education might be best seen as equipping people with the fundamental literacy to be able to devise and guide a learning path as closely aligned to their career and vocational aspirations as possible – ‘Literacy 4.0’. Smart people in future will need to understand and work across skill clusters and the various dimensions of the smart society – smart educators will help them …

This blog has its origin in a workshop I attended a week or so ago about AI and the future of work – you may recall a blog from me on that topic a short while ago. One of the exercises we undertook was a group conversation about what university education might look like in 2025. There was a fair bit of lively discussion about technology, collaboration and equity of access, but I must say without any definitive insights.

I think one participant hit the nail on the head when they posed the question: will there actually be universities in 2025? The obvious answer is ‘Yes’, but really the question was rhetorical, because while universities will in all probability continue to exist as institutions with that name for decades, the role, constitution and structure of the university is changing, and will continue to change. It is doing so not just under the pressure of technology but from social, cultural and economic developments.

Tertiary education is something I have thought about and discussed with many intelligent people over the years. The main things I got from tertiary study were the importance of structuring my thoughts and learning how to learn – both pretty much learnt by doing. Long story short, my view is that the world of information (how it is stored, distributed, navigated and utilised) has changed immeasurably over the last couple of decades, and the tertiary sector has struggled / is struggling to keep pace and stay relevant.

A key concern, of course, is the question of how participating in university education can remain primary, or indeed useful, in equipping students to participate in the workforce. There was acknowledgement, and some excitement, around the current move in university circles to offer micro-credits – if you like, ‘à la carte’ selection of elements from their portfolio of offerings. The sensible idea is that students can fine-tune their learning as close as possible to their specific needs. This comes close to the position I have arrived at: that ‘just-in-time’ education seems like the best contemporary strategy – a little bit at a time, focussed on a tangible goal, the next step, or a necessary credential. The days of imbibing a large body of knowledge early and living off it for years seem long gone (if that ever really worked).

I reckon there is no wrong pathway, but there are multiple pathways. While I respect people who commit to even extended academic journeys, my feeling is that an integrated work and learning pathway will track better in an environment of uncertainty about work futures. One obvious example is the apprenticeship model. In the past this worked in essence like the degree model, in the sense that the apprentice/student gained a body of knowledge that was meant to last a lifetime of employment. One important development for both over past decades has been the recognition of the need for constant skill maintenance; however, this still operates within the original silo. What is increasingly necessary is the ability to chart a course between learning models and across spheres of employment, to create an individualised trajectory of knowledge acquisition and value creation.

The University of Melbourne has launched a brand campaign showcasing what it regards as its distinctive curriculum, the ‘Melbourne Model’. Its YouTube video presents the idea of an education that equips students with world knowledge, so that they can adapt and be ready for every possible future.

As slick as this is, it strikes me that an important challenge with this approach is withstanding the commercial pressures of running large institutions, exacerbated by the accelerating end of traditional education institutions’ monopoly on knowledge – universities aren’t the only ones using YouTube! How often have you heard someone say, ‘Oh, I learnt how to do that from a YouTube video’? The ‘University of YouTube’ might serve as an umbrella term for the ready and instant availability of knowledge and ‘how-to’ instruction on the Internet. Not to mention the access to the vast storehouse of human knowledge that Google (and other search engines) have given over the last couple of decades, and the power of social media to foster the rapid emergence of communities of interest and practice to share and develop knowledge. Sure, there are quality and trust issues, but that doesn’t stop people successfully using these information resources all the time, in both their personal and professional lives.


In some ways it’s perhaps analogous to the problems facing subscription television – a classic fixed cost versus variable income problem. As their offering is increasingly unbundled and contested by ‘watch only what you want’ streaming services, maintaining the integrated network infrastructure becomes increasingly difficult. Similarly, universities require significant financial, logistical and educational agility to sustain a coherent offering amid a swarm of micro-learning opportunities.

One key aspect of the modern knowledge equation is the advent of AI and big data, and it will be interesting to see how the application of data analysis to educational design and experience plays out. AI will not be monolithic: various actors and agents will contend and contest, and are unlikely to be perfect. Humans are likely to be the adults in the room for quite some time to come. In fact, it occurs to me that governance will be a growth area in AI-world, along with curation and editing of AI-based products to best fit human needs. One very interesting area will be learning and developing ways to interface with AI-based systems – the common office screen and mouse systems are likely to go the way of the command line DOS prompt of old, and perhaps work interfaces will come to resemble contemporary digital game environments?

Students will definitely need to get accustomed to an increasingly data-dense educational environment. A positive outcome could well be better management of diversity and complexity in catering to individual student needs and wants, enabling a better matching of learning environments, methods and partners. Perhaps AI-assisted ‘adaptive education’ could assist in managing these multiple pathways?

In this context it was interesting to note the 2017 report from the AI Now Institute at New York University, focused on the use of AI in government and the law.  It suggests that “the design and implementation of this next generation of computational tools presents deep normative and ethical challenges for our existing social, economic and political relationships and institutions.”

It cautions that “Core public agencies, such as those responsible for criminal justice, health care, welfare, and education should no longer use ‘black box’ AI and algorithmic systems”, since difficult decisions need to be made about how we value fairness and accuracy in risk assessment. This is not merely a technical problem, but one that involves important value judgments about how society should work. These concerns, expressed in the report particularly about the legal system, would seem just as applicable to educational institutions, which are as susceptible to perpetuating AI-driven harm as any other. Thinking about this leads me to an observation that education can largely be seen as a lagging institution: rather than driving the massive social and economic shifts, it is essentially driven by them, adapting and configuring its offerings to suit the times.

One notable exception to that observation which occurs to me, however, is the fundamental role of literacy as a social and economic enabler. Perhaps that is one way of conceptualising the future utility of education: rather than providing an intellectual toolbox or skill set, it might be better seen as equipping people with the fundamental literacy to be able to devise and guide a learning path as closely aligned to their career and vocational aspirations as possible. Dr Josh Healy, Senior Research Fellow at the Centre for Workplace Leadership, Faculty of Business and Economics, University of Melbourne, writes about what researchers are calling the new literacies – what he terms ‘Literacy 4.0’. Educational institutions and educators will need to consider how they can anticipate the new and changing lattice of options and adapt themselves to best assist their students to navigate that environment.

The Foundation for Young Australians (FYA) spells out where that literacy might be used (without using the term) when it discusses 7 ‘clusters of work’ – skills that are transferable across jobs – to help young people (and I would suggest people of any age) navigate the new work order. Their conclusion:

By understanding the skills and capabilities that will be most portable and in demand in the new economy, young people can work to equip themselves for the future of work more effectively. Our mindset needs to shift to reflect a more dynamic future of work where linear careers will be far less common and young people will need a portfolio of skills and capabilities, including career management skills to navigate the more complex world of work.

The FYA analysed skills requested by employers across 2.7 million online job advertisements posted over the past two years; occupations were then grouped based on whether employers demanded similar skills from applicants. They clustered into the following seven groups (take a look at the report to explore them further):

  • ‘The Generators’
  • ‘The Artisans’
  • ‘The Coordinators’
  • ‘The Designers’
  • ‘The Technologists’
  • ‘The Carers’
  • ‘The Informers’

A glimpse of the future smart society in which these clusters of skills might be used is given in a World Economic Forum (WEF) piece titled: The society of the future looks nothing like you might imagine. A ‘smart society’ is defined as one where digital technology, thoughtfully deployed by governments, can improve on three broad outcomes: the well-being of citizens, the strength of the economy, and the effectiveness of institutions. A natural group of countries to use as role models was the Digital 5, or D5, nations, representing the most digitally advanced governments in the world. The group comprises Estonia, Israel, New Zealand, South Korea, and the UK – sadly, Australia does not seem to rate a mention.

The WEF used the D5 nations to define a global benchmark for a smart society, organising the indicators so that each could be classified under one of 12 broad benchmark components. These broad components are:

Citizens/People Components:

  • inclusivity,
  • environment and quality of life,
  • state of talent and the human condition,
  • talent development.

Economy Components:

  • global connectedness,
  • economic robustness,
  • entrepreneurial ecosystem,
  • innovation capacity.

Institutions Components:

  • freedoms offline and online,
  • trust,
  • safety and security,
  • public services.

It would be an interesting exercise to map the seven FYA skill clusters more precisely across these WEF smart society benchmarking components – something for another day perhaps.  In any event, smart people in future will need to understand and work across both these dimensions – smart educators will help them …


Croquet conversations

Sometimes when asked what I am doing in retirement, I say I am seeking to have ‘interesting conversations’. For this blog I have taken a different approach to usual, weaving various fragments of recent conversations on the sidelines of the week-long Manly Croquet Club Seabreeze tournament into a single fictionalised exchange.

By the lawn at the Manly Croquet Club Seabreeze tournament:

It’s taken us ages to drive here, there was an horrendous accident on the parkway that held up traffic for over 30 minutes – it took us an hour and a half to get here this morning.

Yes, we were faced with a three bridge problem coming from Marrickville: the Anzac Bridge, the Harbour Bridge and the Spit Bridge, each with its own particular idiosyncrasies. So rather than drive to Manly every day, we decided to stay over for the week of the tournament and rented an Airbnb granny flat just on the other side of the golf course – about 20 minutes’ walk from here.


Wow, that’s a good idea!

Yes, we figured that people actually used to go to Manly for the holidays – back in the day, before they invented Tuscany…

How have you found it … Airbnb that is?

It was a great decision – we’ve called it our ‘cruise on dry land’ and it’s worked out brilliantly. The young couple who own the house have a young toddler – actually one thing that might have bugged me in the past, but that I’ve enjoyed in grandparent mode, has been the literal pitter-patter of little feet overhead.

It seems to me, with all this development adding more and more people and generating ever-increasing traffic, that Sydney is drowning in its own prosperity.

“Into every life a little rain must fall”, as my mother often said as she counselled resilience in the face of adversity. You know, watching this very welcome spot of rain sprinkling on the dry croquet lawns, it occurs to me that it can also be true in the positive alternative – that is, whatever adversity one might face, a little relieving rain will ultimately fall.

Yes, but it’s hard to see where the sprinkling of rain to ease the pain of the Sydney traffic might come from – it’s certainly a paradox that the more capacity and infrastructure we add, the more traffic we seem to generate.

I agree, that certainly seems the case. You know, I fully get the idea that the best solution to congestion is actually congestion, eventually it will be self-managing.  But that is completely politically unsaleable, since everyone wants their particular connection problem solved, which walks us back into the problem.

A wicked problem!

Indeed! Perhaps technology will help us through better traffic management and perhaps congestion charging by location. Of course technology in the future often doesn’t get such a positive spin – Do Androids Dream of Electric Sheep?

[Puzzled look]

The original sci-fi novel that was the inspiration for the movie Blade Runner. I am really looking forward to seeing the sequel that was released recently.

Is that so?  I’ve not seen the original movie.

It’s well worth seeing – it seems to me to be quite an important work, not exactly a prediction but it looked at important and pressing issues, which are still relevant today, maybe even more than when it was first released.  A bit like what Mark Twain said about the past – history doesn’t repeat but it rhymes. Maybe in the same sort of way good thinking about the future won’t predict it, but is likely to rhyme with it?

I don’t think that I’m that much in tune with modern technology – it all seems a bit much to keep up with, frankly.

I don’t think you sound as much of a techno-sceptic as the woman at my U3A drawing group. When asked if she had received the newsletter, she said ‘No!’ Told that it was sent as an email attachment in a PDF file, she observed with considerable vehemence that if it was not on paper she did not want it, and that in any case she did not have email. I must admit I was impressed and surprised by such a closed mind in a group that is supposedly engaged in creative practice.

Well yes, I wouldn’t count myself as such a digital refusenik – I wonder if they are digitally disenfranchised or just living with their heads in the sand?

I am reminded of a point made in an interesting sci-fi book I finished reading recently – Excession, by Iain M. Banks – about what he called ‘the dependency principle’. He said however smart you are (and here he was talking about hugely advanced AI Minds), you should never forget where the OFF switch is located. He makes the point that base reality remains essential – the electricity that makes computers work and the basic physiology that enables the human mind to function. Lose that base reality and you lose everything constructed on top of it! So there is a danger in being dependent on technology …

So there’s nothing wrong with staying in touch with day-to-day reality … for a different take on what laughingly passes for reality, I just finished Salman Rushdie’s Midnight’s Children, which was a fantastical cornucopia of detailed insights and observations. To me there is a compelling comparison between this work and The Tin Drum by Günter Grass. I don’t see it as derivative but rather a member of the same genre, whereby a preternaturally gifted child observes and influences the great tapestry of history – Saleem by virtue of his telepathic gifts and acute nose, and Oskar with his glass-shattering voice and insistent drumming.

Wow, interesting … I guess reading and appreciating literature is perhaps also a form of conversation, both with the author and with yourself, as well as being material for conversation with other people. Something I’ve done in the few months since retiring has been to re-watch all seven seasons of the television series ‘The Shield’. Watching it in its entirety across a reasonably short time-frame, I found it quite Shakespearean in the depth of its drama of a man acting very badly while striving to fulfil good intentions of comradeship and family – perhaps even better than the legendary Breaking Bad.

Really? I don’t actually watch much television myself.

We don’t watch broadcast television much at all anymore either, but I think that the long-form television series, delivered by pay TV, DVD or streaming, is arguably the preeminent literary form of the first couple of decades of the 21st century – just as the serial works of Charles Dickens and Thomas Hardy became literary icons of the 19th.

Hmmmm, maybe … anyway how is the croquet?

I’ve been making a few tournament updates on Facebook illustrating each with a photograph of a flower from the fine garden beds surrounding the lawn.


When I mentioned this to someone from the club, the response was that the gardeners, who put in a lot of work, would be very pleased with the acknowledgment of their efforts. I also mentioned the croquet-themed men’s room sign I employed for one update, and was quite staggered that a long-standing club member had never noticed such unique signage – same for the women’s, by the way.


Interesting what people don’t see. How’s your play been?

Oh, I managed to make a break of 9 hoops in a row in my last game – lost the game, but that made it worthwhile.

Congratulations! Becoming one with the mallet …  I’ve had a few shorter breaks, but my current specialty seems to be making my opponent play their worst game … sometimes I win, sometimes they do, but often low scoring, scrappy games. As I commented to one of my opponents, it’s like being stuck in bronze hell…

Oh well, such is life, as Ned Kelly told his gang!

Not such a wicked problem I guess. Oh, I think that’s my turn on the lawn – been good talking to you.

Likewise, good luck!


The specialist generalist

Over the years I have come to understand that my real aptitude lies in helping to explore and define problems rather than crafting specific and technical solutions. Do this well, as I have at various stages, and you’ll find yourself in positions where the latitude to sit and think is extended significantly and can in fact become the accepted reason for your continued employment and contribution. At that point you have moved beyond being a simple, practical generalist and have started to engage with the role of the specialist generalist. This is someone who, rather than simply bringing together a variety of specialties, works in the world of the complex and the unknown, to define and appreciate problems and then to architect the shape of possible approaches and solutions. I must say I am enjoying the freedom of Charles 6.0 to suit myself as to those problem domains and chaotic edges in which, as a specialist generalist, I choose to dwell.

Since ceasing full time work, and starting my Charles 6.0 transformation, I have met a number of people (some more significantly advanced in years than my own) that continue employment on a consulting or contracting basis. It has sometimes been suggested that perhaps that’s something I would like to do. Quite apart from being pretty fully occupied without having any work-like obligations, one reflection on this has been that these people normally have a highly specific and singular expertise that is valued in the marketplace, such as database programming, construction engineering or town planning.  The world of the consultant contractor is the world of the dedicated specialist.  Quite reasonably most clients are looking for someone to undertake a specific task with well-defined outcomes – that way they know they will get at least an approximation of what they are paying for.

At various times during my working life I have worked as a self-employed consultant/contractor. To be completely frank, I’ve never really been all that good at it. While I easily discharged the usually IT-related tasks (such as application coding or database design) entrusted to me by various clients, I generally wanted to do more, and often found the focused, repetitive aspect of the work they wanted me to specialise in frustrating and somewhat unfulfilling – when I’ve done something once, I generally want to solve a different problem or acquire a fresh skill. The basic problem is that I’m interested in too many things.

Over the years I have come to understand that my real aptitude lies in helping to explore and define problems rather than to craft specific and technical solutions.  However, usually people either feel they have a good handle on what needs doing or they lack the trust necessary to commission someone else to explore the problem space.

By and large there is also an inclination to ‘rush to solution’ – there is little appreciation of the art and skill of sitting with a problem long enough to understand its true demands and dimensions, which quite frequently differ from the immediately presenting issues. Newsflash: that is not a proposition easily sold into a competitive marketplace – there are not many clients willing to pay someone to sit with a problem – they want it solved, ASAP!

I actually found the best place to practise that particular art is as a full-time employee – oftentimes you can layer the necessary time spent sitting and thinking in among all the busy work that employers seem delighted to visit upon their workers. It is here you can cultivate the position of the generalist employee, easily deployed to various tasks but sometimes lampooned as the ‘jack of all trades, master of none’. However, as the Wikipedia entry about that saying notes, such an individual may be a master of integration, knowing enough from many learned trades and skills to be able to bring the disciplines together in a practical manner – what I would call a practical generalist.

Do this well, as I have at various stages, and you’ll find yourself in positions where the latitude to sit and think is extended significantly and can in fact become the accepted reason for your continued employment and contribution. At that point you have moved beyond being a simple, practical generalist and have started to engage with the role of the specialist generalist. This is someone who, rather than simply bringing together a variety of specialties, works in the world of the complex and the unknown, to define and appreciate problems and then to architect the shape of possible approaches and solutions.

One conceptual tool I have found useful to frame complexity in this context is what is commonly known as the ‘Stacey diagram’, named after the British organisational theorist and Professor of Management Ralph Douglas Stacey. It has apparently been frequently adapted by other writers – as noted by Wikipedia, often in ways not consistent with Stacey’s own thinking, to the point that apparently ‘he dropped the diagram and now argues against its use’. I am as guilty of appropriating and extending his original thinking as anyone! But I find it incredibly useful as a framework for analysis and thought, and so I have sketched my own take on it, as illustrated here.

[My sketch of the Stacey diagram]

There are two axes to the diagram – Uncertainty and Disagreement:

  • The horizontal x-axis is Uncertainty. An issue or decision is close to certainty when cause and effect linkages can be determined. This is usually the case when a very similar issue or decision has been made in the past: you can use past experience to predict the outcome with a good degree of certainty. The other end of the continuum is ‘far from certainty’, when the situation is unique or at least new to the decision makers. The cause and effect linkages are not clear, and using past experience is not a good way to predict outcomes.
  • The vertical y-axis is Disagreement. This measures the level of agreement about an issue or decision within the group, team or organisation. The degree of agreement on what should be done is an important factor in determining success.

I have found a very useful and succinct exploration of the Stacey matrix in relation to the art of management and leadership on this GP training resource archive. It maps various forms of decision making onto the matrix: Technical rational in the ‘simple’ region, close to certainty and close to agreement – in terms of this blog, the place for the specialist; Political for the area with a great deal of certainty about how outcomes are created but high levels of disagreement about which outcomes are desirable; and Judgmental for the opposite set of issues, with a high level of agreement but not much certainty as to the cause and effect linkages that create the desired outcomes.

Political and Judgmental for my purposes here are the realm of the ‘practical generalist’.

And then there is the Complexity zone, which lies between these regions of traditional management approaches and chaos, and is the natural home of the specialist generalist.
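To make the matrix concrete, here is a minimal sketch in Python of how an issue’s position on the two axes might be mapped to one of these zones. The numeric cut-offs are entirely illustrative assumptions of mine – the Stacey diagram has no canonical numbers:

    def stacey_zone(uncertainty: float, disagreement: float) -> str:
        """Map an issue's position on the two Stacey axes to a zone.

        Both inputs run from 0 (close to certainty / agreement) to
        1 (far from certainty / agreement). The 0.3 and 0.8 cut-offs
        are illustrative only - the diagram has no canonical numbers.
        """
        if uncertainty > 0.8 and disagreement > 0.8:
            return "chaos"        # far from certainty AND agreement
        if uncertainty < 0.3 and disagreement < 0.3:
            return "simple"       # technical rational: the specialist's realm
        if uncertainty < 0.3:
            return "political"    # sure how, contested what: the practical generalist
        if disagreement < 0.3:
            return "judgmental"   # agreed what, unsure how: the practical generalist
        return "complexity"       # the specialist generalist's natural home

    # A hypothetical issue: quite novel, with moderately contested goals
    print(stacey_zone(uncertainty=0.7, disagreement=0.5))  # -> complexity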

[My sketch of the complexity zone on the Stacey diagram]

A few observations on what is required to work this close to the edge of chaos – for it to be a ‘zone of opportunity’ …

  • Be prepared to have absolutely no idea what you’re doing much of the time! The qualification is the ability to ascertain rapidly what needs to be known and to acquire that knowledge quickly, rather than having a stored repertoire of specialist knowledge to hand.
  • Work on the basis of principles rather than rules. I like a recent post I found on LinkedIn – ‘Burn Your Rule Book and Unlock the Power of Principles’ – which observed: “Principles, unlike rules, give people something unshakable to hold onto yet also the freedom to take independent decisions and actions to move toward a shared objective. Principles are directional, whereas rules are directive.” But a specialist generalist needs to be prepared for uncertainty even here: for paradigm shifts in the set of principles to be applied in a given space, and for room for innovation and novel principles to emerge.
  • Be a systems thinker – I like the following illustration of the Tools of a System Thinker (attached to a tweet by @SYDIC_ITALIA, the Italian chapter of the international System Dynamics Society – no further reference to acknowledge). However, a specialist generalist must be an open-ended systems thinker, sensitive to emergent systems and to proto-systems at the edge of chaos. You cannot insist on systems at all costs, but need to utilise the insights systems thinking can generate. Be a network systems thinker: value the connections in the models you will perceive and generate, as well as utilising networks of skill and knowledge around the problem space.

[Illustration: Tools of a System Thinker]

It took me a long time to recognise and name myself as a ‘specialist generalist’. It is a very difficult and demanding role, one that is hard to sell and articulate, but one which can deliver dividends with multiplier effects well beyond the contributions of specialists and practical generalists, since it is the role that seeks innovation, requires agility and rewards resilience. That said, in the end, with respect to my specialist computer-related skills, I decided to employ my abilities to my own ends rather than try to meet the often poorly articulated and contradictory needs of clients, be they internal or external. I must say I am similarly enjoying the freedom of Charles 6.0 to suit myself as to those problem domains and chaotic edges in which, as a specialist generalist, I choose to dwell.


AI and future of work

‘Work’ is a fundamental shaper of everyday life. There is widespread agreement that AI technology is making possible the automation of a wide range of non-routine cognitive tasks, with obvious consequences for those currently doing those tasks. Orthodox economics proposes that a general equilibrium will prevail; however, the live and urgent question today is whether this time it is different … maybe the historical pattern of compensating job creation has broken. Planning for how society, nationally and globally, might need to respond would seem like a useful insurance policy.

Last week’s blog explored Artificial Intelligence (AI) technology – broadly defined – and this week I am looking at the implications of this technology for the future of work. I first encountered AI tech when researching the impact of technological change on the employment context and prospects of visually impaired people while at the Royal Blind Society (RBS) in the early 80s. I observed that it was a time of considerable technological excitement, but that there was also considerable anxiety about the impacts of automation on jobs.

It’s no new thing for people and society to stress about the impact of changing technology on jobs and employment – apparently Queen Elizabeth I refused a patent for a knitting machine because of the poverty it could cause, and in the early 1800s the Luddites, poster children for technological resistance, destroyed weaving machinery. It is worth noting that the idea the Luddites were protesting in an attempt to halt the progress of technology is a misconception – rather, they were trying to gain a better bargaining position with their employers.

The late seventies and early eighties were a time of relatively high unemployment, and the Australian debate was framed by a Senate Committee of Inquiry into Technological Change in Australia, culminating in the Myers Report. Many of the headlines of those times would not be out of place today – perhaps the more things change, the more they stay the same?

The conclusion of the report, and the outcome we can see today, is that while micro-processor driven technology innovation might destroy jobs, it would also create them, usually in greater numbers and with better pay. In defence of his Committee’s report, Prof Myers gave a good summary of the orthodox economics proposition that a general equilibrium will prevail:

When technological change lowers the price of goods by cutting labour, families will spend the money so saved on other goods and services and in doing so will generate jobs often in quite unrelated areas of the economy. (AFR 30 Sept 1980)

However, the live and urgent question today is whether this time it is different … maybe the historical pattern of compensating job creation has broken. The critical debate is whether the past pattern of technological change generating wealth and supporting work as a distribution mechanism will hold or whether there has been some kind of de-coupling. The case for such a de-coupling is persuasively and succinctly argued in this brief video piece that explores the consequences of an increasingly automated workforce, worth watching: “Humans Need Not Apply” by C.G.P. Grey.

In 2013 Carl Benedikt Frey and Michael Osborne used a functional analysis to show that, with the availability of big data, a wide range of non-routine cognitive tasks are becoming computerisable, and they estimated “that sophisticated algorithms could substitute for approximately 140 million full-time knowledge workers worldwide”. The Bank of England built on this in a study reported by its Chief Economist, Andy Haldane. Using the Frey and Osborne methodology, the Bank did its own exercise to produce a broad-brush estimate suggesting that up to 15 million UK jobs could be at risk of automation. In the US, the corresponding figure would be 80 million jobs.

Lending support to the idea that this time it may be different, Haldane noted:

… new-age machines will be thinking as well as doing, sensing as well as sifting, adapting as well as enacting. They will thus span a much wider part of the skill distribution than ever previously. As robots extend their skill-reach, “hollowing-out” may thus be set to become ever-faster, ever-wider and ever-deeper. As digital replaced analogue, perhaps artificial intelligence will one day surpass the brain’s cognitive capacity, a tipping point referred to as the “singularity” (Stanislaw (1958)). Brad Delong has speculated that, just as “peak horse” was reached in the early part of the 20th century, perhaps “peak human” could be reached during this century (Delong (2014)).

It is worth noting, for balance, the OECD Social, Employment and Migration Working Paper, ‘The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis’ by Melanie Arntz, Terry Gregory and Ulrich Zierahn. The paper specifically addresses the occupation-based approach proposed by Frey and Osborne (2013), arguing that it “might lead to an overestimation of job automatibility, as occupations labelled as high-risk occupations often still contain a substantial share of tasks that are hard to automate.”
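To illustrate that methodological point with a toy example of my own (the numbers below are invented, not drawn from Frey and Osborne or the OECD): an occupation-based approach flags every worker in an occupation once its overall automatability score crosses a threshold, while a task-based view counts only the automatable share of work within each occupation.

    # Toy numbers only - illustrating the occupation- vs task-based contrast,
    # not reproducing Frey & Osborne's or the OECD's actual estimates.
    occupations = {
        # occupation: (workers, [(share_of_job, task_automatability), ...])
        "clerk": (100, [(0.7, 0.95), (0.3, 0.30)]),  # routine tasks dominate
        "carer": (100, [(0.2, 0.80), (0.8, 0.10)]),  # mostly hard-to-automate tasks
    }
    HIGH_RISK = 0.7  # "high risk" cut-off, in the style of Frey & Osborne

    occ_jobs_at_risk = task_share_at_risk = total_workers = 0
    for workers, tasks in occupations.values():
        score = sum(share * auto for share, auto in tasks)  # whole-occupation score
        if score > HIGH_RISK:              # all-or-nothing: every worker counted
            occ_jobs_at_risk += workers
        task_share_at_risk += workers * sum(  # only the genuinely automatable tasks
            share for share, auto in tasks if auto > HIGH_RISK)
        total_workers += workers

    print(f"occupation-based: {occ_jobs_at_risk / total_workers:.0%} of jobs 'at risk'")
    print(f"task-based:       {task_share_at_risk / total_workers:.0%} of work at risk")

On these made-up numbers the occupation-based count says half of all jobs are at risk, while a task-level view puts rather less of the actual work at risk – the direction of the gap Arntz, Gregory and Zierahn describe.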

Overall, they find that, on average across the 21 OECD countries, 9% of jobs are automatable. They critically reflect on the recent line of studies generating figures on the “risk of computerisation” and argue that the estimated share of “jobs at risk” must not be equated with actual or expected employment losses from technological advances, because:

  1. The utilisation of new technologies is a slow process, due to economic, legal and societal hurdles, so that technological substitution often does not take place as expected.
  2. Even if new technologies are introduced, workers can adjust to changing technological endowments by switching tasks, thus preventing technological unemployment.
  3. Technological change also generates additional jobs through demand for new technologies and through higher competitiveness.

A similarly balanced yet cautionary view is adopted by Merrill Lynch in their A Transforming World report:

The limiting case here would be general purpose robots that are effective substitutes for human labor but at a fraction of the cost. In that case, widespread unemployment could be an outcome – it depends on whether there develops a large enough sector in the economy where humans have a comparative advantage. This could be the arts and entertainment, or personal care services, or areas that involve deeper analytical thinking that is not amenable to existing forms of AI.

How all this will play out in labour markets and more broadly is obviously a matter of speculation – or, more productively perhaps, of scenario analysis and planning. That latter approach is exemplified by a PwC report published recently, “Workforce of the future: The competing forces shaping 2030”, which examines four possible scenario ‘worlds’ of work by that time, shaped by Collectivism versus Individualism and Integration versus Fragmentation. The worlds are: Yellow, where Humans come first; Green, where Companies care; Red, where Innovation rules; and Blue, in which Corporate is king – again, worth taking a look at the report.

Central to their thinking in all these is the impact of AI, which they discuss in terms of three levels: Assisted intelligence, widely available today, such as car GPS navigation systems; Augmented intelligence, emerging today, for example the combination of programs that organise car ride-sharing businesses; and Autonomous intelligence, being developed for the future, an example of which will be self-driving vehicles when they come into widespread use.


One of their take-away messages is that organisations can’t protect jobs which are made redundant by technology – but they do have a responsibility to their people. They urge organisations to protect people, not jobs, by nurturing agility, adaptability and re-skilling.

In an up-to-date assessment of the rapid strides being made in artificial intelligence and robotics, titled “A CEO action plan for workplace automation”, McKinsey notes tech advances in everyday activities:

For instance, researchers at Oxford University, collaborating with Google’s DeepMind division, created a deep-learning system that can read lips more accurately than human lip readers—by training it, using BBC closed-captioned news video. Similarly, robot “skin” is able to “feel” textures and find objects by touch, and robots are becoming more adept at physical tasks (such as tying a shoelace) that require fine motor skills.

One key to thinking about the future of technology is to focus on what is taken for granted or has merged into the background of daily life.  Changed technologies or behaviours are only seen as novel for a brief window, and then become unremarkable, either because they have been discarded and forgotten, or because they have merged with the fabric of everyday life and are not thought about or explicitly valued as such. Once we use something every day, we do not call these things technology anymore. Whether a stone or a drone, it simply becomes a tool we apply to a task.

However, ‘work’ is a fundamental shaper of everyday life. A perspective on this debate might well be the notion that the change is a transformation of how work is defined, not just a shifting of work from one means of production to another.  The Bank of England economist Haldane discussed a re-shaping of the labour market, rather than a simple projection of increased absolute unemployment. He wondered if “a fundamental reorientation in the nature of work could be underway.”

Calum Chace, author of Surviving AI and the novel Pandora’s Brain, was quoted in a Guardian article as saying:

“I think our best hope going forward is figuring out how to live in an economy of radical abundance, where machines do all the work, and we basically play.” Arguably, we might be part of the way there already; is a dance fitness programme like Zumba anything more than adult play? But, as Chace says, a workless lifestyle also means “you have to think about a universal income” – a basic, unconditional level of state support.

The idea of providing everyone with a basic income has been the topic of considerable interest of late.  To me at least, there is significant tension between this idea – paying a significant sum to all and sundry irrespective of their other means – and the fixation of our current political culture on targeting, means testing and the micromanagement of welfare payments. While ever our society continues to see moral virtue in working, and work remains the primary method of wealth distribution, it seems unlikely that we will easily implement a universal basic income, even if it is a good idea.

I actually find it hard to improve on the conclusion I reached writing my position paper on Unemployment, technology change and visual impairment for the RBS in 1982 – “While a great deal needs to be said about technology, its change and its impact, little of this can be said with confidence.”

It may be that the familiar nostrum of economic growth, expanding employment opportunities in new fields not prey to automation, will mean that the technological changes under the umbrella of ‘AI’ will essentially be ‘business as usual’.  However, as McKinsey put it: “In the past, technological progress has not resulted in long-term mass unemployment, because it also has created additional, and new, types of work.  [However] we cannot know for sure whether these historical precedents will be repeated.”

If these historical precedents do not hold, then without a ‘good public policy idea’ it would seem highly likely the role of work in providing significant and predictable income to large numbers of people will be severely challenged and disrupted. So will the social structures and institutions dependent on that effective wealth distribution. Planning for how society, nationally and globally, might need to respond would seem like a useful insurance policy.

AI – where might we be going?


I have been interested in the interaction of humans with technology since my initial university studies in anthropology and sociology.  We are not defined by our tools, but our tools have functioned as an extension of ourselves and as facilitators of interaction with our environment. Over the past millennia they have produced an accelerating transformation of the shape and pace of human society.

This post is shaping up to be a two-part blog: the first part exploring my thoughts on and experience with Artificial Intelligence (AI) technology – broadly defined – and the second, next week, looking at the discussion about the implications of this technology for the world and future of work.

I first came across AI when I was working at the Royal Blind Society (RBS) in the early 80s, although what AI was then and what it is now are very different beasts.  At the time I was completing my Master of Commerce, in which I had pursued my interest in technology in various topics ranging from Information Systems Design to Industrial Relations.  It was in the latter that I engaged with the ideas of sociologist Daniel Bell and the notion of the ‘post-industrial society’.


It is interesting that the term ‘convergence’ had been much discussed in the seventies. It was a product of the Cold War – the idea that industrial economies would converge in their structure and organisation and that essentially Russia (then the USSR) would come to resemble the US and Europe. There is a whole thesis waiting to be explored in figuring out what happened to that idea in the vortex of history – too much to go into here.

The key learning for me was the complexity and interrelatedness of technological innovation and change with economic and social factors such as:


  • Factors of production
  • Technological interdependencies and linkages
  • Organisational structures
  • Sectional, regional and individual distribution of income and wealth
  • International interdependencies
  • Public and private demand

I applied this learning at the RBS when researching the impact of technological change on the employment context and prospects for visually impaired people.  It was a time of considerable technological excitement as the personal computer began to penetrate the mass market – a signature moment was TIME magazine naming the computer its ‘Machine of the Year’ for 1982, on a cover that appeared in January 1983.


One element of technological change which played into these investigations was machine vision: wildly futuristic at the time, it was also becoming almost imaginable, and at the apparent rate of change and innovation it seemed possible within a foreseeable future. It turned out that machine vision, particularly ‘in the wild’, was actually much harder than it appeared, and is something that is only now (2017) starting to find widespread use in things like self-driving vehicles – and so far as I know it has yet to find practical application in the everyday lives of visually impaired people.

However this sparked my general interest in the whole field of computers mimicking or emulating human cognition and perception. Expert systems were an area of market enthusiasm, and one I found particularly interesting. I actually crafted and experimented with my own primitive expert system shell, written from scratch in Turbo Pascal, which involved delving into the arcane technical worlds of generative grammars, token parsing and algorithmic inference processing.
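The heart of such a shell is simpler than the jargon suggests. Below is a minimal sketch of forward-chaining inference in Python (my original was in Turbo Pascal and is long gone; the rules and fact names here are invented purely for illustration) – it simply keeps firing any rule whose conditions are met until no new conclusions appear:

```python
# Minimal forward-chaining inference - the core loop of a toy expert
# system shell. Each rule pairs a set of conditions with a conclusion.
# (Illustrative only; rules and names are hypothetical.)

rules = [
    ({"produces_speech", "reads_screen_text"}, "screen_reader"),
    ({"screen_reader", "user_is_blind"}, "recommend_screen_reader"),
]

def infer(facts, rules):
    """Fire every rule whose conditions are all present, adding its
    conclusion to the fact base, until a full pass adds nothing new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"produces_speech", "reads_screen_text", "user_is_blind"}, rules))
# The derived fact 'recommend_screen_reader' appears via chained rules.
```

The hard part, as I discovered, was never this loop – it was filling the rule base.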

But ultimately, both for myself and the world at large, expert systems proved to be a dead end. This was primarily due to the problem of knowledge capture – that is, the sheer effort required to manually encode expertise into decision-tree-style rules. The other limiting factor was the limited processing power and memory available on the computers of the day.

General interest in AI had peaked by the end of the decade – the cover of this 1992 Economist special feature would not have been out of place today, but the discussion inside is much more about the limitations and cumbersome nature of the technology than about grand horizons.


Increasingly, the view came to be that any particularly advanced or clever piece of coding was seen as intelligent only while it was a novelty; it rapidly became ‘part of the furniture’, part of the ‘dumb’ and rather pedestrian reality of IT that came to dominate our working lives.

During the course of the 90s the word ‘convergence’ at least in tech circles changed and came to be much-discussed in terms of the coming together of the traditional silo platforms of broadcasting, telecommunications and print. Pervasive digitalisation  broke the legacy nexus between the shape of content and the container which carried it – a voice call was no longer solely defined by being carried on a plain old Bakelite telephone network; a TV show no longer solely by arriving via a transmission tower and home receiver (the same for radio shows); music spread rapidly beyond the domains of the vinyl record, compact cassette and CD – it got ‘shared’ online; and the Internet carried news much further and faster than a newspaper.  This meant that commerce and regulation constructed on the premise that content could be priced and controlled by how it was delivered increasingly lost its force, both in logic and in practice.

Then over the first decade of the 21st century (the ‘noughties’), IP-based networks and then social networks came to play an ever more important role.  Content became non-linear, interlinked and ‘uncontained’, while people increasingly expected to connect and communicate seamlessly – anywhere, anyhow, anytime. Entirely new and massively successful network businesses rose to dominance over the decade – Google and Facebook, to name the most obvious.

‘Silos’ was a convenient way to describe the pre-convergence arrangements, and ‘Layers’ became an important alternative lens on the changing technological environment – a way to describe the actuality of what was being called convergence.  Layers had been in common technical use for a decade or two before this, but it was around this time that the broader utility of the concept became apparent, since it is native to the way networks are constructed and the Internet works.
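As a rough illustration of why layering is so natural to networks, here is a deliberately over-simplified Python sketch (nothing like a real protocol stack, just the shape of the idea): each layer wraps the data handed down to it and knows nothing about the layers above:

```python
# A much-simplified sketch of the 'layers' idea: each layer only wraps
# the payload handed to it, independent of what the other layers do.
def app_layer(msg):        return f"HTTP[{msg}]"
def transport_layer(seg):  return f"TCP[{seg}]"
def network_layer(pkt):    return f"IP[{pkt}]"
def link_layer(frame):     return f"ETH[{frame}]"

wire = link_layer(network_layer(transport_layer(app_layer("hello"))))
print(wire)  # ETH[IP[TCP[HTTP[hello]]]]
# Swap the bottom layer (WiFi for Ethernet, say) and nothing above
# needs to change - the independence that made 'layers' such a useful
# way to think about the post-convergence environment.
```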

As the noughties wore on, it also became apparent that ‘layers’ as such could not successfully grapple with all the developments in the marketplace.  The ‘bright lines’ between layers are blurring under the impact of virtualisation and software emulation.  An example of virtualisation is the way several physical servers can emulate a single large (virtual) computer, or a single large physical computer can emulate several (virtual) servers. Extended beyond the enterprise, this is essentially the basis of cloud computing – the customer buys the computing and storage they need as virtual resources, and the supplier takes care of the physical requirements.  Multiple, inter-networked scale-free networks, which can be configured to emulate many other network forms, better explain the complexity and rapid adaptability of the market in the current decade, whatever we decide to call it (the ‘tweenies’?).
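A toy model may make the cloud point concrete. In the sketch below (Python; the host names, VM names and first-fit placement are all hypothetical), customers ask only for virtual resources, and a simple allocator decides which physical host actually supplies them:

```python
# Toy model of the virtualisation idea behind cloud computing: customers
# request virtual machines; the provider packs them onto physical hosts.
# (Names and the first-fit strategy are illustrative only.)

class Host:
    def __init__(self, name, cpus):
        self.name = name
        self.free = cpus  # spare CPU capacity on this physical server

def allocate(hosts, vm_name, cpus_needed):
    """Place a virtual machine on the first host with enough spare CPUs."""
    for host in hosts:
        if host.free >= cpus_needed:
            host.free -= cpus_needed
            return f"{vm_name} -> {host.name}"
    return f"{vm_name} -> rejected (no capacity)"

hosts = [Host("rack1-server1", 16), Host("rack1-server2", 16)]
for vm, cpus in [("web", 8), ("db", 12), ("cache", 4), ("batch", 10)]:
    print(allocate(hosts, vm, cpus))
```

The customer sees only ‘web’ and ‘db’; the physical mapping is the supplier’s problem, and that separation is the essence of the cloud model.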

So the term ‘silo’ was useful shorthand for the pre-nineties technological environment, ‘convergence’ summarised the nineties, ‘layers’ was useful for the noughties, and ‘networks’ is perhaps most apt for our current decade – which, as you may have noticed, is drawing to a close.

This ‘progression’ is reflected in the movement of the discussion of the technological environment from ‘convergence’ to the ‘networked society’ and the ‘connected life’.  The shift does not suggest that the transition to the ‘networked society’ is complete, but rather that the concept of the ‘network’ better describes and encapsulates the dominant theme at work on, and influencing, society during the decade.  Having remarked on this progression, the obvious next question is: what concept is likely to fulfil this role in another decade’s time?

My stab at it a few years ago was ‘Sentience’.  I was deliberately avoiding the term AI (although as a loose umbrella term it can be read in) and chose a word that reflected a more modest level of machine capability. This ‘sub-intelligence’ if you like is conceivable from the intersection of a number of network and ICT trends / technologies emerging by the start of the decade (i.e. 2020).  It seems to me that ‘sentience’ might best be described in terms of being surrounded by, embedded in, environments that are in some way aware of the individual and their context – various relationships  to things, information and other people.  It will not be complete but its emergence seems likely to be a dominant theme for the next decade, a logical inheritor of the consequences from digitalisation, convergence and then network developments, with the same kind of wide ramifications for culture and society.

What is different now, in the contemporary explosion of interest in and practical utilisation of AI, is both the remorseless contribution of Moore’s law and the breakthrough in the algorithmic understanding of machine learning, applied in what is called deep learning. This is the technique which led to the victory by AlphaGo (the Google DeepMind program) when it played the human Go master, Lee Sedol. It is also evident in everyday examples, from face recognition and language translation to predictive text and enhanced search algorithms – things that in the eighties would have been dubbed AI are everywhere!
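To make ‘deep learning’ slightly less abstract: at its core sits a network of simple weighted units, adjusted step by step by gradient descent. Here is a toy sketch in Python with NumPy (illustrative only – real systems differ in scale, data and architecture, not in this basic mechanism), training a tiny two-layer network to learn the XOR function:

```python
# A toy neural network trained by gradient descent - the basic mechanism
# underneath 'deep learning', shrunk to a few lines. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5  # learning rate

for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of squared error w.r.t. each weight.
    dp = (p - y) * p * (1 - p)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    # Gradient descent: nudge every weight downhill.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p.ravel(), 2))  # approaches [0, 1, 1, 0]
```

Scale this mechanism up by many orders of magnitude (courtesy of Moore’s law) and feed it enough examples, and you get lip readers and Go players.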

I am not particularly married to the precise term ‘sentience’ – numerous alternatives exist. For example, Shivon Zilis, an investor at Bloomberg Beta, surveyed every artificial intelligence, machine learning or data-related startup she could find (2,529 of them, to be exact).  She settled on the label ‘machine intelligence’ to describe how “Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have been of a lesser focus) … cutting across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks).”

She noted that:

I would have preferred to avoid a different label but when I tried either “artificial intelligence” or “machine learning” both proved to be too narrow: when I called it “artificial intelligence” too many people were distracted by whether certain companies were “true AI,” and when I called it “machine learning,” many thought I wasn’t doing justice to the more “AI-esque” like the various flavors of deep learning. People have immediately grasped “machine intelligence” so here we are.

And so I landed on ‘sentience’. It is important to note that the term does not indicate an ‘end state’, but rather flags a way to discuss a possible dominant theme of its decade (say 2020–2030). Another theme will arise in time, and it is relevant to consider what sentience would not describe – to establish the boundary conditions for the concept, and to think about what may remain ‘undone’ by 2030 or so.  Thinking beyond that boundary may in turn give clues about the shape of that decade and those to follow … noting that such a shape is impossible to discern beyond broad conjecture.

One direction for such conjecture might be about the emergence of ‘machine autonomy’. It can be useful (in terms of imagined scenarios) although increasingly dangerous (due to the temptations and risks of predictive hubris) to speculate even beyond the rise of autonomy to further phases, perhaps the realization of fully conscious artificial intelligence, perhaps the emergence of essentially incomprehensible ‘alien’ machine-based intelligence:

  • 2030s – ‘machine autonomy’?
  • 2040s – AI ‘awareness’?
  • 2050s – ‘Alien’ intelligence?

It occurs to me that perhaps this is where the fruits of ‘convergence’, as discussed in this blog, have come full circle.  The developments which can loosely be pulled together under the umbrella term AI are genuinely flagging the arrival of the ‘post-industrial’ society: the world Daniel Bell conjured is emerging in front of our eyes, even if we do not (or cannot) accurately perceive it.  However, the shapes we can discern are certainly a source of some anxiety, particularly as they relate to the world of work – and that will be the topic for my next blog post …
