I have been interested in the interaction of humans with technology since my initial university studies in anthropology and sociology. We are not defined by our tools, but our tools have functioned as an extension of ourselves and as facilitators of interaction with our environment. Over the past millennia they have produced an accelerating transformation of the shape and pace of human society.
This post is shaping up to be a two-part blog: the first part exploring my thoughts on and experience with Artificial Intelligence (AI) technology – broadly defined – and the second, next week, looking at the discussion about the implications of this technology for the world and future of work.
I first came across AI when I was working at the Royal Blind Society (RBS) in the early 80s, although what AI was then and what it is now are very different beasts. At the time I was completing my Master of Commerce, in which I had pursued my interest in technology in various topics ranging from Information Systems Design to Industrial Relations. It was in the latter that I engaged with the ideas of sociologist Daniel Bell and the notion of the ‘post-industrial society’.
It is interesting that the term ‘convergence’ had been much discussed in the seventies. It was a product of the Cold War – the idea that industrial economies would converge in their structure and organisation and that essentially Russia (then the USSR) would come to resemble the US and Europe. There is a whole thesis waiting to be explored in figuring out what happened to that idea in the vortex of history – too much to go into here.
The key learning for me was the complexity and interrelatedness of technological innovation and change with economic and social factors such as:
- Factors of production
- Technological interdependencies and linkages
- Organizational structures
- Sectional, regional and individual distribution of income and wealth
- International interdependencies
- Public and private demand
I applied this learning at the RBS when researching the impact of technological change on the employment context and prospects for visually impaired people. It was a time of considerable technological excitement as the personal computer began to penetrate the mass market – a signature moment came in January 1983 when TIME Magazine named the computer its ‘Machine of the Year’.
One element of technological change which played into these investigations was machine vision: while wildly futuristic at the time, it was becoming almost imaginable, and at the apparent rate of change and innovation it seemed possible in the foreseeable future. It turned out that machine vision, particularly ‘in the wild’, was much harder than it might have appeared, and is something that is only now (2017) starting to find widespread use in things like self-driving vehicles – and so far as I know it has yet to find practical application in the everyday lives of visually impaired people.
However, this sparked my general interest in the whole field of computers mimicking or emulating human cognition or perception. Expert systems were an area of market enthusiasm, and one that I found particularly interesting. I actually crafted and experimented with my own primitive expert system shell, written from scratch in Turbo Pascal, which involved delving into the arcane and technical worlds of generative grammars and token parsing, as well as algorithmic inference processing.
But ultimately, both for myself and for the world at large, expert systems proved to be a dead end. This was primarily due to the issue of knowledge capture – that is, the sheer effort required to manually encode knowledge into decision-tree-style rules. The other limiting factor was the limited processing power and memory available on the computers of the time.
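To make the knowledge-capture problem concrete, here is a minimal, purely illustrative sketch (in Python rather than the Turbo Pascal of my original shell) of the kind of hand-written, forward-chaining rules an expert system depended on – every rule and fact had to be elicited from a human expert and encoded by hand:

```python
# Illustrative only: a tiny forward-chaining rule engine of the kind an
# expert system shell provided. The hard part was never this loop - it was
# hand-encoding thousands of rules like the two toy examples below.

# Each rule: if all the conditions are among the known facts, add the conclusion.
RULES = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly", "swims"}, "is_penguin"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be inferred."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_feathers", "lays_eggs", "cannot_fly", "swims"}, RULES))
# -> includes the inferred facts 'is_bird' and 'is_penguin'
```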
General interest in AI had peaked by the end of the decade – the cover of this 1992 Economist special feature would not have been out of place today, but the discussion is much more about the limitations and cumbersome nature of the technology than about grand horizons.
Increasingly, the view came to be that any particularly advanced or clever piece of coding was ‘intelligent’ only while it was a novelty; it rapidly became ‘part of the furniture’ and thence part of the ‘dumb’ and rather pedestrian reality of IT which came to dominate our working lives.
During the course of the 90s the meaning of ‘convergence’, at least in tech circles, changed: it came to be much discussed in terms of the coming together of the traditional silo platforms of broadcasting, telecommunications and print. Pervasive digitalisation broke the legacy nexus between the shape of content and the container which carried it – a voice call was no longer solely defined by being carried on a plain old Bakelite telephone network; a TV show no longer solely by arriving via a transmission tower and home receiver (the same for radio shows); music spread rapidly beyond the domains of the vinyl record, compact cassette and CD – it got ‘shared’ online; and the Internet carried news much further and faster than a newspaper. This meant that commerce and regulation constructed on the premise that content could be priced and controlled by how it was delivered increasingly lost their force, both in logic and in practice.
Then, over the first decade of the 21st century (the ‘noughties’), IP-based networks and then social networks came to play an ever more important role. This meant content became non-linear, interlinked and ‘uncontained’, while people increasingly expected to connect and communicate seamlessly – anywhere, anyhow, anytime. Entire new and massively successful network businesses emerged in the second half of the decade – Google and Facebook to name the most obvious.
‘Silos’ was the convenient way to describe the pre-convergence arrangements, and ‘layers’ offered an important alternative way to look at how the technological environment was changing – a way to describe the actuality of what was called convergence. ‘Layers’ had been in common technical use for a decade or two before this, but it was around this time that the broader utility of the concept became apparent, since it is native to the way in which networks are constructed and the Internet works.
As the noughties wore on, it also became apparent that ‘layers’ as such could not ultimately and successfully grapple with all the developments in the marketplace. The ‘bright lines’ between layers are blurring under the impact of virtualisation and software emulation. An example of virtualisation is the way in which several physical computer servers can emulate a single large (virtual) computer, or a single large physical computer can emulate several (virtual) computer servers. This has been extended beyond the enterprise and is essentially the basis for cloud computing – the customer buys the computing and storage they need as virtual resources from a supplier who takes care of the physical requirements. Multiple, inter-networked scale-free networks which can configure to emulate many other network forms better explain the complexity and rapid adaptability of the market in the current decade, whatever we decide to call it (the ‘tweenies’?).
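As a toy illustration of the virtualisation idea described above (hypothetical names and numbers only, not any real cloud provider’s API), here is a sketch of one physical machine carving itself into several virtual servers, with the customer seeing only the virtual resources they have bought:

```python
# Illustrative sketch of virtualisation: one physical host is divided into
# several virtual servers. All names and capacities are made up.
from dataclasses import dataclass, field

@dataclass
class PhysicalHost:
    cpus: int
    ram_gb: int
    vms: list = field(default_factory=list)

    def provision_vm(self, name, cpus, ram_gb):
        """Allocate a virtual server if this host still has the capacity."""
        used_cpus = sum(vm["cpus"] for vm in self.vms)
        used_ram = sum(vm["ram_gb"] for vm in self.vms)
        if used_cpus + cpus > self.cpus or used_ram + ram_gb > self.ram_gb:
            raise RuntimeError("insufficient physical capacity")
        vm = {"name": name, "cpus": cpus, "ram_gb": ram_gb}
        self.vms.append(vm)
        return vm

# One physical machine, several virtual servers - the customer only ever
# deals with the virtual ones.
host = PhysicalHost(cpus=32, ram_gb=256)
host.provision_vm("web-server", cpus=4, ram_gb=16)
host.provision_vm("database", cpus=8, ram_gb=64)
print([vm["name"] for vm in host.vms])  # ['web-server', 'database']
```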
So the term ‘silo’ was useful shorthand to describe the pre-nineties technological environment, ‘convergence’ summarised the nineties, ‘layers’ was useful for the noughties and ‘networks’ is perhaps most apt for our current decade, which, as you may have noticed, is drawing to a close.
This ‘progression’ is reflected in the movement in the discussion of the technological environment from ‘convergence’ to the ‘networked society’ and ‘connected life’. This shift does not suggest that the transition to the ‘networked society’ is complete, but rather that the concept of the ‘network’ better describes and encapsulates the dominant movement and theme at work and influencing society during the decade. Having remarked on this progression, the obvious question to ask is: what is likely to be the concept that fulfils this role in another decade’s time?
My stab at it a few years ago was ‘Sentience’. I was deliberately avoiding the term AI (although as a loose umbrella term it can be read in) and chose a word that reflected a more modest level of machine capability. This ‘sub-intelligence’, if you like, is conceivable from the intersection of a number of network and ICT trends and technologies emerging by the start of the coming decade (i.e. 2020). It seems to me that ‘sentience’ might best be described in terms of being surrounded by, and embedded in, environments that are in some way aware of the individual and their context – their various relationships to things, information and other people. It will not be complete, but its emergence seems likely to be a dominant theme for the next decade, a logical inheritor of the consequences of digitalisation, convergence and then network developments, with the same kind of wide ramifications for culture and society.
What is different now, in the contemporary explosion of interest in and practical utilisation of AI, is both the remorseless contribution of Moore’s law and the breakthrough in the algorithmic understanding of machine learning and its application in what is called deep learning. This is the technique which led to the victory by AlphaGo (the Google DeepMind program) when it played the human Go master, Lee Sedol. It is also evident in everyday examples ranging from face recognition and language translation to predictive text and enhanced search algorithms – things that in the eighties would have been dubbed AI are everywhere!
I am not particularly married to the precise term ‘sentience’ – numerous alternatives exist. For example, Shivon Zilis, an investor at Bloomberg Beta, surveyed every artificial intelligence, machine learning, or data-related startup she could find (her list had 2,529 of them, to be exact). She addressed the labeling issue by using “machine intelligence” to describe how “Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have been of a lesser focus) … cutting across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks).”
She noted that:
I would have preferred to avoid a different label but when I tried either “artificial intelligence” or “machine learning” both proved too narrow: when I called it “artificial intelligence” too many people were distracted by whether certain companies were “true AI,” and when I called it “machine learning,” many thought I wasn’t doing justice to the more “AI-esque” like the various flavors of deep learning. People have immediately grasped “machine intelligence” so here we are.
And I landed on ‘sentience’. It is important to note that the term does not indicate an ‘end state’ but rather flags a way to discuss a possible dominant theme of its decade (say 2020–2030). Another theme will arise after that, so it is relevant to consider what sentience would not describe: to establish the boundary conditions for the concept and to think about what may remain ‘undone’ by 2030-ish. Thinking beyond that boundary may in turn give clues about the shape of that decade and those to follow … noting that such a shape is impossible to discern beyond broad conjecture.
One direction for such conjecture might be the emergence of ‘machine autonomy’. It can be useful (in terms of imagined scenarios), although increasingly dangerous (given the temptations and risks of predictive hubris), to speculate even beyond the rise of autonomy to further phases: perhaps the realisation of fully conscious artificial intelligence, perhaps the emergence of essentially incomprehensible ‘alien’ machine-based intelligence:
- 2030s – ‘machine autonomy’?
- 2040s – AI ‘awareness’?
- 2050s – ‘Alien’ intelligence?
It occurs to me that perhaps this is where the fruits of ‘convergence’, as mentioned in this blog, have come full circle. It seems that the developments which can loosely be pulled together under the umbrella term AI are genuinely flagging the arrival of the ‘post-industrial’ society – that the world Daniel Bell conjured with is emerging in front of our eyes, even if we do not (or cannot) accurately perceive it. However, the shapes we can discern are certainly the source of some anxiety, particularly as they relate to the world of work – and that will be the topic for my next blog post …