Apparently the Australian Curriculum, Assessment and Reporting Authority wants to introduce so-called robo-marking of NAPLAN assessments next year, arguing that there is evidence that automated marking “met or surpassed” the quality of human markers. I was intrigued the other day to see reports that the NSW education minister said it was “preposterous” to suggest computers could do a better job of marking assessments than teachers.
Followers of this blog (thank you very much for your time) will be aware that I have recently paid a fair amount of attention to AI and its implications (AI – where are we now and how did we get here?, AI and future of work, & Thinking about education, work & AI). From that familiarity I would say that it is far from preposterous that suitably tuned AI software could reliably assess writing assignments, particularly those designed to give standardised comparative outcomes. In fact, given the vast data sets that would be generated from testing all school children in Australia, it would probably be an almost perfect environment for data-driven algorithms. Indeed one does not have to look too far to find examples of AI actually writing similarly ‘algorithmic’ texts – one chosen more or less at random notes that:
The Washington Post started using its homegrown artificial intelligence technology, Heliograf, to spit out around 300 short reports and alerts on the Rio Olympics. Since then … in its first year, the Post has produced around 850 articles using Heliograf.
Frankly I find it preposterous that an education minister should be quite so ignorant!
Just as I was about to publish this blog, in a moment of serendipity and confirmation that this is indeed a current topic of interest, a news email from consultants McKinsey & Company arrived in my email feed – the title: AI in storytelling: Machines as cocreators! It details recent research by the Massachusetts Institute of Technology (MIT) Media Lab investigating the potential for machine–human collaboration in video storytelling. The researchers used machine-learning models based on deep neural networks to “watch” small slices of video—movies, TV, and short online features—and estimate their positive or negative emotional content by the second, attending not just to plot, characters, and dialogue but also to more subtle touches, like a close-up of a person’s face or a snippet of music.
Machines can view an untagged video and create an emotional arc for the story based on all of its audio and visual elements. That’s something we’ve never seen before – machines that could identify common emotional arcs in video stories.
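By way of illustration only – this is not the MIT team’s actual code, and the scores and function name here are purely hypothetical stand-ins – the basic idea of turning per-second emotional estimates into an “arc” can be sketched as simple smoothing of a time series:

```python
# Sketch: turning per-second valence estimates into an "emotional arc".
# The scores below are invented stand-ins for what a trained model might
# emit for each second of video; the smoothing is a plain moving average.

def emotional_arc(valence_per_second, window=3):
    """Smooth noisy per-second valence scores (-1 = negative, +1 = positive)
    into a coarser arc by averaging over a sliding window."""
    n = len(valence_per_second)
    arc = []
    for i in range(n):
        lo = max(0, i - window)          # window start, clipped at 0
        hi = min(n, i + window + 1)      # window end, clipped at n
        chunk = valence_per_second[lo:hi]
        arc.append(sum(chunk) / len(chunk))
    return arc

# Invented example: a story that dips emotionally and then recovers.
scores = [0.2, 0.1, -0.4, -0.6, -0.5, 0.0, 0.4, 0.7]
arc = emotional_arc(scores)
```

The interesting work, of course, lies in the model that produces the per-second scores from raw audio and imagery; the arc itself is just the shape that emerges once you have them.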
My incredulity was further heightened when last Wednesday I attended what was styled as ‘The Great Debate: Humans, Data, AI & Ethics’, organised by the UTS Connected Intelligence Centre. Using a classic debate format, two teams presented the positive and negative cases for the proposition that: ‘Humans have blown it: it’s time to turn the planet over to the machines’.
It was both entertaining and informative stuff. The negative team, that is the pro-humans (that’s not them in the photograph!), had a convincing win. They presented well-argued humanist propositions that humans are indispensable, and hence would perhaps seem to be on the side of the good minister. However, the subtle difference was that these folks are well versed in the algorithmic, data-driven world of today – they readily accepted that the world of the future is one of human-machine collaboration and possibly even partnership.
The critical and unique contribution from humans was argued to be creativity, which machines cannot recreate. It is a fascinating area of contention as to whether that will always be the case. To some extent at least it depends on what you define as creative. It is something I have been thinking and writing about since the nineties. I noted back then, as a challenge in considering creativity, that a lot of traditionally artistic activity isn’t necessarily all that ‘creative’, in the sense of producing something novel or unexpected. How come? Because much painting, drawing, poetry making and creative writing function at a technical level, where skill is important: blissful flow states are achievable; beautiful works can be created. And indeed these ‘merely technical skills’ are what AI is being aimed at, with increasing success. But it is often not technical skill, as such, that marks out the ‘more’ creative artists at work.
The suggestion here is that these people are pushing the ‘grammar’ or patterns of their field, inventing whole new worlds or universes of discovery and discourse. That is why Picasso is a genius, not merely a prodigy working exceptionally high in the stack: he opened and explored entire new universes of artistic expression. The point is not usually his technical skill, although his draughtsmanship and painterliness cannot seriously be questioned. His gift was that he introduced into modern art (among other things) the art of Africa, Surrealism and the unconscious, and Cubism. He was always changing and exploring further. David Bowie is perhaps another, more contemporary, example of artistic invention and reinvention.
A contrast can perhaps be made with Salvador Dali, who, while consummately competent at his Surrealist paintings, stuck with these until they were a genre, and he its epitome. Once he was established in that grammar he stayed in it, working in established patterns rather than making new grammars. Jackson Pollock also produced singular work, but in one frame, after which he tragically flamed out.
I recently read a similar observation in a review of the Hyper Real exhibition at the National Gallery of Australia (thanks for sharing Loes), which noted that “manual dexterity, so valued only a generation ago, is growing increasingly redundant. As in most good art, it is the conceptual framework that is of higher value than the virtuosity of the execution and, as a matter of fact, many of the hyperreal artists leave the manufacture of their work to technicians.”
I also found interesting resonance with this topic in a presentation about the work of Hubert Dreyfus and his model of skill acquisition at last month’s facilitators’ network meeting. This model proposes five stages of skill acquisition, ranging from novice to expert. There is a pretty good Wikipedia page on it, which I can leave you to peruse, but the key take-away in this context is the way in which the expert can transcend reliance on rules, guidelines and maxims, relying instead on an intuitive grasp of situations based on deep, tacit understanding. It is this deep tacit understanding that can then lead the expert to become a domain innovator and an inventor of novel frameworks of rules, guidelines and so on.
It is also interesting that Dreyfus, as a philosopher, was a long-standing critic of artificial intelligence, particularly the philosophically naive and mathematically formulated versions attempted in the last century. I suspect the deep-learning, neural-network-based, data-driven approaches common today would be less susceptible to his humanist objections – a topic for another day perhaps, and interesting to me since his philosophical approach used thinking I explored extensively in writing my Sociology Master’s thesis in the late seventies.
However, to my mind there is much more to the currently unique human creative capacity than expert skill mastery… for example, I identified a paradox of creativity in my previous thinking, which was that in some ways the more familiar you become with a particular field of work, the less creative you become. Specialists may make choices early in the mastery of a discipline, and rarely if ever revisit the taken-for-granted aspects of the practice, forgetting or never realising that some of their basic assumptions are actually choices. As skill and mastery increase, one makes conscious reference to the domain framework less and less, and certain skills become wholly automatic and habitual – no thought required. Great skill perhaps, but at the same time the creative envelope has narrowed. The best experts in Dreyfus’s scheme can and do achieve innovative thinking, but it is manifestly not easy. Breaking through and recovering creative naivety toward acquired and mastered subjects can be very difficult.
One solution is to recruit and teach neophytes, and to observe them carefully as they learn. It is a commonplace observation that people new to a field can often offer significant innovations and insights, although they may lack the ability to fully realise them. As we cover the elementary ground that specialists left behind years before, we make observations and explore directions which emerge early in the problem space of the discipline and may have been lying fallow. A non-altruistic reason to pass on your skills and to be patient with learners! Working out what you do well enough to teach it to others can force a re-evaluation sufficient to jolt insight.
The other thing here is that as we learn a new subject or revisit a current domain from first principles, we can cross over previously acquired knowledge and generate novelty from that collision – something I touched on in my blog about the specialist generalist.
This ‘crossing over’ was something I demonstrated for myself in a modest way when I combined my recent charcoal sketches from the U3A drawing group with various photographs of flowers and street art – e.g.
The drawings have been essentially for practice and the photographs relatively run-of-the-mill – neither particularly creative in themselves. However integrating and overlaying the individual images has generated results which I have found to be genuinely creative, the emergent consequence of combining two different skill sets and impulses.
I recall a discussion at a UTS Hatchery AI meeting about the social dimension of creativity: that AI technology, like other technologies, can help a person deliver ‘better’, more technically polished work. This resonates with the conclusions of the MIT work reported above, which said:
These insights will not necessarily send screenwriters back to the drawing board—that would be like asking George Orwell to tack a happy ending onto 1984 to cheer things up. But they could inspire video storytellers to look at their content objectively and make edits to increase engagement. That could mean a new musical score or a different image at crucial moments, as well as tweaks to plot, dialogue, and characters. As storytellers increasingly realize the value of AI, and as these tools become more readily available, we could see a major change in the way video stories are created. In the same way directors can now integrate motion capture in their work, writers and storyboarders might work alongside machines, using AI capabilities to sharpen stories and amplify the emotional pull.
That is the collaborative dimension of creative AI – the essential ingredients provided by the human are context and creative judgement, which will perhaps remain uniquely human.
But I also note that as we co-create in partnership with our ever more intelligent, useful and responsive machines, they will be learning about our context and judgement, perhaps ultimately to appreciate the former and exercise the latter. As they do so there will be potentially profound effects on the human creative process – just as we shape our machines, so they shape us. Dreyfus was cited at the facilitators’ network meeting as expressing this reciprocity between humans and their technology thus:
“As the carpenter shapes the desk, so the desk shapes the carpenter.”