Join the Q: CogX 2020
On 4 June 2020, Robbie Stamp (Researcher 5) announced that he was hosting a panel discussion at a conference called CogX and that he had some tickets to give away to h2g2ers, so I ordered one. Because of the worldwide travel restrictions imposed to tackle the COVID-19 coronavirus outbreak, the conference couldn't take place in London as planned. However, the organisers arranged for it to take place online instead.
Day 1 - Technical Trouble
Day 1 (8 June) illustrated some of the complexities of converting a conference from the physical to the virtual realm. I logged on at 8.30am so that I would be ready to attend the welcome talk at 8.45am, but only received my conference keycode e-mail at 8.40am. It took me 15 minutes to use the keycode and complete my registration and then, when I finally found the welcome talk zone, I discovered the live stream hadn't yet started so I wasn't late after all. The organisers got the video going at 9.10am so I watched the introductory speech and learned how to book on to sessions using the agenda.
The first talk I went to was by the Ada Lovelace Institute, and asked, 'What does 'good' look like in Technosociety?' The speakers, participating from their homes via webcams, each took turns to speak, so the debate was well organised. The topic was how online platforms (such as social media and search engines) are not neutral, in the sense that they make decisions about how their content is managed, and some of those decisions can cause harm to some people while providing benefits to others. However, 'blaming the technology for the harm would be like blaming the chemical elements for a poisoning' as humans are involved and accountable in the decision making. (This foreshadowed Robbie's session the following day.)
The second talk I attended was entitled, 'Real World Deployments of Data'. It covered part of the history of computing in the US Army, and how the usability of software was even more important than the functionality it offered - the database system that was less sophisticated but easier to use generated more useful insights than the powerful system that people struggled to load accurate data into.
Day 2 - Robbie's Turn
'The Day of Judgement: What does it mean to have a meaningful working relationship with a non-human actor?' was the session chaired by Robbie Stamp, representing Bioss International, alongside Futurist Adah Parris, Director of Research Lorraine Dodd, Barrister Louise Hooper and Chief Digital Officer Tony Fish.
Robbie introduced the session with a photograph of Douglas Adams and described Douglas' 'Parable of the Puddle'1. Then various thought experiments were presented, illustrating the Five 'A's framework for working with Artificial Intelligence (AI):
- Advisory: Is the AI supporting human decision making? Who is listening to the AI's advice?
- Authority: Does the AI have power over human activity?
- Agency: Does the AI have the ability to do what it wants when it wants, without a human being in the loop?
- Abdication: What would be the consequence of humans abdicating responsibility for the decision making and giving the responsibility to the AI?
- Accountability: Who is giving responsibility to the AI, what responsibility is being given, and why?
As with the session on Day 1, Accountability was a key concept, including how it is not meaningful to punish artificial intelligence, because a human made the decisions about what to teach it and allow it to do (and indeed not teaching the AI something, eg that racist jokes are unacceptable, has the potential to cause as much harm as actively teaching it to promote unacceptable behaviour). The debate was interesting and lively, but the analysis of the thought experiments was perhaps less insightful than it could have been if the panellists had had the chance to prepare their responses to such complex questions in advance.
Viewers were able to submit their own questions in text form during the presentation, and I was pleased to see submissions from Dmitri and Milla. There was a separate Q&A session in which the text questions were tackled, but again it was difficult for the panellists to respond without time to prepare, so the questions were for the most part answered with more questions (and a reference to Marvin the Paranoid Android). All very thought-provoking, though, as there are no easy answers that apply to all scenarios.
After the session, I tested the networking functionality that aimed to simulate the experience of encountering people face to face at a physical conference. I tried a 'handshake' with Robbie, but that didn't work. However, I was able to 'handshake' Milla so we had a text conversation and learned that we could book a virtual meeting room to video chat if we wished.
Day 3 - Music, Technology and Coincidences
The final session I attended was 'Music and Technology' featuring Marcus du Sautoy, who is Simonyi Professor for the Public Understanding of Science and Professor of Mathematics at Oxford University. I enjoy watching him on television, and I had the privilege of meeting him at a maths conference several years ago. Coincidentally, he is working on a musical composition using technology in collaboration with Emily Howard, a composer whom I have also met as we went to the same school for a time!
The discussion also featured Taryn Southern, a singer who composes with AI-generated music, and Beatie Wolfe, who composes music using traditional instruments, such as guitar, but uses technology to bring her music to life, such as by weaving songs into fabric in such a way that they can be read by smartphones.
The presentation referenced Ada Lovelace, who had realised that machines could do more than just arithmetic and envisaged a machine that could 'compose elaborate and scientific pieces of music'. The works of Bach have been a popular choice for training AI music generators as they contain many distinctive patterns. One AI was tested by being taught some pieces by Bach, and then being asked to fill in the gaps in another of Bach's pieces. The result was pleasing to the audience, but the pianist struggled to play the AI's parts because they had not been designed to take the ability of human hands into consideration. Another test, where an AI trained on Bach was asked to compose a piece in the style of Bach, was less successful, as the AI did not start with an overarching structure so the piece became more complicated over time and yet was less interesting to the audience.
The Q&A session didn't go very well, as the panel ran out of questions in the first 20 minutes and played one of Beatie's music videos while waiting for more. I managed to come up with a question, but the live stream was cut off after 30 minutes, rather than the 45 minutes that had been scheduled, so my question was never answered.
The final messages from the session were pleasing, though. The panel agreed that AI music generation software is best used in collaboration rather than competition with humans, as a different kind of musical instrument. It generates fresh ideas that help humans to avoid 'blank page syndrome', where they don't know how to get started on something new. It also helps to stop humans from churning out the same type of music and behaving like machines.