A ‘pro-innovation future’? UK universities receive investment for AI research
On 6 February, UK Research and Innovation (UKRI) announced that the British government will invest £100 million in new research hubs throughout the United Kingdom.
These projects aim to develop AI technology that people can weave into their everyday lives. The hope is that healthcare, education, the creative industries, and the electrical industries will be transformed as AI grows.
The Arts and Humanities Research Council's (AHRC) programme, 'Bridging Responsible AI Divides' (BRAID), seeks to clarify the emerging creative world of AI by investing in research universities including Edinburgh, Bournemouth, Sheffield, Northumbria, Nottingham, Glasgow, Warwick, and Queen Mary University of London. Government investment in AI's possibilities on this scale suggests this unfamiliar presence is here to stay.
Dr Emily Bell, lecturer in Digital Humanities at the University of Leeds' School of English and Deputy Director of Postgraduate Research, commented that 'there is real potential here to free people up from some of the most repetitive parts of their jobs to do the things that are uniquely human'. AI could liberate us to explore our potential by taking on the time-consuming, menial tasks that technology can now complete so quickly.
However, she warns of AI tools not being ‘transparent about how they work and where the data comes from’; people are using these tools without being ‘aware of those kinds of issues. We need to build a much higher level of digital literacy and think through the implications of using AI in different areas.’
Bournemouth University's Dr Szilvia Ruszev will work to develop human-AI collaboration in media creation, with the goal of providing job security. How 'job security' will be achieved is unclear when the futures of workers and students across all fields are continuously debated without consensus. A lingering risk is that AI, used in certain ways, will actually reinforce menial labour, much as the introduction of machinery under industrialism did. In this context, it is important to ask how AI will operate under the forces of capitalism and control.
One example is the University of Sheffield's work with Leeds' own Royal Armouries Museum at Clarence Dock, using AI to communicate colonial history and enhance visitor experiences in museums. The distinction between technology and human experience seems to be dwindling, despite statements attempting to reassure us that this won't happen. The paradox here is: would lived memories not be interpreted and presented most thoughtfully by human voices?
Another use of AI is the CHAI-EPSRC AI hub created by academics at the University of Edinburgh. This hub will digitise elements of healthcare, predicting outcomes whilst personalising treatments for patients. The research may open doors to more efficient healthcare; yet again, questions remain about whether economical, pragmatic treatments outweigh humane concerns.
AI is now common in cutting-edge university research. Oxford University's Professor Michael Bronstein is researching the mathematical foundations of AI, while the University of Liverpool is collaborating with Imperial College London to develop AI for chemistry. These advanced examples of AI's 'knowledge' demonstrate how innovation is working its way into multiple, highly specialised fields, all justified under a uniform 'pro-innovation' approach.
'Artificial' means 'in imitation of something which occurs naturally', according to the Oxford English Dictionary, and derives from the Old French adjective 'artificiel', which in 1532 suggested cunning. 'Intelligence' derives from Old French meaning comprehension, or a spiritually qualified being. Together, the noun phrase uncannily suggests a fragmented yet developing imitation of human capabilities, one progressing at a rapid, perhaps unsettling, pace.
So what does the far future look like? This £100 million investment is only the beginning, following the AI Safety Summit at Bletchley Park in November 2023. UKRI's chief executive, Professor Dame Ottoline Leyser, revealed a £1 billion portfolio of investments in research into responsible and trustworthy AI technologies. Reliability and invention do not often work in tandem, yet these promises are being made to us students and to the working public. The AHRC has confirmed a further £7.6 million to fund a second phase of the BRAID programme.
As Dr Emily Bell mentions, ‘AI is, inevitably, going to be a huge part of many people’s lives’, but how can success be guaranteed when AI’s limits work in the company of capitalism?
Amongst the many questions that remain to be asked: who will be held accountable if, or perhaps when, AI goes wrong?