AI automation boost and trade union concern as safety summit looms

Tue 31 Oct 2023
Posted by: Benjamin Roche
Trade News


Over half of UK manufacturers are investing in artificial intelligence (AI), machine learning or augmented reality and nearly two in five plan to adopt generative AI in how they operate, according to a new survey by industry body Make UK and software firm Infor.

Efficiency

The Manufacturer reports that companies investing in AI and machine learning most often cite increased organisational efficiency as their goal. Four fifths of companies surveyed added that they were already using virtual or augmented reality for design work and prototyping.

A total of 76% said they were investing in automation more broadly, while 60% reported improved productivity because of automation investment and 49% reported better labour efficiency.

Over a quarter of companies say that automation will mean they will need to employ fewer lower-skilled workers.

Workers’ warning

Some are far from pleased at what accelerating AI adoption could mean for workers. Trade unions and rights campaigners this week signed a letter to UK prime minister Rishi Sunak arguing that this week’s AI Safety Summit is “squeezing out” workers from the conversation around the technology, according to the FT.

Groups from the UK Trades Union Congress to Amnesty International are signatories, with the letter arguing that, for “millions of people in the UK and across the world, the risks and harms of AI are not distant — they are felt in the here and now”.

The letter criticises the guest list at the summit for featuring technology and government figures while leaving out representatives of people from wider society and other industries.

It states that “small businesses and artists are being squeezed out” while “innovation [is] smothered” by the “power and influence” of big tech.

“A wide range of expertise and the voices of communities most exposed to AI harms must have a powerful say and equal seat at the table.”

‘Match-fit’ Britain

The UK government has today (31 October) made its own AI announcement in advance of the safety summit, with £118m of funding allocated for AI-related skills training in an effort to make Britain “match-fit” for the future of the industry.

Also among the provisions are a new visa scheme and a £1m grant fund to help firms bring AI specialists to work in the UK.

The government for the first time named its previously announced Centres for Doctoral Training in AI, and outlined plans for 15 science and technology scholarships at top UK universities. It will also pilot a new STEM Olympiad scholarship scheme, ‘Backing Invisible Geniuses’.

Michelle Donelan, secretary of state for science and technology, said the measures will “future-proof” the UK and its economy for development in AI.

Labour force fears

Donelan expanded on the technology’s effect on jobs this week, telling The Times that it would change the labour market.

“If we look back to 1940, 60% of the jobs [we have today] didn’t exist then according to a number of studies.

“This is about enabling and assisting people in their jobs, not taking away their jobs. But yes, it will change our labour market.”

She added that AI could ease labour shortages in areas such as medicine or teaching by allowing doctors to spend more time with patients, or teachers with students, rather than dealing with “admin” or “bureaucracy”.

US safety move

US president Joe Biden also weighed in on the AI question yesterday (30 October), issuing an executive order requiring AI companies whose technology could have national security implications to share the results of any safety research with the government.

“To realise the promise of AI and avoid the risk, we need to govern this technology, there’s no way around it,” Biden said.

AI specialist Gary Marcus told the BBC the order “sets a high initial bar” for government action on AI safety.

The order also provides new guidelines for US federal agencies on assessing privacy protections in AI models, as well as on preventing discrimination in how those models are designed.