NSW: new digital AI risk management tool in development

The following article is a news item provided for the benefit of the Workplace Health and Safety profession. Its content does not necessarily reflect the views of the Australian Institute of Health & Safety.
Date: Monday, 27 September, 2021 - 12:00
Category: Policy & legislation
Location: New South Wales

The NSW Government’s Centre for Work Health and Safety is developing a digital artificial intelligence (AI) risk management tool to help businesses manage workplace health and safety risks when introducing and using the technology.

Around 70 per cent of Australian companies are expected to adopt at least one type of AI technology by 2030, said centre director Skye Buatava.

“While AI may provide efficient solutions to business operations, there are new potential work health and safety risks to workers,” Buatava said.

“We are conducting further research to establish evidence-based actions businesses can take to help address identified risks, while developing a user-friendly AI WHS risk management tool.

“The centre has instigated two studies, identifying over 50 risks, to inform the tool’s development. In partnership with the University of Adelaide we are exploring the ethical use of AI at work, while our work with Charles Sturt University is examining how businesses can trust new processes.

“WHS risks were found to be present throughout the planning, implementation and continued use of AI technology, and it is crucial that we understand these risks now and provide guidance to businesses before AI becomes mainstream.

“So far we have consulted with more than 80 experts from business, government and academia – the feedback and planning we are undertaking now will go a long way to ensuring workplace safety as the technology becomes available,” Buatava said.

The centre is currently undertaking a number of projects, including one on trusting artificial intelligence at work, which examines human-machine interactions to improve understanding of the risks that arise as ‘thinking machines’ continue to be introduced into workplaces.

This project explores which factors workers report as influencing their acceptance (or rejection) of machine-generated advice, and what types of tasks or workplaces are likely to elicit this response.

Another project the centre is undertaking is around the ethical use of artificial intelligence in the workplace.

The introduction of AI can have unintended consequences for a worker’s wellbeing, beyond what would traditionally be recognised as a harm, and this project explores the appropriate ethical handling of AI technology.