Spotlight profile: Professor Mark Sujan

News | Posted on Friday 27 September 2024

Professor Sujan joins the Centre as Chair in Safety Science. He tells us what first inspired him to work in Human Factors and safety science, and what he sees as the most important challenges for AI technologies and their use - now and in the future.


Can you tell us more about your role as Chair in Safety Science?

As Chair in Safety Science at the Centre for Assuring Autonomy (CfAA) I am working in the area of human-centred AI. This is about developing safety science that is meaningful for addressing problems with AI and autonomous systems in practical settings. It means thinking in an applied way about how we can develop theories and methods fit for use in modern complex systems. This aligns with the Centre’s aim to make an impact in the real world.

Safety science is concerned with understanding how systems in hazardous contexts function, and how we can design, operate, maintain, decommission and assure them. When we say “function”, this includes both successful and unsuccessful performance. Safety science draws on other disciplines, including engineering, psychology and social science.

As well as working at the CfAA, I continue to work for the Health Services Safety Investigations Body (HSSIB), which is an ‘arm’s length’ body of the Department of Health and Social Care. Together with my role at the Centre, this allows me to work at the intersection of academia and practical settings.

My role is practically focused, and I try to engage with a broad range of people - a theme that has run through my career. Much of the time, I am translating theories and methods into something meaningful for those who work in improving safety. I have worked in domains as diverse as railways, the petrochemical industry, and air traffic control, each of which has its own challenges. One thing they have in common, however, is that the traditional approaches in safety science are reaching their limits when applied to the challenges presented by modern complex systems. This is where academic safety science comes in. We need new safety theories and methods, and I want to develop these so that they are meaningful to practitioners in the real world. That means engaging with and reaching out to a diverse range of stakeholders.

By background you are a Chartered Ergonomist (C.ErgHF) and Human Factors Specialist. How would you describe Human Factors and Ergonomics (HF/E) to someone who doesn’t know much about it?

The discipline of HF/E is largely about studying work as it is actually done, and then thinking, together with the people who do the work, about how the system can be improved and how we can help people work more safely. This perspective informs my work in human-centred AI, and it covers many aspects: interaction with technology, interaction with other people, interactions in a physical space, organisational interactions (for example, staffing levels), and interactions with the external environment - for example, the influence of national targets and regulations. In this way, it gives you a whole-systems perspective.

Sometimes HF/E is misunderstood or misrepresented. You can see this when people say that an accident was caused by “the human factor”. But HF/E is not about human error - it is about systems and designing interactions so that people can work safely and effectively.  

What first inspired you to begin work in Human Factors and safety science?

This is probably a good opportunity to say thank you to my PhD supervisor Professor Antonio Rizzo. Professor Rizzo is a cognitive scientist based at the University of Siena in Italy. It was Antonio who first introduced me to the work of the psychologist Lev Vygotsky and the socio-cultural tradition. In our discussions, Professor Rizzo opened up my mind to look at work and the safety of work in a completely different way. And once you have been exposed to this way of thinking, there’s just no going back.  

How will AI technologies impact the health sector? 

AI has tremendous potential to revolutionise healthcare: it can be personalised to a patient’s specific needs; it can be used to recognise predisposition to disease; it can help with logistics; and in low- and middle-income countries it can be used to address significant shortages in the availability of qualified healthcare professionals.

The NHS as a health system is under tremendous pressure: financial constraints, workforce shortages, an ageing population, and novel ways of treating patients and disease. People are hopeful that the use of AI can help cope with such pressures. For example, one area where we have a significant backlog is imaging and diagnostics, such as breast cancer screening. Where a mammogram is currently looked at by two people, in the future this could be done by just one person supported by an AI tool, in order to better manage the backlog.

However, these benefits will only be realised if we move away from the current technology focus, and consider AI as part of a wider socio-technical system. So, while machine learning (ML) experts quite naturally focus on things such as data quality and bias in the data, it is not as simple as saying that good ML processes are sufficient to ensure that AI can be used safely and efficiently. Safety issues typically arise at the point where AI is integrated into clinical systems, and where people and technologies interact. 

So, we need to understand and design the interactions of the AI within the wider system. For example, how do we design the interface of the AI so that the user can easily spot any errors or inadequate outputs? How do we ensure that the use of the AI doesn’t have a negative impact on the workflow? Have we considered the potential impact of the use of AI on relationships between people?

With your expertise what are you hoping to bring to the Centre?

In addition to research, and as one of my key professional and personal priorities, I would like to bring an applied perspective and communicate our research and its applications to a much wider audience. I would like to raise awareness of HF/E, support its professionalisation as a discipline, and make it more accessible so that people can appreciate the contribution it can make.

We’re already moving towards achieving this through some of the projects previously completed at the Centre. Back in 2020, the Chartered Institute of Ergonomics and Human Factors (CIEHF), of whose Council I am a Trustee, published a White Paper on “Human Factors and Ergonomics in Healthcare AI” in collaboration with the Assuring Autonomy International Programme (now the Centre for Assuring Autonomy), and it remains among the most successful publications of the CIEHF. It led to the inclusion of HF/E in a new British standard, ‘BS 30440 - Validation framework for the use of AI within healthcare’. In this way, the White Paper made the findings accessible to policy makers and broadened their reach.

What kind of companies / partnerships are you interested in working with / forming, and why?

As a Centre and as a community, we are very much interested in talking to anyone, whether they’re working in healthcare, energy, aviation and air traffic management, maritime or beyond. We are actively seeking collaboration with anyone in these domains who has an interest in ensuring that we can design and use AI and autonomous systems safely.

We want to partner with people who are developing AI technologies across different sectors and who have an interest in developing and assuring those technologies so that they are useful, meaningful and safe when ultimately deployed. We also want to work with individuals from the operational domains where AI technologies are being put to use.

We’re also interested in working with stakeholders affected by AI - for example, patients - to understand their assurance needs and how we can satisfy those needs. We are actively working with policy makers, regulators such as the MHRA, and national standardisation bodies such as the British Standards Institution to include systems thinking and modern safety science in their activities.

What do you think are some of the most important challenges for AI technologies and their use?

There are several challenges and things we are thinking about. The first is how immediate they are: are AI technologies already in use or close to being used? The assurance challenges of technologies in use today are different from those of technologies far in the future. In the near term, one of the biggest challenges is organisational readiness - whether organisations know what they need in order to deploy AI technologies successfully, and whether they can meet those needs. Currently, many organisations don’t know what policies and strategies are needed for the assurance of these technologies in operation. There is a lot of enthusiasm to adopt AI technologies but, equally, there’s a lack of understanding of what is required.

The second immediate challenge is the changing regulatory landscape. Because the technology keeps changing, guidance and standardisation documents are being pushed out at a fast pace, which can be very confusing. This creates uncertainty, and it is where policy makers, researchers and developers need to come together.

For future AI technologies and their use, we are looking at societal and existential risks. Our traditional risk assessment methods are likely not well suited to assessing and managing the societal risks of future technologies. As we don’t know what these systems and their wider socio-political environment will look like, we need assurance that we can respond to changing circumstances. We need novel assurance methods that can deal with this uncertainty, and we need new conceptualisations of risk. This requires a different approach to doing assurance, and regulation needs to be more responsive.

Reconceptualising risk means that lay people need to be consulted and have a say. Assurance and regulation will need to rely on a broader societal dialogue about risk, given the inherent uncertainty of these technologies.

So, at the Centre, we are researching these issues under the umbrella term of human-centred AI. Human-centred AI represents a systems-based approach to designing and assuring AI technologies. 

Finally, when you’re not working where can we find you?

I like to go to the gym, where you can find me on the treadmill, and I like to go walking. Where I live there are nice surrounding areas, including the RHS Garden Wisley - if possible, I would even hold meetings there! I like activities that free my mind for creative thinking. I subscribe to the old scholar’s way of strolling and thinking.


If you’d like to speak to Professor Sujan about opportunities for collaboration and projects around systems safety engineering and safe AI, please get in touch via our webform.