Spotlight profile: Dr Victoria Hodge

News | Posted on Tuesday 22 October 2024

Dr Hodge is a Centre Research Fellow with an accomplished and multifaceted career. Here, she tells us how this has proven to be a valuable asset in her work, and how multidisciplinary collaboration with a broad range of people continuously provides her with fresh perspectives for both her research and ways of working.

Image: Dr Hodge, wearing safety glasses, stands in a lab holding a large remote control, one hand raised in a stop gesture; computer equipment is behind her.

Can you tell us about your research interests?

The main focus of my research has always been AI/ML applications and algorithms. I have used AI in a broad range of applications, including matching trademark images to flag potential copyright issues, analysing road traffic data to identify the best settings for traffic controls at junctions, and analysing video game and sports data to generate narratives that explain to viewers what they are watching.

More recently I have focused on the safety of AI for robotics, looking at how we can assure the safety of drones (uncrewed aerial vehicles, or UAVs) and autonomous ground robots for inspection and search and rescue in environments including mines, forests, factories and industrial infrastructure. I am currently developing a robotics platform in which multiple robots navigate, inspect and maintain a solar farm. This involves programming robots to sense their environment; understand their state and surroundings from what they sense; decide what to do according to their current state and goal; and perform the chosen action. All of this has to be done safely, even though the real world is ever changing and the robot will have to understand and handle situations it has never encountered before.
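The sense-understand-decide-act loop described above can be sketched roughly as follows. This is an illustrative toy under assumed names, not the project's actual software: the class, its methods and the "solar_panel_7" goal are all hypothetical, and a real robot would read lidar, cameras and other sensors rather than canned values.

```python
# Illustrative sketch of a sense -> understand -> decide -> act loop.
# All names here are hypothetical; no real robotics framework is used.

class InspectionRobot:
    def __init__(self, goal: str):
        self.goal = goal

    def sense(self) -> dict:
        # A real robot would read lidar, cameras, GPS, etc.
        # Here we return a fixed reading for illustration.
        return {"obstacle_ahead": False, "battery": 0.8}

    def understand(self, reading: dict) -> str:
        # Fuse raw sensor data into an estimate of the situation.
        return "blocked" if reading["obstacle_ahead"] else "clear"

    def decide(self, state: str) -> str:
        # Choose an action from the current state and goal; a safe
        # fallback covers situations not explicitly handled, echoing
        # the need to respond safely to the unexpected.
        actions = {
            "clear": f"move_toward_{self.goal}",
            "blocked": "stop_and_replan",
        }
        return actions.get(state, "hold_position")  # safe default

    def act(self, action: str) -> str:
        return f"executing: {action}"


robot = InspectionRobot(goal="solar_panel_7")
state = robot.understand(robot.sense())
print(robot.act(robot.decide(state)))  # → executing: move_toward_solar_panel_7
```

The key design point mirrored here is the safe default in `decide`: when the robot's understood state is not one it was explicitly programmed for, it falls back to a conservative action rather than failing unpredictably.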

I have also worked in industry on software engineering projects for monitoring industrial environments and on developing medical diagnostics products to diagnose neurodegenerative conditions. I am keen to use industry-standard software engineering frameworks such as Agile and end-to-end development in my research work at the University.

What’s the most interesting project you’ve worked on and why?

Last year I led a team that organised a Hackathon project to assure the safety of navigating UAVs in challenging environments such as mines - this was something quite different from my normal research. It required organising and working in teams on tasks including devising a use case feasible for 30 to 40 people to work on; arranging the Hackathon - everything from writing legal documents (like terms and conditions) to organising food and accommodation; building a mock mine in a laboratory at the Institute for Safe Autonomy (where the Centre is based) for the participants to use; and preparing presentations to describe the task and engage participants. Organising a successful event is never possible without input from others, so I’m grateful for the help from technical staff and professional services staff at the University of York.

The Hackathon produced a lot of outputs: four direct academic publications, plus follow-on publications using the artefacts created. The great thing about the Hackathon is that PGRs, in particular, are still using the software artefacts, and many researchers and PGRs will be using the knowledge gained to improve their research.

Why is focusing on both safe robotics platforms and assuring AI and robots in uncertain environments important?

Robots are becoming autonomous across multiple domains and applications. This means there is limited or no human involvement in their operation. There’s a lot going on for these robots - they have to sense and understand their environment and task, decide what to do, and act, all without human oversight. Many operating domains are safety critical, and robots operating in those domains can cause harm. This could be physical harm to humans, machinery or infrastructure, environmental harm, or ethical harm. Ensuring that the robot will cause no harm is where safety assurance comes in. Safety assurance reduces the risk to as low as reasonably practicable. We need to have confidence, and be able to prove and justify, that an acceptable level of safety has been reached. Assuring and proving the safety of AI and ML is an important assurance challenge for researchers at York and, more widely, for industry, standards bodies and regulators.

To answer the second part of the question, the more dynamic the environment is, the more difficult it is to have confidence that an autonomous robot will always meet its safety guarantees. The real world is ever changing, from the weather to day/night, to new buildings and road layouts, to people moving around in their daily lives. This creates uncertainty and unknown situations for robots operating there. We cannot know in advance what a robot will encounter, yet robots need to respond safely in all situations. When we design a robot we have to build in robustness and safe responses. Again, this is extremely challenging and is an important area of current safety assurance research at York.

You work across many different projects in your role. How do you think this influences your ways of working?

During my research I have worked across multiple disciplines and with people from a broad range of backgrounds, collaborating with academics, local government (such as City of York Council and TfL), and industrial partners such as ICL Boulby Mine, PA Media and Thomson Reuters. This has allowed me to learn from their different ways of operating, understand alternative viewpoints, and gain new knowledge, which in turn helps generate new ideas for research. I like to keep open-minded and consider all options. It helps with working in teams and makes everyone more collaborative - it is always important to take an all-round view of tasks and consider different angles before designing and developing a solution. Learning from multiple disciplines makes people more innovative by combining different perspectives and knowledge - this kind of working environment in the Centre is really beneficial for researchers and our partners. Problem solving in research should be a consensus built by synthesising and integrating ideas and knowledge.

Finally, where can we find you when you’re not working?

My outside interests are as far removed from my day job as possible - much more relaxing and mindful. I enjoy a variety of arts and crafts, particularly painting - I also enjoy pottery and glass work, though I have less time for these at the moment. I like to keep fit, particularly cycling and walking, with the occasional game of sport. I also enjoy reading (paper) books, both fiction and non-fiction - I am currently reading “Good Strategy / Bad Strategy” by Richard P. Rumelt, which I’m finding really insightful.


If you’d like to speak to Dr Hodge about opportunities for collaboration and projects around AI, anomaly detection, machine learning and safety assurance of robotics and autonomy, please get in touch via our webform.