Our team
Our team comprises world-leading experts on safety and safety-critical systems. We are wholly focused on assuring the safety of AI-enabled autonomous systems. Our team combines industry and research experience, building on a long heritage of research and innovation in safety-critical systems at York.
Learn more about our team, their individual research interests and contact information below.
Leadership team
Professor McDermid has worked on the safety of complex, computer- and software-controlled systems for almost forty years, leading major research initiatives and acting as an advisor to industry and government on several continents. He first started work on the safety of AI-controlled systems in the early 2000s (neural networks for engine control for Rolls-Royce, adaptive flight control for NASA).
Since January 2018 he has led a major initiative supported by the Lloyd’s Register Foundation, addressing the safety and assurance of AI and autonomous systems across a wide range of domains (e.g. health, autonomous vehicles, maritime, mining/quarrying) and with global reach.
His recent activities around AI safety include advising the UK Department for Transport and the Law Commission on safety and ethics for the introduction of self-driving vehicles to the UK; advising on strategy for AI and Data Science for the UK’s National Physical Laboratory; advising the UK’s Health and Safety Executive (HSE) on its strategy for assuring and regulating AI; presenting at two fringe events for the first international AI Safety Summit; and acting as a Senior Advisor in the production of the International Scientific Report on the Safety of Advanced AI for the second AI Safety Summit in Korea.
Professor McDermid has supervised around 40 PhD students and published almost 500 papers and books. He is also a Non-Executive Director of the HSE and has advised FiveAI (autonomous driving) and SAIF Systems (safe flight control).
Visit John McDermid's profile on the York Research Database to see publications, projects, collaborators, related work and more.
Send an e-mail to Professor McDermid using our contact form.
Professor Habli previously worked at the Rolls-Royce UTC in Systems and Software Engineering. Prior to that, he worked on a number of large-scale Geographic Information Systems, mainly for the energy sector. In 2009, he completed his PhD on model-based assurance of safety-critical product lines in the Department of Computer Science, University of York.
His work is highly interdisciplinary and involves active collaborations with ethicists, lawyers, clinicians, health scientists and economists. He conducts empirical and industry-based research and has co-led research work with engineers in organisations including Rolls-Royce, NASA and Jaguar Land Rover. In 2015, he was awarded a Royal Academy of Engineering Industrial Fellowship through which he collaborated with the NHS on evidence-based means for assuring the safety of digital health systems. He also works closely with the National Health Service in England on safety assurance of AI in healthcare.
Professor Habli’s expertise is in the design and assurance of safety-critical systems, with a particular focus on AI and autonomous systems (e.g. for clinical diagnosis and autonomous and connected driving).
Visit Professor Habli’s profile on the York Research Database to see publications, projects, collaborators, related work and more.
After initially spending some time developing telecommunications software, Professor Burton earned his PhD from York whilst working as a Research Associate within the High Integrity Systems Engineering group. He then spent several decades managing research, development and consulting organisations in industry, including at DaimlerChrysler (Mercedes) and Bosch. Most recently he was scientific director for safety assurance at the Fraunhofer Institute for Cognitive Systems. At the Centre, Professor Burton facilitates the transfer of its world-leading research results to industrial applications.
He is very active in several standardisation committees and is convenor of the International Organization for Standardization (ISO) working group on safety and AI for road vehicles. This work has included leading the development of ISO PAS 8800, Road vehicles - Safety and artificial intelligence.
Professor Burton’s research explores the intersection of systems safety engineering, artificial intelligence and the legal/ethical and regulatory considerations necessary to form convincing safety assurance arguments for complex, autonomous and AI-based systems.
Visit Professor Burton’s profile on the York Research Database to see publications, projects, collaborators, related work and more.
Dr MacIntosh is responsible for securing and delivering large initiatives in the area of safety assurance of autonomy and artificial intelligence. In particular, she is responsible for the strategic development, governance, finance, operations, communications, external partnerships, financial sustainability and policy engagement of the Centre for Assuring Autonomy, and acts as Director of Operations and co-investigator on the £16.2M UKRI AI Centre for Doctoral Training in Lifelong Safety Assurance of AI-enabled Autonomous Systems (SAINTS).
Dr MacIntosh has held leadership positions in robotics, autonomy and AI for 10 years, including managing the development of strategic partnerships with major organisations across the private and public sectors. From 2018 to 2024 she led the delivery of the Assuring Autonomy International Programme, including managing a large portfolio of collaborative research and sub-awards. During this time, she was pivotal in securing investment for York’s flagship Institute for Safe Autonomy, acting as the university’s lead client for the design and build of this bespoke living lab, developing the case for investment and ensuring the laboratory facilities were fit for purpose. Her background spans science, engineering and medicine, and she has previously held a number of management and leadership roles in which she has advocated for collaborative and multidisciplinary working.
Academic team
Dr Alexander is a Senior Lecturer in the High Integrity Systems Engineering (HISE) group, Department of Computer Science. His main research focus is automated testing, with a particular interest in the safety validation of autonomous robots. He is also interested in simulation methods for hazard analysis and empirical evaluation of safety engineering methods and techniques.
He is currently supervising research projects on developer experience of security static analysis tools, the impact of advanced control systems on process plant safety engineering, and automated robot testing using simulation and situation-based coverage. He has published over 50 papers on these and related topics.
Visit Dr Alexander’s profile on the York Research Database to see publications, projects, collaborators, related work and more.
Professor Calinescu leads a research team developing theory and tools for the formal verification and development of autonomous and AI systems, with applications in domains including healthcare, social care and environmental protection.
He is the Principal Investigator of the £3m UKRI Trustworthy Autonomous Systems Node in Resilience, and was a technical co-lead of the £12m Assuring Autonomy International Programme. He pioneered formal methods paradigms including model checking with confidence intervals, compositional parametric model checking through model fragmentation, and formal specification, validation and verification of social, legal, ethical, empathetic and cultural requirements for AI and autonomous systems.
Professor Calinescu has published over 200 research papers, chaired leading conferences including ICECCS, SEFM, SEAMS and ICSA, and is an Associate Editor for ACM Computing Surveys and ACM Transactions on Autonomous and Adaptive Systems. He received a British Computer Society Distinguished Dissertation Award for his University of Oxford doctoral research, is an IEEE Senior Member, and is a founding member of the IEEE Verification of Autonomous Systems standards working group.
Visit Professor Calinescu’s profile on the York Research Database to see publications, projects, collaborators, related work and more.
Send an e-mail to Professor Calinescu using our contact form.
Dr Fang is a Lecturer specialising in the development of trustworthy AI-enabled autonomous systems. He has a Bachelor of Engineering degree in Information Technology from Vaasa Polytechnic, a Master of Science degree in Geoinformatics from the University of Twente, and a PhD in Computer Science from the University of York.
Throughout his career, he has employed advanced machine learning and statistical methods to address the challenges presented by uncertainties at various stages of an autonomous system’s lifecycle. His results have been published in prestigious conferences and journals across disciplines, including EWSN, ICSE, TSE, and Sensors and Actuators B.
His research is dedicated to extracting critical insights from raw sensing data, effectively identifying and mitigating uncertainties, efficiently monitoring and verifying system compliance, and implementing adaptive actions in a timely manner. He is interested in exploring the synergy of these techniques to innovate new methods that enhance the trustworthiness of autonomous systems.
Visit Dr Fang’s profile on the York Research Database to see publications, projects, collaborators, related work and more.
Prior to taking up his current role, Dr Hawkins was a Senior Research Fellow in the Assuring Autonomy International Programme, where he investigated the assurance and regulation of robotic and autonomous systems. He has been working with safety-related systems for 20 years, both in academia and in industry.
Dr Hawkins has previously worked as a software safety engineer for BAE Systems and as a safety advisor in the nuclear industry.
His research focuses on safety assurance and safety cases for AI-enabled autonomous systems.
Visit Dr Hawkins’ profile on the York Research Database to see publications, projects, collaborators, related work and more.
Dr Jia is a Lecturer and her research spans the interdisciplinary fields of Artificial Intelligence (AI), safety assurance, and healthcare. She has worked on a range of AI-based clinical decision-support systems (CDSS) in areas such as sepsis management, weaning from mechanical ventilation, and cancer diagnosis.
Her current research focuses on developing human-centred explanations for CDSS. As the principal investigator of a project dedicated to creating and evaluating various forms of explanations, she is at the forefront of enhancing the interpretability and safety of AI in clinical settings.
Dr Jia holds a PhD in Computer Science from the University of York and a Research Master’s degree from the Chinese Academy of Sciences. With her extensive academic and research background, she is pioneering the development of safe and effective AI-based CDSS that improve patient care and quality.
Visit Dr Jia’s profile on the York Research Database to see publications, projects, collaborators, related work and more.
Dr Nicholson is a Reader in System Safety Engineering and an educator, researcher, standards developer and consultant in system safety engineering. He is a Director of the UK Safety Critical Systems Club, and he aims to improve the competency of individuals who fulfil roles as industrial practitioners or regulators responsible for safety in complex safety-critical systems.
Dr Nicholson is an author of industrial standards such as ARP4754A: Guidelines for Development of Civil Aircraft and Systems and PAS 1882: Data collection and management for automated vehicle trials for the purpose of incident investigation. He also developed the world's first MSc module on the Safety Assurance of Robotics and Autonomous Systems (taught at the University of York), and has developed, and is lead presenter of, CPD courses on autonomous systems and machine learning safety for organisations including the VCA, MCA, RSSB, NHS and BAE Systems. He is a co-author of the book “Data-Centric Safety”.
His research interests focus on safety assurance of autonomous systems especially in the area of data safety. Other work includes operational safety cases, assessing safety engineering practice, and inclusion of EMC risk as part of the safety assurance process.
Send an e-mail to Dr Nicholson using our contact form.
Dr Paterson is a Senior Lecturer. He obtained his PhD in Control Theory in 1993, then spent a brief period in academic research roles before moving into industry, working as a programmer, team leader, project manager and ultimately as the Technical Director of a software consultancy unit within a large accountancy firm, where he developed and delivered bespoke internet-based solutions. He returned to academia in 2014 to study for a second PhD, in Computer Science, at the University of York, and has since worked in various research roles looking at machine learning and its role in autonomous systems.
His primary research considers the safety of autonomous systems. He is particularly focused on developing autonomous systems capable of operating in real-world contexts where uncertainty cannot be avoided. This includes uncertainty in their understanding of the current world state and uncertainty in the models they use to reason about future actions.
Visit Dr Paterson’s profile on the York Research Database to see publications, projects, collaborators, related work and more.
By background, Professor Sujan is a Chartered Ergonomist and Human Factors specialist (C.ErgHF). He has worked across a number of safety-critical industries for over 25 years.
Professor Sujan is a Trustee of the Chartered Institute of Ergonomics and Human Factors (CIEHF), which is the professional membership body for Human Factors and Ergonomics in the UK. He chairs the CIEHF special interest group on AI and Digital Health.
He is co-author of the book "Building Safer Healthcare Systems", which forms the basis for the national patient safety syllabus adopted by NHS England, and he supports the delivery of that syllabus.
Professor Sujan is interested in developing and exploring methods for the safety assurance of systems that include AI and machine learning. More broadly, his research interests are novel safety science approaches for socio-technical systems and the application of human factors and ergonomics approaches to real-world problems.
Visit Professor Sujan's profile on the York Research Database to see publications, projects, collaborators, related work and more.
Research team
Dr Clegg is a Research and Innovation Fellow. He teaches introductory courses on the safety of machine learning and related topics, including explainable AI (XAI), bias, uncertainty and fairness in AI. His current research interests include large language models and other foundation models, generative AI toolchains such as LangChain/GPT-4, and safeguarding user interaction with machine learning.
Often collaborating with industrial partners such as Rolls-Royce, he has worked on projects as diverse as model-based development of failure logic for civil aviation gas turbines, geofencing safe corridors in rapidly reconfigurable factories, and using machine search to expose risk in air traffic control sectors.
He is a member of the CfAA special interest group on Safer Generative AI (SGAI). Dr Clegg is interested in the reasoning capability of language models; verifying retrieval-augmented generation (RAG) and fine-tuning for local LLMs; repeatability; and safeguarding. He is also interested in new architectures and alternatives to transformers that support better reasoning, e.g. H-JEPA to enable analytics of complex failure logic, and in the safety of LLMs as de facto operating systems in devices, control interfaces and scene recognition.
Visit Dr Clegg’s profile on the York Research Database to see publications, projects, collaborators, related work and more.
Yash Deo is a Research Associate in Lifelong AI Safety. During his PhD in generative AI for medical imaging at the University of Leeds, he worked on creating virtual populations for in-silico trials; he expects to complete his viva by December 2024. Prior to that he completed a Master’s in Advanced Computer Science and interned as an investment analyst at the Creator Fund.
Alongside his focus on lifelong AI safety, Yash Deo is also working on how to effectively evaluate AI health models.
Visit Yash Deo's profile on the York Research Database to see publications, projects, collaborators, related work and more.
Dr Fatehi is a Research Associate specialising in deep learning, reinforcement learning and XAI. He earned his PhD from the Computer Vision Lab at the University of Nottingham, where his primary research focused on leveraging deep learning techniques for low-resource automatic speech recognition systems.
His current research interests lie in the application of explainable reinforcement learning within the context of 6G networks.
Visit Dr Fatehi's profile on the York Research Database to see publications, projects, collaborators, related work and more.
Dr Fearnley is a Research Associate in Safe and Ethical AI. She holds a PhD in Philosophy from the University of Glasgow, and her research background is in ethics and causation. During her PhD she collaborated on several interdisciplinary projects involving the ethics of technology, and these collaborations led her to specialise in the philosophical aspects of AI. Dr Fearnley joins the CfAA from Charles University, Prague, where she held her first post-doctoral position, working on roles and responsibility.
Her work at the Centre draws on her background in moral philosophy and causation to focus on identifying AI-related harms, tracing accountability and assuring transparency. Her research interests include safety assurance, explainable AI methods and responsibility attribution.
Visit Dr Fearnley's profile on the York Research Database to see publications, projects, collaborators, related work and more.
Jonathan is a Research Software Engineer in the Assuring Autonomy International Programme, with a focus on the safety and recoverability of autonomous mobile platforms. He joined the AAIP after completing his Master’s degree in Data Science and Artificial Intelligence at the University of Hull. He is particularly interested in how robotics and autonomous systems can be integrated into daily life to improve quality of life and ease the stress of living, particularly for elderly and disabled individuals.
Dr Hodge is a Senior Researcher (Research Fellow), computer scientist and software engineer. She holds a PhD in Computer Science from the University of York. She has worked in industry as a software architect for medical diagnostics products, and as a software engineer on applications including condition monitoring and anomaly detection in industrial environments and deep reinforcement learning for robot navigation.
Dr Hodge currently works on a number of projects in through-life safety assurance of artificial intelligence (AI) for autonomous systems, with a particular focus on safe robotics platforms and on assuring AI and robots in uncertain environments. She has authored over 70 publications covering AI, machine learning, anomaly detection, data analytics frameworks, neural networks, robotics and safety assurance.
Her research and software development focus on AI, anomaly detection, machine learning and safety assurance across a variety of domains centred on robotics and autonomy.
Visit Dr Hodge’s profile on the York Research Database to see publications, projects, collaborators, related work and more.
Dr Hughes is a Research Associate who joins the CfAA from the Department of Computer Science, University of York, where their PhD focused on how video game user experiences are enabled when both player and game share autonomy over what interactions occur. They hold an MPsych in Psychology from the University of York.
They have a research focus on the safety of human-AI teaming, and are particularly interested in how users experience automated technology, how they implement their goals, and how users describe their interactions. These interests have previously been applied to the aviation and healthcare sectors.
Visit Dr Hughes’s profile on the York Research Database to see publications, projects, collaborators, related work and more.
Dr Imrie completed his PhD at the University of Edinburgh, where he investigated low-level controllers, emotional robotics, reinforcement learning, and information theoretic approaches to robot control. He joined the Assuring Autonomy International Programme at the University of York in 2021, and is currently a Research and Innovation associate with the Centre for Assuring Autonomy.
Dr Imrie's research interests include the safety and practicalities of deploying a variety of robotics and AI systems, capturing the uncertainty of machine learning (ML) with a particular interest in reinforcement learning, and the emergence of behaviours in multi-agent/swarm systems.
He has worked on numerous research projects such as correct-by-construction controller synthesis with ML perception components, and safe reinforcement learning with validation indicators. Dr Imrie has also conducted applied research, including the ASPEN project which investigated how to exploit AI and autonomous systems for forest health maintenance.
Visit Dr Imrie’s profile on the York Research Database to see publications, projects, collaborators, related work and more.
Dr John Molloy joined the AAIP as a Research Fellow in 2020. His work includes the assurance of understanding and perception suites in autonomous systems, and the effects of environmental factors on sensor performance.
Previously, he spent ten years in the Electromagnetic Technologies Group at the National Physical Laboratory (NPL), where he was technical lead for autonomous systems and future communications. He delivered several projects on perception in vehicular and marine autonomous systems, including two reports on behalf of CCAV, and provided technical consultancy for external customers such as Lloyd’s Register, Osram GmbH and the USAF.
From 2015 to 2020 he was a visiting research scientist at the Hyper Terahertz Facility, University of Surrey, providing technical expertise on the development of high-power laser and electro-optic systems for remote sensing and quantum applications.
John received his EngD in Applied Photonics from Heriot-Watt, Scotland (2016) for work on nonlinear optics and the development of solid-state THz Optical Parametric Oscillators (OPOs). He received his MSc in Photonics & Optoelectronic Devices from St Andrews & Heriot-Watt, Scotland (2010) and his BSc (Hons) in Physics & Astronomy from NUI Galway, Ireland (2006).
Previously in industry, he worked as a laser engineer, manufacturing diode-pumped solid-state lasers, and as an electronic engineer, developing high-speed high-capacity storage solutions for digital media.
Dr Preston is a Research Associate in Safe Autonomy in AI. She holds a PhD from the University of Strathclyde, Glasgow, on developing healthcare AI technology from a human factors perspective, and joined the Centre from there. Dr Preston is co-chair of the Chartered Institute of Ergonomics and Human Factors AI and digital health special interest group.
Her research interests are in human-centred assurance, development and organisational readiness for future AI technologies.
Dr Ryan is Lloyd’s Register Foundation Senior Research Fellow and has worked in the field of safety-critical software engineering for over 25 years, both in academia and industry. Her research revolves around assuring through-life safety for complex and dynamic software including autonomous systems and machine learning.
She has authored and co-authored dozens of papers about software safety in multiple applications, including autonomy and artificial intelligence. She also has extensive industrial experience, writing operational safety cases, performing dynamic and static software code analysis, acting as an independent safety advisor and contributing to safety standards.
Dr Ryan has successfully secured funding for, and led, multiple research and industrial projects, working with partners from diverse disciplines and backgrounds. She is a chartered engineer, and her PhD examined safety analysis of operating systems. She was previously chair of the Safety Critical Systems Club working group developing guidance for assuring autonomous systems.
Visit Dr Ryan's profile on the York Research Database to see publications, projects, collaborators, related work and more.
Dr Shahbeigi is a Research Associate in AI Safety Assurance. She completed her PhD at Warwick Manufacturing Group, University of Warwick, where she focused on enhancing the safety and reliability of autonomous and automated driving systems.
Her current research involves developing methodologies for specifying and validating requirements on machine learning components of autonomous systems.
Visit Dr Shahbeigi’s profile on the York Research Database to see publications, projects, collaborators, related work and more.
Dr Stefanakos is a Postdoctoral Research and Innovation Associate. He earned his PhD in Computer Science from the University of York in 2021, focusing on software analysis and refactoring using probabilistic modelling and performance antipatterns. In his current role he focuses on the safety assurance of autonomous and self-adaptive systems.
Dr Stefanakos has engaged in significant research, including developing a diagnostic AI system for robot-assisted A&E triage and ensuring the safe deployment of assistive care robots to support elderly individuals. His collaborative research on analysing and debugging normative requirements has earned an ACM Distinguished Paper Award. Additionally, he co-supervises a PhD student on the lifelong assurance of online learning for robotic and autonomous systems, and has guest lectured on MSc courses.
He is deeply interested in formal methods to ensure the reliability and safety of autonomous and self-adaptive systems in dynamic environments. His research centres on modelling and verifying human-robot interactions in diverse contexts, such as manufacturing and healthcare, aiming to ensure safety during the runtime adaptation of robot configuration and behaviour.
Visit Dr Stefanakos’s profile on the York Research Database to see publications, projects, collaborators, related work and more.
Dr Vazquez is a Research Associate whose work focuses on the task scheduling problem in multi-robot systems, the formalisation of requirements, the prediction of probabilistic requirement violations, and system self-adaptation. She received her MSc in Computational Intelligence and Robotics from the University of Sheffield with distinction.
Her research interests include formal methods, multi-robot systems (MRS), task allocation and planning, domain-specific languages for MRS, autonomous systems ethical concerns, and self-adaptive and critical systems.
Dr Vazquez is currently part of the Engineering and Physical Sciences Research Council ‘Trustworthy Autonomous Systems Node in Resilience’, as well as the Horizon Europe project AI4WORK. As an early career researcher, she has published in the leading software engineering journal IEEE Transactions on Software Engineering and at conferences such as NASA Formal Methods. Dr Vazquez has collaborated with the Robust Software Engineering group at NASA Ames Research Center, and is currently collaborating with multiple industrial and academic partners as part of the AI4WORK initiative.
Visit Dr Vazquez’s profile on the York Research Database to see publications, projects, collaborators, related work and more.
Programme management and administration
Chrysle joined the Assuring Autonomy International Programme as its Administrator at inception in January 2018, having previously been PA to the Head of the Department of Computer Science, and now provides administrative support for the Centre for Assuring Autonomy.
Bob is the Centre for Assuring Autonomy’s Research and Innovation Manager. With over 15 years’ experience in the Higher Education sector, Bob has worked with funders and collaborators throughout the world, covering many disparate fields of work. He has supported projects spanning research, consultancy, knowledge exchange and education, ranging in size from small individual fellowships to multi-million-pound grants.
Bob provides support for a portfolio of projects within, and aligned to, the CfAA. He carries out flexible project management, delivering robust monitoring and support across grant set-up, contract negotiations, financial management and reporting.
Before starting work in HE, Bob had a career in the hospitality industry and, very briefly, as a gravedigger. He believes that this gives him a particularly useful and focused sense of perspective…
Sara manages communication and media for the Centre for Assuring Autonomy as Communication and Impact Manager. A Chartered PR Professional, she worked in agency PR for most of her career, eventually running her own agency specialising in working with start-up and spin-out companies in sectors including renewable energy and robotics.
She has been a visiting lecturer in Digital Marketing and has a special interest in EDI within the PR sector, speaking at industry events and running EDI training for PR professionals. She is an active member of the CIPR and is currently the Chair of the CIPR Yorkshire and Lincolnshire Committee, a member of the Diversity and Inclusion Network, and a trainer for The Blueprint.
Jennifer is Communication Assistant and joins the Centre from another Higher Education institution, where she worked in student recruitment and marketing. She works closely with the Communication and Impact Manager to provide communications support to colleagues in the Centre. This includes the creation, development and delivery of communication projects and campaigns for different stakeholder groups.
Research students
John is a part-time PhD research student. His research is within the area of the safety assurance challenges of machine learning systems, with a focus on the cost and reward mechanisms associated with reinforcement learning. John is a recent graduate of our MSc in Safety-Critical Systems Engineering and also holds an MEng (Hons) in Systems Engineering from Loughborough University. He is a chartered engineer and works full-time in industry as a product safety specialist and technical expert in software safety. John is also a member of several safety-related bodies, including MISRA C++ and the SCSC’s Data Safety Initiative and Safety of Autonomous Systems working groups.
Hasan Bin Firoz joined the University as an Early Stage Researcher in the MSCA-ETN SAS Project in 2022. He completed his BSc in Electrical and Electronic Engineering at the University of Dhaka, Bangladesh, in 2017. He earned his MEng in Electronics and Communication Engineering from Shanghai Jiao Tong University in 2020.
During his Masters programme, Hasan worked as a research intern at the BeiDou Research Institute in Shanghai, focusing on designing a deep neural network-based controller for improved quadrotor trajectory tracking. After graduation, he joined a software company in Shanghai and worked on a Microsoft project for nearly two years.
Hasan’s PhD is fully funded by the Doctoral Centre for Safe, Ethical and Secure Computing (SEtS). His research focuses on safe decision-making for mobile autonomous systems; control systems; and artificial intelligence and machine learning.
Brendan is currently studying the viability of plasma-physics-informed maintenance systems for nuclear fusion power plants. Prior to starting his PhD at the University of York, Brendan graduated from UCL with an MSci in Physics, with a final-year project focusing on antimatter physics. During an internship with a private, UK-based fusion energy company, he developed an interest in fusion as a clean and efficient power source.
Brendan's project is highly interdisciplinary, combining computer science, high-performance computing (HPC), plasma physics and fusion engineering. The intent of the project is to analyse the challenges to safe automated maintenance of future fusion power plants, thereby tackling a key barrier to practical fusion energy. To this end, Brendan studies the safety concerns of fusion from a full-plant point of view, taking into account constraints related to the physics and chemistry of fusion. He also uses plasma simulation HPC packages to investigate hypothetical maintenance scenarios that may occur within the reactor itself.
Brendan's research interests include machine learning, robotics in challenging environments, tokamak engineering, high-performance computing, plasma physics, space weather, and gravitational physics.
Jane Fenn is a part-time PhD research student (funded by BAE Systems) who has worked at BAE Systems since 1989. She discovered a passion for Safety Engineering early in her career, which she has nurtured through academic study and participation in joint industry/academic research.
Jane works within the Air Sector, where she is involved with establishing the safety of experimental aircraft programmes and associated new technologies. She works on several standards committees, including SAE/EUROCAE G-34/WG-114 on AI in Aviation.
She is a Fellow of the IET and SaRS, a BAE Systems Global Engineering Fellow, and a Licensed Technologist in Robotics and Autonomy Safety Engineering. Her PhD examines the transition between design-time and operational Safety Cases, with a first paper, “A New Approach to Creating Clear Operational Safety Arguments”, included in the SASSUR workshop at SafeComp 2024.
Josh is a Postgraduate Researcher. He has a keen interest in railway technologies and safety and began his academic career as an artificial intelligence student at Northumbria University, where he developed a strong interest in the practical applications of AI.
Josh has collaborated with Professors Simon Burton and John McDermid to publish multiple works demonstrating and exploring the Safe Autonomy of Complex Railway Environments within a Digital Space (SACRED) methodology. Additionally, he has engaged in interdisciplinary research with the York Psychology department on the Wellcome Leap 1kD project.
Vira is a chartered chemical engineer with over 19 years’ experience, working in a wide range of process industries. Her career has involved designing, constructing, and commissioning chemical plants in the UK and Europe. Vira is a process safety specialist, a qualified HAZOP lead and provides consultancy services in the chemical and pharmaceutical industry.
As a doctoral research student at York, Vira is researching the safety implications of advanced control techniques being adopted in the chemical industry.
Shakir Laher was a safety engineer at NHS Digital, before moving to The Alan Turing Institute as a research application manager. He has experience in developing software and leading research projects in AI/machine learning assurance focused on safety. He holds degrees in Information Technology, Computer Science and Educational Pedagogy, with his most recent qualification being an MSc in Computing.
Shakir is a member of several research and special interest groups, such as the Safety of Autonomous Systems Working Group and the British Standards Institution’s AI standards mirror committee ART/1. Through these connections he has authored sections of BS 30440:2023 - Validation framework for the use of artificial intelligence within healthcare.
His research interests include autonomous systems, safety engineering, assurance cases, argumentation patterns, AI governance and regulation and AI in healthcare.
Nawshin’s research explores areas such as sensor technology, fusion techniques, calibration, safety assessment, human-machine interaction, and ethical considerations. With a strong interest in safety and risk assessment, she strives to develop methodologies for validating and verifying the safety of autonomous systems.
Send an e-mail to Nawshin Mannan Proma using our contact form.
Berk earned his Bachelor's degree in Industrial Engineering from both Istanbul Technical University and Southern Illinois University Edwardsville (SIUE) in 2019, and his Master's degree in Industrial Engineering from SIUE.
During his Master's he worked on data science, applied statistics and optimisation as a Teaching and Research Assistant. Berk’s current research project focuses on developing a safety-critical, personalised, AI-based support model targeting interventions for patients with Type 2 diabetes at the highest risk of developing multimorbidity.
Tejas completed his Master's degree in Computer Science at Trinity College Dublin, specialising in graphics and vision technologies, and was previously a Deep Learning Researcher at Intel, working on robotics and computer vision solutions.
He is a multi-domain researcher, bridging the gap between computer vision and safety, and is involved with the Vision, Graphics and Learning Group as well as the High Integrity Systems Research Group. His research is primarily focused on assuring the safety of autonomous vehicles.
Tejas’ research focuses on computer graphics, computer vision, autonomous vehicles and embedded programming.
Annabelle is studying how social, legal, ethical, empathetic and cultural requirements can be upheld by robotic systems operating under uncertainty, supervised by Professor Ana Cavalcanti and Professor Radu Calinescu. She previously obtained a Master's degree in Cyber Security and a Bachelor's degree in Computer Science from the University of York, and has over six years' experience in software development with different programming languages.
Annabelle is a member of the ‘Trustworthy Autonomous Systems Node in Resilience’ project, where she has presented the progress of her research and contributed to discussions regarding the project’s research interests. She completed her Master's dissertation, on Detecting and Mitigating Data Anomalies in Self-Adaptive Systems, in 2023.
Her research focuses on the formalisation and validation of non-functional requirements; parametric and probabilistic model checking; and model-driven engineering methods for the verification and synthesis of controllers for autonomous and self-adaptive systems.
Josh is a PhD Research Student on the Assuring Autonomy International Programme. His research is focused on safety within multi-agent reinforcement learning, primarily for robotic teams. Prior to joining the University of York, Josh obtained an MSc in Advanced Computer Science from the University of Birmingham, focusing his work on intelligent mobile robotics and multi-robot systems in both theoretical and practical settings. Much of this work built upon Josh's foundations in game theory and reinforcement learning, gained through a BSc in Computer Science (Games Development) at the University of Wolverhampton.
Isobel earned her Bachelor's degree from the University of York in 2021 and, after graduating, secured an internship with YorRobots and a position as a Research Assistant with the Assuring Autonomy International Programme.
Her PhD research focuses on eliciting insights from our current and emerging understanding of common sense to facilitate the development of autonomous systems with higher levels of resilience than is currently possible. Isobel is a member of the Trustworthy Autonomous Systems Node in Resilience.
Her research interests include common sense and intuition-based reasoning, ethics and moral psychology, and decision-making under uncertainty.
Georgia completed her BSc in Mathematics at the University of York in Summer 2022 and began her PhD in October 2022.
Her PhD project aims to develop key components of an end-to-end hybrid AI solution for the triage of Emergency Department patients. Her research builds on the Diagnostic AI System for Robot-Assisted A&E Triage (DAISY) project.
Her research interests include clinical decision support systems (CDSS), Bayesian networks and machine learning.
Mohammad Tishehzan is an Early Stage Researcher in the ETN-PETER Project in the Department of Computer Science at the University of York. As ESR11, he carries out research on “Modelling and Reasoning about EMI (Electromagnetic Interference) Interactions in Autonomous and Complex Vessel”.
His primary goal is to develop a through-life, EMI risk-based, modular safety case approach in a form suitable for all stakeholders in the industry. He obtained his Master's degree in electrical engineering (field and wave) in 2019. Before starting his PhD in 2020, he worked as an EMC (Electromagnetic Compatibility) test engineer.
Send an e-mail to Mohammad Tishehzan using our contact form.
Bernard spent 12 years in the Merchant Navy before attending Loughborough University, where he earned a degree in Electro-Mechanical Power Engineering. In 1993, he joined Lloyd’s Register, and in 2007, he became the Global Head of Electro-Technical Systems within the Technology Directorate.
Since 2017, Bernard has focused on regulatory development for maritime autonomous infrastructure. He has published technical papers on maritime electro-technical systems, serves on the technical committees of two leading classification societies and on international committees, and is a member of ISO/BSI, CIMAC and Sea Europe. He is an advisor to the Danish Maritime Association at the IMO, and began a part-time PhD in Computer Science at the University of York in 2018.
Qi Zhang’s research focuses on the integration of reinforcement learning and formal methods for autonomous robots in dynamic environments, with an emphasis on healthcare applications. His research aims to develop safe learning methods within self-adaptive systems, enabling robots to autonomously adapt to changing conditions while maintaining safety guarantees.
He is a member of the Trustworthy Adaptive and Autonomous Systems & Processes (TASP) Research Team at the University of York.
CfAA Fellows
Dr Osborne is a Visiting Fellow in the Centre for Assuring Autonomy, where he is currently researching software safety assurance in AI and the safety of decision-making in autonomy. Prior to joining the Centre, he worked for many years as an independent consultant in assuring the safety of software in complex, socio-technical systems.
Dr Osborne's PhD research focuses on identifying, characterising and eradicating the impediments to the adoption of recognised good practice in software safety assurance.