In this workshop we will explore the topics of psychological safety and ethics in AI. Mindful AI is an approach to developing and using artificial intelligence that prioritizes ethics, sustainability, and human well-being. It is important that individuals feel safe both to challenge autonomous decisions made by AI and to provide truthful information for training and developing AI in safety-related activities.
We will introduce this topic and gather feedback from participants on their perspectives of this emerging area.
We'll be joined by Yaji Sripada, Senior Lecturer in the Department of Computing Science at Aberdeen University, speaking on 'Interpretability and Explainability of AI/ML Models'. Most modern AI/ML models are black boxes, raising concerns about their trustworthiness in operational use. Trustworthiness is one of the key components of Fairness, Ethics, Accountability and Transparency (FEAT), the essential properties of Mindful AI (also known as Responsible AI or Safe AI). The talk will discuss the challenges involved in explaining black-box AI/ML models and some of the current solutions available.
Other confirmed speakers include Dr Olivia Angel, Technical Manager of Robotics & AI at Sellafield and Founder of Presence Engineering, who will speak on 'Mindful AI: Attention, Awareness and Intelligence'; and Carl Dalby, Group Head of AI & Digital at the Nuclear Decommissioning Authority (NDA), who will speak on 'AI and Mindfulness: Designing Human‑Centred AI'.
When: Wednesday 22nd April, 10:00-12:30 (UK time)
Where: MS Teams (access details will be provided in your registration confirmation email)
Registration Deadline: Wednesday 22nd April, 10:00