Confirmed Workshops

We are pleased to advise that the following workshops are confirmed for the 2024 programme, and we would like to thank the organisers for the time they took to put forward and manage these sessions.

No. | Workshop Title | Runtime (hours) | Workshop Organiser
1 | The 17th International Workshop on Artificial Intelligence and Cybersecurity (AICS2024) | 3 | Ian Welch & Tao Ban
2 | AI Education | 3 | Michael Watts, Ranpreet Kaur & Akbar Ghobakhlou
3 | Neural Models of Infants and Child Development | 3 | Alistair Knott, Annette Henderson & Florian Bednarski
4 | Privacy Compliant Health Data As A Service For AI Development | 4.5 | Mufti Mahmud & Antti Airola

The 17th International Workshop on Artificial Intelligence and Cybersecurity (AICS2024)

Date: 3 December

Part 1: Session 1D

Time: 11.00 - 12.30

Location: WG126

Part 2: Session 2D

Time: 13.30 - 15.00

Location: WG126

Led by Dr. Ian Welch (Victoria University of Wellington) and Dr. Tao Ban (National Institute of Information and Communications Technology)

The purpose of the 17th International Workshop on Artificial Intelligence and Cybersecurity (AICS2024) is to raise awareness of cybersecurity, promote the potential of industrial applications, and give young researchers exposure to the main issues related to the topic and to ongoing work in this area. AICS2024 will provide a forum for researchers, security experts, engineers, and research students to demonstrate new technologies, present their latest research, share ideas, and discuss future directions in the fields of artificial intelligence and cybersecurity.

Speaker 1: 13.30 - 14.00 Kazushi Ikeda (NAIST, Japan) - Theoretical background of deep learning

Speaker 2: 14.00 - 14.30 Richard Kenyon (Datapay AI Labs, New Zealand) - Securing AI Chatbots in Enterprise Applications: Cybersecurity challenges for GenAI applications in compliance-driven industries like Payroll, Tax, and Employment

AI Education

Date: 3 December

Part 1: Session 1C

Time: 11.00 - 12.30

Location: WG308

Part 2: Session 2C

Time: 13.30 - 15.00

Location: WG308

Led by Dr. Michael Watts and Ranpreet Kaur (Media Design School), and Dr. Akbar Ghobakhlou (Auckland University of Technology)

There has been an explosion in the applications of Artificial Intelligence (AI). While Large Language Models such as ChatGPT have garnered much of the attention, other AI technologies have also found wide application, such as predictive keyboards on mobile devices and facial recognition systems in supermarkets. Some technology venture capitalists have reported that 80% of the funding pitches they receive involve AI. Many business owners believe that AI is going to put them out of business unless they adapt to the technology. Others are desperately searching for ways to get onto the AI bandwagon. This surge in interest in AI has led to a worldwide shortage of AI engineers. Furthermore, the inappropriate application of AI, whether through the use of biased data or unethical applications, has also led to social and economic fallout.
The increased public awareness of AI technologies has also led to a proliferation of media commentary, of varying degrees of competence, as well as to governmental regulation. Some students have taken to using AI tools to assist in their assignments, while others have changed their career pathways due to a perception that AI is going to destroy their future job prospects.
There is, therefore, a need for education about AI. This need spans nearly all levels of education, from primary school through to postgraduate study. At primary and secondary level, it is needed so that people enter the working world with a basic knowledge of AI and how it affects their lives; at tertiary undergraduate and postgraduate level, so that we have a steady supply of engineers and developers who can utilise AI in an appropriate and ethical manner.
This all raises a fundamental question: How is this education being done?
This special session is intended to attract papers dealing with all aspects of AI education. Topics of interest include, but are not limited to:

  • Incorporating AI into teaching curricula at all levels of education
  • The design and implementation of AI-specialist teaching curricula
  • Technologies used to teach AI
  • Teaching the ethics of AI
  • Policy making around AI education
  • The teaching of specialist topics within AI

Paper 1: 11.00 - 11.15 Zhenyu Xu, Victor S. Sheng, Kun Zhang - Logic Error Localization in Student Programming Assignments Using Pseudocode and Graph Neural Networks

Paper 2: 11.15 - 11.30 Kirill Krinkin, Tatiana Berlenko - Flipped University: LLM-Assisted Lifelong Learning Environment

Speaker 1: 11.30 - 12.00 Michael Witbrock (University of Auckland, New Zealand) - AI and the End of Useful Skills

Speaker 2: 12.00 - 12.30 Irwin King (The Chinese University of Hong Kong, Hong Kong) - The Critical Role of AI in Learning Analytics and Assessment in the Future of Education

Speaker 3: 13.30 - 14.00 Vithya Yogarajan (University of Auckland) - Embracing AI in Tertiary Teaching

Speaker 4: 14.00 - 14.30 Jonathan Chan (King Mongkut’s University of Technology, Thailand) - Balancing AI and Human Interaction in Education

Speaker 5: 14.30 - 15.00 Mufti Mahmud (King Fahd University of Petroleum and Minerals, Saudi Arabia) - AI in Provisioning Personalised Learning Through Engagement Detection

Neural Models of Infants and Child Development

Date: 4 December

Part 1: Session 4D

Time: 11.00 - 12.30

Location: WG126

Part 2: Session 5D

Time: 13.30 - 15.00

Location: WG126

Led by Prof. Alistair Knott (Victoria University of Wellington), Prof. Annette Henderson (University of Auckland), and Florian Bednarski (University of Auckland)

The dramatic advances in neural AI methods we have seen in the last few years are loosely based on the brain's distributed mode of computation, but are distinctly unhumanlike in the way they develop. LLMs, for instance, begin learning directly on vast quantities of unembodied mature adult language; it is only at a late stage that their learning is interactively shaped (by alignment) or becomes 'multimodal' (through interfaces with vision or action). By contrast, human infants' learning is fundamentally embodied: from birth, infants must learn to engage with the physical world by meaningfully deploying their sensory and motor apparatus (Smith and Gasser, 2005). Infants' learning is also fundamentally staged, beginning with the acquisition of basic sensorimotor concepts and abilities, along with conceptions of close caregivers, and building on these (Vygotsky, 1994). Infants' learning is also interactive, driven by targeted real-time input from caregivers (Bornstein et al., 2008), but equally self-guided, driven by infants' own curiosity and experiences (Oudeyer et al., 2007).
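
As a purely illustrative aside (not taken from BabyX or any of the speakers' systems), the short Python sketch below shows one simple form of the self-guided, curiosity-driven learning referred to above, in the spirit of Oudeyer et al. (2007): a toy agent keeps a forward model of what each action does and preferentially repeats the actions whose outcomes it currently predicts worst, so exploration is driven by its own prediction error. All quantities in the sketch are invented for illustration.

```python
# Toy sketch of curiosity-driven (self-guided) learning: the agent explores the
# actions whose outcomes its forward model currently predicts worst.
import numpy as np

rng = np.random.default_rng(0)
n_actions, obs_dim, lr = 4, 3, 0.2

true_effects = rng.normal(size=(n_actions, obs_dim))   # hidden world dynamics (unknown to the agent)
forward_model = np.zeros((n_actions, obs_dim))          # agent's learned forward model
recent_error = np.full(n_actions, np.inf)               # curiosity estimate per action

for step in range(300):
    # Self-guided choice: try the action whose outcome is currently hardest to predict.
    action = int(np.argmax(recent_error))
    outcome = true_effects[action] + 0.05 * rng.normal(size=obs_dim)

    # Intrinsic reward: the forward model's prediction error on this outcome.
    recent_error[action] = float(np.linalg.norm(outcome - forward_model[action]))

    # Learn from the surprising outcome.
    forward_model[action] += lr * (outcome - forward_model[action])

print("final prediction errors per action:", np.round(recent_error, 3))
```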

There is a growing awareness that computational models of infant development may offer ways of augmenting the current generation of high-performing AI models. The session we propose will bring together researchers working on neural models of infant cognitive development, focussing on embodied learning, learning through interaction, self-guided learning, and staged learning. Crucially, the session will also invite participation from developmental psychologists. The work of psychologists studying development in human infants and children is newly relevant to work in AI, and their voices are increasingly heard in discussions about how AI should progress (see e.g. Smith, 2023; Gopnik and Chiang, 2024).

Speaker 1: 11:00-11:30 Mark Sagar (University of Auckland) - An introduction to BabyX

Speaker 2: 11:30-12:00 Alistair Knott (Victoria University of Wellington) - Events and cognitive modes in BabyX

Speaker 3: 12:00-12:30 Florian Bednarski (University of Auckland) - Evaluating interactions with BabyX

Speaker 4: 13:30-14:30 Alison Gopnik (UC Berkeley) - Causal Learning as Empowerment: Infant contingency learning as a model for AI

Speaker 5: 14:30-15:00 Martin Takac (Comenius University, Bratislava) - Under the hood of BabyX: cognitive architecture, emotions, active inference

Privacy Compliant Health Data As A Service For AI Development

Date: 5 December

Part 1: Session 7C

Time: 11.00 - 12.30

Location: WG208

Part 2: Session 8C

Time: 13.30 - 15.00

Location: WG308

Part 3: Session 9C

Time: 15.30 - 17.00

Location: WG308

Led by Dr. Mufti Mahmud (King Fahd University of Petroleum and Minerals, Saudi Arabia) and Dr. Antti Airola (University of Turku, Finland)

Artificial intelligence (AI) enables data-driven innovation in health care. AI systems, which can process vast amounts of data quickly and in detail, show promise both as a tool for preventive health care and for clinical decision-making. However, the distributed storage of and limited access to health data form a barrier to innovation, as developing trustworthy AI systems requires large datasets for training and validation. Furthermore, the availability of anonymous datasets would increase the adoption of AI-powered tools by supporting health technology assessments and education. Secure, privacy-compliant data utilization is key to unlocking the full potential of AI and data analytics. In this project we have been developing a solution that enables analysts to use encryption-in-use technologies (secure multi-party computation, fully homomorphic encryption and federated learning) to run analytics and build better machine learning models by accessing more data. We have also been working on advancing current state-of-the-art data synthesis methods towards a more generalized approach to synthetic data generation, developing metrics for testing and validation, and designing protocols that enable synthetic data generation without access to real-world data (through multi-party computation). This is a combined effort from 20 partners in 10 European countries, funded by the European Commission under the Horizon Europe Programme.
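
As a purely illustrative aside, the Python sketch below shows the basic idea behind the federated learning component mentioned above: each site trains on its own records, and only model parameters, never raw data, are shared and averaged. It is a minimal sketch under toy assumptions and does not reflect the project's actual protocol, nor the multi-party computation or homomorphic encryption layers that protect the exchanged updates.

```python
# Minimal federated-averaging sketch on a toy linear-regression task.
# Each simulated "hospital" keeps its records local and shares only model
# parameters; the server averages them, weighted by local dataset size.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([0.5, -1.2, 2.0])                     # hidden "ground truth" model

def make_site(n):
    """Simulate one site's private dataset (never leaves the site)."""
    X = rng.normal(size=(n, 3))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

sites = [make_site(n) for n in (80, 120, 200)]

def local_update(w, X, y, lr=0.05, epochs=20):
    """Gradient descent on one site's local data only."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(3)
for round_ in range(10):
    # Each site trains locally, starting from the current global model.
    local_models = [local_update(w_global, X, y) for X, y in sites]
    # The server averages the returned parameters (weighted by dataset size).
    sizes = np.array([len(y) for _, y in sites])
    w_global = np.average(local_models, axis=0, weights=sizes)

print("federated estimate:", np.round(w_global, 3))
```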

The workshop will introduce the audience to the project and its approaches to achieving a next-generation healthcare ecosystem in Europe through secure, privacy-preserving AI models as a service and synthetic healthcare data as a service.
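
Again purely for illustration, the following sketch shows the general workflow of synthetic data generation with a simple fidelity check: fit a generative model to (simulated) 'real' records, sample synthetic records, and compare summary statistics. The project's synthesis methods and validation metrics are considerably more sophisticated; all variables and numbers here are invented.

```python
# Toy illustration of synthetic data generation plus a simple quality check.
import numpy as np

rng = np.random.default_rng(7)

# Simulated "real" health records: age, systolic BP, cholesterol.
real = rng.multivariate_normal(
    mean=[55.0, 130.0, 5.2],
    cov=[[90.0, 30.0, 1.0],
         [30.0, 160.0, 2.0],
         [1.0, 2.0, 0.8]],
    size=500,
)

# Fit a multivariate Gaussian to the real data and sample synthetic records.
mu, cov = real.mean(axis=0), np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mu, cov, size=500)

# Toy fidelity metric: relative gap in per-feature means and standard deviations.
mean_gap = np.abs(synthetic.mean(axis=0) - real.mean(axis=0)) / real.mean(axis=0)
std_gap = np.abs(synthetic.std(axis=0) - real.std(axis=0)) / real.std(axis=0)
print("relative mean gap:", np.round(mean_gap, 3))
print("relative std gap: ", np.round(std_gap, 3))
```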

Speaker 1: 11:00 – 11:05 Mufti Mahmud (King Fahd University of Petroleum and Minerals, Saudi Arabia) - Introduction to the ‘Privacy Compliant Health Data As A Service For AI Development’ Technologies Session 1

Speaker 2: 11:05 – 11:20 Antti Airola (Assoc. Prof., University of Turku, Finland) - Introduction to the PHASE IV AI project

Speaker 3: 11:20 – 11:40 Erkay Savas (Sabancı University, Türkiye) - Federated Learning over Encrypted Data

Speaker 4: 11:40 – 12:00 Artur Rocha (INESC TEC, Portugal) - Data privacy methods and tools

Speaker 5: 12:00 – 12:20 Mariya Georgieva (Tune Insight, Switzerland) - Balancing Data Privacy and Utility: Introduction to Privacy-Enhancing Technologies (PETs)

Speaker 6: 13:30 – 13:40 Antti Airola (Assoc. Prof., University of Turku, Finland) - Summary of the Morning Session and Introduction to the Afternoon Session

Speaker 7: 13:40 – 14:00 Irfan Khan (Turku University of Applied Sciences, Finland) - Synthetic healthcare data generation

Speaker 8: 14:00 – 14:20 Tunc Asuroglu (VTT, Finland) - Synthetic data, Data quality measures

Speaker 9: 14:20 – 14:40 Ibrahim Sabra (University of Vienna, Austria) - AI-generated Synthetic Data: Legal Standing and Ethical Implications

Speaker 10: 15:30 – 15:55 David Brown (Nottingham Trent University, UK) - Prediction of people at high risk of lung cancer from EHR

Speaker 11: 15:55 – 16:20 Hélder Oliveira (INESC TEC, FCUP, Portugal) - Accurate Image-Based Lung Cancer Characterization Using Machine Learning

Speaker 12: 16:20 – 16:45 Christos Chatzichristos (Post-doctoral researcher, KU Leuven, Belgium) - AI-based prediction of lymph node dissection