Tutorials

No. | Tutorial Title | Organisers
1 | Exploring Recent Advances in Deep Learning Architectures for Image Recognition | Heyang (Thomas) Li (University of Canterbury)
2 | Preference-Based Combinatorial Optimization | Malek Mouhoub (University of Regina)
3 | Quantum Metaheuristics: Applications to Automatic Data Clustering | Siddhartha Bhattacharyya & Jan Platos (VSB Technical University of Ostrava, Czech Republic)
4 | Tackling Bias in Large Language Models | Vithya Yogarajan & Gillian Dobbie (University of Auckland)
5 | Exploring User Experience in VR and Immersive Environments Using the 4E/MoBI Approach | Francisco Parada (Universidad Diego Portales) & Claudio Aguayo (Auckland University of Technology)
6 | An Integrated Toolbox for Creating Neuromorphic Edge Applications | Lars Niedermeier (University of California, Irvine (UCI))
7 | Machine Learning for Streaming Data | Guilherme Weigert Cassales, Yibin Sun & Heitor Gomes (University of Waikato)
8 | Uncertainty Quantification in Neural Networks | Amir H Gandomi & Hassan Gharoun (University of Technology Sydney)
9 | Collaborative Learning and Optimization | Kai Qin (Swinburne University of Technology)
11 | Introduction to Spiking Neural Networks in Python: Theory, Implementation, and Applications | Balkaran Singh, Sugam Budhraja, Zohreh Doborjeh & Edmund Lai (Auckland University of Technology)

Tutorial 2: Preference-Based Combinatorial Optimization

Tutorial Organiser:

Malek Mouhoub (University of Regina)

Abstract:

Combinatorial problems are those applications where we look for a good or best consistent scenario that satisfies a set of constraints while optimizing some objectives. These objectives include the user's qualitative and quantitative preferences, reflecting desires and choices that should be satisfied as much as possible. Moreover, constraints and objectives might not be explicitly defined and often come with uncertainty due to lack of knowledge, missing information, or variability caused by events that are under nature's control. Finally, in some applications, such as timetabling, urban planning, and robot motion planning, these constraints and objectives can be temporal, spatial, or both; in the latter case, we are dealing with entities occupying a given position in time and space.

In this tutorial, we will show how to overcome the challenges that arise when solving a combinatorial problem under the user's preferences. The approach we adopt is based on the Constraint Satisfaction Problem (CSP) paradigm and its variants. Solving techniques include both exact methods and metaheuristics. Exact methods include the backtracking algorithm and its variants; constraint propagation and variable/value ordering heuristics are covered, showing how they can improve the performance of backtracking in practice. Metaheuristics include Stochastic Local Search (SLS) methods and nature-inspired techniques. We will consider cases where constraint problems occur in dynamic environments, as well as situations where some of the relevant information is incomplete or uncertain. We will also review extensions of CSPs to quantitative preferences (soft constraints) and conditional qualitative preferences. Finally, to deal with requirements and desires that are not explicitly defined, we will explore different constraint acquisition and preference learning algorithms.
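To give a feel for the exact methods mentioned above, here is a minimal, self-contained sketch of backtracking search with a consistency check and a variable-ordering hook, applied to a toy graph-colouring CSP. The instance, variable names, and "not equal" constraints are illustrative only and are not taken from the tutorial material.

```python
# A minimal backtracking search for a binary CSP with a variable-ordering hook.
# The toy instance (graph 3-colouring with "not equal" constraints) is
# illustrative only; it stands in for a real scheduling or planning model.

def consistent(var, value, assignment, constraints):
    """True if var=value violates no constraint with already-assigned variables."""
    for (x, y), ok in constraints:
        if x == var and y in assignment and not ok(value, assignment[y]):
            return False
        if y == var and x in assignment and not ok(assignment[x], value):
            return False
    return True

def backtrack(domains, constraints, assignment=None):
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(domains):
        return dict(assignment)                            # complete, consistent scenario
    unassigned = [v for v in domains if v not in assignment]
    var = min(unassigned, key=lambda v: len(domains[v]))   # smallest-domain-first hook
    for value in sorted(domains[var]):                     # value-ordering hook
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            result = backtrack(domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]                            # undo and backtrack
    return None                                            # no consistent scenario

# Toy instance: colour a 4-node cycle with a chord using 3 colours.
domains = {v: {"red", "green", "blue"} for v in "ABCD"}
neq = lambda a, b: a != b
constraints = [((x, y), neq) for x, y in [("A", "B"), ("B", "C"),
                                          ("C", "D"), ("D", "A"), ("A", "C")]]
print(backtrack(domains, constraints))
```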

 

Tutorial 8: Uncertainty Quantification in Neural Networks

Tutorial Organiser:

Amir H Gandomi & Hassan Gharoun (University of Technology Sydney)

Abstract:

Understanding and quantifying uncertainty in neural networks is crucial for developing reliable and robust AI systems. In machine learning, uncertainty arises from various sources, including data noise (aleatoric uncertainty) and model limitations (epistemic uncertainty). Uncertainty quantification involves identifying and measuring these uncertainties to enhance the predictive confidence and decision-making capabilities of neural networks. A prevalent misunderstanding is that the probability values output by neural networks, typically normalized using the softmax function, accurately measure model confidence; these values might look like class probabilities but often do not reflect the model's true certainty. This tutorial will provide an in-depth definition of uncertainty quantification, discuss its importance, and explore methods such as Bayesian neural networks, Monte Carlo dropout, and ensemble techniques. It will cover the theoretical foundations, practical implementations, and applications of these methods, highlighting their significance in improving model reliability and performance. Participants will learn how to incorporate uncertainty estimates into neural networks and use them for uncertainty-aware decision-making. By the end of the tutorial, attendees will have a comprehensive understanding of current techniques and practical insights for implementing uncertainty quantification in their own neural network projects.
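As a concrete taste of one of the methods listed above, the following is a minimal Monte Carlo dropout sketch in PyTorch. The architecture, dropout rate, and number of stochastic forward passes are arbitrary placeholders rather than the tutorial's own code.

```python
# Monte Carlo dropout: keep dropout active at inference and average many
# stochastic forward passes; the spread across passes estimates epistemic
# uncertainty. Architecture and sample counts below are arbitrary.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 3),                       # 3-class toy problem
)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    model.train()                           # keeps Dropout stochastic at test time
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    model.eval()
    mean = probs.mean(dim=0)                # averaged predictive distribution
    std = probs.std(dim=0)                  # per-class spread ~ model uncertainty
    return mean, std

x = torch.randn(5, 20)                      # 5 unlabelled inputs
mean, std = mc_dropout_predict(model, x)
for i, (m, s) in enumerate(zip(mean, std)):
    print(f"input {i}: class {m.argmax().item()}, "
          f"confidence {m.max().item():.2f} +/- {s[m.argmax()].item():.2f}")
```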

 

Tutorial 9: Collaborative Learning and Optimization

Tutorial Organiser:

Kai Qin (Swinburne University of Technology)

Abstract:

Machine learning (ML) and optimization are two essential missions that Computational Intelligence (CI) aims to address. Accordingly, many CI-based ML and optimization techniques have been proposed, with deep neural networks (for ML) and evolutionary algorithms (for optimization) as the most well-known representatives. Intrinsically, CI-based ML and optimization are closely related. On the one hand, CI-based ML consists of various model-centric or data-centric optimization tasks. On the other hand, CI-based optimization is often formulated as ML-assisted search problems. In recent years, a new research frontline has emerged in CI, namely Collaborative Learning and Optimization (COLO), which studies how to synergize CI-based ML and optimization techniques, while harnessing unprecedented computing power (e.g., via supercomputers), to produce more powerful ML and optimization techniques for solving challenging problems.

This tutorial introduces this newly emerging research direction. Specifically, we will first introduce CI, CI-based ML and optimization techniques, and their relationships. We will then describe COLO from three aspects: how to use ML techniques to assist optimization (Learn4Opt), how to leverage optimization techniques to facilitate ML (Opt4Learn), and how to synergize ML and optimization techniques to deal with real-world problems in which ML and optimization are two indispensable and interwoven tasks (LearnOpt). The most representative research hotspot in each of these three aspects, namely automated construction of deep neural networks, data-driven evolutionary optimization, and predictive optimization, will be discussed in detail.
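One concrete Learn4Opt pattern mentioned above is data-driven (surrogate-assisted) evolutionary optimization, where a cheap learned model screens candidate solutions so that only the most promising ones are evaluated with the expensive objective. The sketch below is a purely illustrative toy, with a k-nearest-neighbour surrogate and an arbitrary objective and budget; it is not the organiser's implementation.

```python
# Surrogate-assisted evolutionary optimization (a simple Learn4Opt pattern):
# an inexpensive learned model pre-screens offspring so the expensive
# objective is only called on the most promising candidates.
# Objective, budget, and population sizes are arbitrary toy choices.
import numpy as np

rng = np.random.default_rng(0)

def expensive_objective(x):                  # stands in for a costly simulation
    return np.sum((x - 0.5) ** 2)

def knn_surrogate(archive_x, archive_y, x, k=5):
    """Predict fitness as the mean of the k nearest evaluated points."""
    d = np.linalg.norm(archive_x - x, axis=1)
    return archive_y[np.argsort(d)[:k]].mean()

dim, mu, lam, generations = 10, 5, 30, 40
parents = rng.random((mu, dim))
archive_x = parents.copy()
archive_y = np.array([expensive_objective(p) for p in parents])

for g in range(generations):
    # Generate many offspring by Gaussian mutation of random parents.
    offspring = parents[rng.integers(mu, size=lam)] + 0.1 * rng.normal(size=(lam, dim))
    # Surrogate screening: keep only the mu offspring the model likes best.
    scores = np.array([knn_surrogate(archive_x, archive_y, o) for o in offspring])
    chosen = offspring[np.argsort(scores)[:mu]]
    # Real (expensive) evaluation only for the screened candidates.
    true_fitness = np.array([expensive_objective(c) for c in chosen])
    archive_x = np.vstack([archive_x, chosen])
    archive_y = np.concatenate([archive_y, true_fitness])
    # Comma-style survivor selection among the freshly evaluated offspring.
    parents = chosen[np.argsort(true_fitness)]

print("best objective found:", archive_y.min())
```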

The organizer is the co-founder of the COLO research direction and has given talks on this topic, as tutorials, invited talks, and keynotes, at various international forums, including PRICAI 2021, the 2022 IEEE CIS Summer School on Deep Learning and Computational Intelligence: Theory and Applications, IJCNN 2023, and IJCNN 2024.

 

Tutorial 4: Tackling Bias in Large Language Models

Tutorial Organiser:

Vithya Yogarajan & Gillian Dobbie (University of Auckland)

Abstract:

Large language models (LLMs) are powerful decision-making tools widely adopted in healthcare, finance, and transportation. Embracing the opportunities and innovations of LLMs is inevitable. However, LLMs inherit stereotypes, misrepresentations, discrimination, and societal biases from various sources, resulting in concerns about equality, diversity, and fairness.

The tutorial provides an overview of bias in LLMs: what it is, how it is detected and measured, and methods for mitigating it. It incorporates real-world examples from New Zealand, where Māori are the indigenous population and are underrepresented. After describing bias and its sources in LLM development pipelines, the tutorial delves into current methods for detecting bias and the evaluation metrics recently introduced for measuring it, and covers the state of the art in mitigating bias in LLMs. Since the area is in its infancy, the tutorial concludes with many open research questions. The examples give participants the opportunity to explore the introduced methods through hands-on exercises.
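As a flavour of what a hands-on bias-detection exercise can look like, the sketch below probes a masked language model with a single sentence template and compares the scores assigned to different group terms. The model name, template, and target words are placeholders, and this is not one of the tutorial's exercises; real bias benchmarks use curated template sets and proper statistical aggregation.

```python
# A crude template-based bias probe: compare the scores a masked language
# model assigns to different group terms in the same sentence frame.
# Model, template, and target words are placeholders for illustration only.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

template = "The [MASK] was hired as the new engineer."
targets = ["man", "woman"]

for result in fill(template, targets=targets):
    print(f'{result["token_str"]:>8s}  score={result["score"]:.4f}')
```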

 

Tutorial 3: Quantum Metaheuristics: Applications to Automatic Data Clustering

Tutorial Organiser:

Siddhartha Bhattacharyya & Jan Platos (VSB Technical University of Ostrava, Czech Republic)

Abstract:

Cluster analysis is a popular technique that aims to segregate a set of data points into groups called clusters, where the number of clusters is predefined. This requires a priori information about the number of clusters, which is often unavailable in real-time applications. Hence, traditional clustering techniques suffer from inappropriate choices of the number of clusters. This can be alleviated if the number of clusters in a dataset can be determined automatically, without recourse to any a priori information.

Automatic determination of the optimum number of clusters in a dataset is a challenging problem in the computer vision community, as it entails optimizing the number of clusters for a specific dataset. Different metaheuristics are widely used to solve such complex optimization problems. However, conventional metaheuristics have high time complexity and are unsuitable for real-time applications.

Lately, with the advent of the quantum computing paradigm, scientists have begun developing quantum metaheuristics, which have been found to be well suited to real-time applications due to their higher convergence speed.

In this tutorial, quantum computing principles and metaheuristic techniques are explored to design quantum metaheuristics that can compute the optimum number of clusters in a dataset in real time, obviating the need for human intervention.
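The following is a compact, purely illustrative sketch of the quantum-inspired idea: qubit-like probabilities decide which candidate cluster centres are active (and hence how many clusters there are), a clustering-validity index scores the result, and a rotation-style update pulls the probabilities toward the best solution found so far. The candidate pool, the silhouette index, and all parameters are assumptions for illustration, not the organisers' algorithm.

```python
# Quantum-inspired search for the number of clusters: qubit-like probabilities
# decide which candidate centres are "switched on", a validity index scores the
# induced clustering, and a rotation-style update moves the probabilities
# toward the best observed solution. Pool, index, and parameters are toy choices.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
X, _ = make_blobs(n_samples=300, centers=4, random_state=1)

pool = X[rng.choice(len(X), size=10, replace=False)]     # candidate centre pool
theta = np.full(len(pool), np.pi / 4)                    # equal |0>/|1> amplitudes

def measure(theta):
    """Collapse each qubit: P(centre k active) = sin^2(theta_k)."""
    return rng.random(len(theta)) < np.sin(theta) ** 2

def fitness(mask):
    if mask.sum() < 2:
        return -1.0                                      # need at least two clusters
    centres = pool[mask]
    labels = np.argmin(np.linalg.norm(X[:, None] - centres[None], axis=2), axis=1)
    if len(np.unique(labels)) < 2:
        return -1.0
    return silhouette_score(X, labels)

best_mask, best_fit = None, -np.inf
for generation in range(60):
    mask = measure(theta)
    fit = fitness(mask)
    if fit > best_fit:
        best_mask, best_fit = mask, fit
    # Rotation-gate style update: nudge each qubit toward the best mask's bit.
    theta = np.clip(theta + 0.05 * np.where(best_mask, 1, -1), 0.05, np.pi / 2 - 0.05)

print("estimated number of clusters:", int(best_mask.sum()),
      "silhouette:", round(best_fit, 3))
```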

 

Tutorial 5: Exploring User Experience in VR and Immersive Environments Using the 4E/MoBI Approach

Tutorial Organiser:

Francisco Parada (Universidad Diego Portales) & Claudio Aguayo (Auckland University of Technology)

Abstract:

This tutorial at the 31st International Conference on Neural Information Processing (ICONIP 2024) aims to explore new avenues in understanding user experience (UX) in virtual and immersive (Mixed and Extended Reality, i.e., XR) environments through the 4E/MoBI approach. Participants will delve into the integration of neural information processing within virtual reality (VR) and immersive scenarios. The 4E/MoBI method uses mobile EEG data-collection devices that allow on-site recording, framed within a 4E cognition approach. This session is tailored for scientists, researchers, and practitioners interested in enhancing user experience design through multimodal data analytics and computational neuroscience.

The primary objective is to provide attendees with a comprehensive understanding of how mobile brain/body imaging (MoBI) technologies (e.g., electroencephalography (EEG), electrocardiography (ECG), and oculography), implemented under the Embodied, Extended, Embedded, and Enacted (4E) Cognition perspective, can be employed to gather and analyse UX and neurobehavioral data in diverse VR and immersive environments. This will be achieved through theoretical discussions, live demonstrations, and hands-on practice, focusing on multimodal methodologies. The tutorial will enhance participants' skills in using MoBI technologies, in multimodal data collection and analysis, and in applying findings to improve the design and practice of immersive and virtual experiences across disciplines. The tutorial aligns with ICONIP 2024's focus on neural information processing theory and applications, particularly within human-centred computing and cognitive neuroscience. Attendees will gain valuable insights into cutting-edge neuroimaging techniques and tools, empowering them to apply innovative methodologies in their own research and practice. By the end of the session, participants will be equipped with practical knowledge to advance the field of immersive and virtual environments through enhanced user experience design and neural data integration.
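To make the analysis side slightly more concrete, the snippet below shows a small, hypothetical example of turning a single EEG channel into an alpha-band (8-12 Hz) power feature of the kind that can be related to UX measures. The sampling rate, band, and synthetic signal are placeholders; this is not the MoBI pipeline used in the tutorial.

```python
# Toy example: estimate alpha-band (8-12 Hz) power from one EEG channel using
# Welch's method. Sampling rate, epoch length, and the synthetic signal are
# placeholders; a real MoBI pipeline adds artefact handling and many channels.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 250                                      # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)                  # one 10-second "epoch"
eeg = 5e-6 * np.sin(2 * np.pi * 10 * t) + 2e-6 * rng.normal(size=t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)           # 2-second Welch segments
band = (freqs >= 8) & (freqs <= 12)                      # alpha band
alpha_power = psd[band].sum() * (freqs[1] - freqs[0])    # integrate PSD over band
print(f"alpha-band power: {alpha_power:.3e}")
```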

 

Tutorial 6: An Integrated Toolbox for Creating Neuromorphic Edge Applications

Tutorial Organiser:

Lars Niedermeier (University of California, Irvine (UCI))

Abstract:

Spiking Neural Networks (SNNs) and neuromorphic models are more efficient and more biologically realistic than the activation-function-based models typically used in deep neural networks, transformer models, and generative AI. SNNs have local learning rules, can learn from small data sets, and adapt through neuromodulation. However, although research has demonstrated their advantages, there are still few compelling practical applications, especially at the edge, where sensors and actuators need to be processed in a timely fashion. One reason for this might be that SNNs are much more challenging to understand, build, and operate due to their intrinsic properties; for instance, their mathematical foundation involves differential equations rather than basic activation functions. To address these challenges, we have developed CARLsim++, an integrated toolbox that enables fast and easy creation of neuromorphic applications for simulation or edge processing. It encapsulates the mathematical details and low-level C++ programming by providing a graphical user interface for users who do not have a background in software engineering but still want to create neuromorphic models. In this tutorial, we will demonstrate how to easily configure inputs and outputs to a physical robot using CARLsim++.
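To illustrate the point about differential equations, here is a minimal leaky integrate-and-fire (LIF) neuron integrated with forward-Euler steps in plain Python. This is a generic textbook model with arbitrary parameters, not the CARLsim++ API.

```python
# A leaky integrate-and-fire (LIF) neuron, the simplest ODE-based spiking model:
#   tau_m * dv/dt = -(v - v_rest) + R * I(t)
# integrated with forward-Euler steps. Parameters are generic textbook values,
# not CARLsim++ defaults, and this is not the CARLsim++ API.
import numpy as np

dt, T = 0.1, 200.0                     # time step and duration (ms)
tau_m, v_rest, v_reset, v_thresh, R = 10.0, -65.0, -70.0, -50.0, 10.0

steps = int(T / dt)
current = np.where(np.arange(steps) * dt > 50.0, 2.0, 0.0)   # step input at 50 ms

v = np.full(steps, v_rest)
spikes = []
for t in range(1, steps):
    dv = (-(v[t - 1] - v_rest) + R * current[t - 1]) / tau_m
    v[t] = v[t - 1] + dt * dv          # forward-Euler update of the membrane ODE
    if v[t] >= v_thresh:               # threshold crossing emits a spike
        spikes.append(t * dt)
        v[t] = v_reset                 # reset after the spike

print(f"{len(spikes)} spikes; first at {spikes[0]:.1f} ms" if spikes else "no spikes")
```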

Tutorial 7: Machine Learning for Streaming Data

Tutorial Organiser:

Guilherme Weigert Cassales & Yibin Sun (University of Waikato), Heitor Gomes (Victoria University of Wellington)

Abstract:

Machine Learning for Data Streams (MLDS) has been an important area of research since the late 1990s, and its usage in industry has grown significantly over the last few years. However, there is still a gap between cutting-edge research and the readily available tools, which makes it challenging for practitioners, including experienced data scientists, to implement and evaluate these methods in this highly complex domain.

Our tutorial aims to bridge this gap with a dual focus. We discuss research topics, such as concept drift and anomaly detection for streams, while providing practical demonstrations of their implementation and assessment using Python. By catering to both researchers and practitioners, the tutorial aims to empower them to design and conduct experiments and extend existing methodologies.
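A minimal example of the evaluation style used for streams is the prequential (test-then-train) loop below, run on a synthetic binary stream with an abrupt concept drift halfway through. The incremental learner, drift point, and sliding window are illustrative choices, not the tutorial's notebooks.

```python
# Prequential (test-then-train) evaluation on a synthetic binary stream with an
# abrupt concept drift halfway through: each instance is first used for testing,
# then for incremental training. Learner, drift point, and window are toy choices.
import numpy as np
from collections import deque
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
n, drift_at = 4000, 2000

def label(x, drifted):
    # Concept before drift: class depends on feature 0; after drift: on feature 1.
    return int(x[1] > 0.5) if drifted else int(x[0] > 0.5)

model = SGDClassifier()
window = deque(maxlen=200)                   # sliding accuracy window
initialized = False

for i in range(n):
    x = rng.random(5).reshape(1, -1)
    y = label(x[0], drifted=(i >= drift_at))
    if initialized:
        window.append(int(model.predict(x)[0] == y))      # test on the instance first
        model.partial_fit(x, [y])                         # then learn from it
    else:
        model.partial_fit(x, [y], classes=[0, 1])         # first call fixes the label set
        initialized = True
    if (i + 1) % 500 == 0 and window:
        print(f"instance {i + 1:5d}: windowed accuracy = {np.mean(window):.2f}")
```

Accuracy typically climbs, drops sharply just after the drift point, and then recovers as the incremental model adapts to the new concept.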

 

Tutorial 11: Introduction to Spiking Neural Networks in Python: Theory, Implementation, and Applications

Tutorial Organiser:

Balkaran Singh, Sugam Budhraja, Zohreh Doborjeh & Edmund Lai (Auckland University of Technology)

Abstract:

Spiking Neural Networks (SNNs) represent the third generation of artificial neural networks, incorporating time as a critical element of computation and closely mimicking the functioning of biological neural networks. This tutorial will provide a comprehensive introduction to SNNs, covering the fundamental concepts, architectures, learning mechanisms, and their advantages over traditional neural networks. We will guide participants through Python implementations, focusing on optimizing performance using GPUs, and introduce the NeucubePy library for SNN simulations. Part two of the tutorial will feature practical demonstrations and case studies, including applications to multimodal LYRIKS data, showcasing the versatility and potential of SNNs in handling complex, multimodal datasets.
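As a tiny, generic illustration of how real-valued features are typically fed into an SNN, the snippet below performs rate (Poisson) encoding, turning each feature into a spike train whose firing rate is proportional to its value. It does not use or assume the NeucubePy API.

```python
# Rate (Poisson) coding: turn real-valued features into spike trains whose
# firing probability per time step is proportional to the feature value.
# A generic encoding step for SNN inputs; not the NeucubePy API.
import numpy as np

rng = np.random.default_rng(0)

def poisson_encode(features, n_steps=100, max_rate=0.2):
    """features in [0, 1] -> boolean spike array of shape (n_steps, n_features)."""
    rates = np.clip(features, 0.0, 1.0) * max_rate        # spike probability per step
    return rng.random((n_steps, features.size)) < rates

features = np.array([0.05, 0.4, 0.9])                     # e.g. normalised sensor values
spikes = poisson_encode(features)
print("spike counts per feature:", spikes.sum(axis=0))    # higher value -> more spikes
```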

 

Tutorial 1: Exploring Recent Advances in Deep Learning Architectures for Image Recognition

Tutorial Organiser:

Heyang (Thomas) Li (University of Canterbury)

Abstract:
This tutorial aims to provide an in-depth exploration of recent advances in deep learning architectures for image recognition. We will cover state-of-the-art techniques, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformer-based models, highlighting their applications and performance in various domains. Attendees will gain practical insights into implementing and fine-tuning these architectures for real-world image recognition tasks.