Northern Lights Deep Learning Winter School 2025

Reference Code EUG2_T3_1_0047
Host Institution UiT – The Arctic University of Norway
Description

The NLDL Winter School consists of tutorials by experts in the field and is co-hosted by the Norwegian Artificial Intelligence Research Consortium (NORA) as part of the NORA Research School. See more on the NLDL Winter School's website.

NOTE: There are a total of 16 mobility scholarships available for EUGLOH students from outside UiT. UiT students should apply directly through the NLDL Winter School's website. EUGLOH rates for travel, accommodation and subsistence are covered for successful applicants. To ensure balanced participation across EUGLOH partners, no more than 4 participants from any one partner institution will be selected.

Period 6 Jan 2025 — 10 Jan 2025
Duration Up to 1 week in length
Mode Physical
Type of activity Summer school
Target groups PhD students
Location Tromsø, Norway
WP WP 3
ISCED Fields of Study
Contact Person Bror-Magnus Strand, Puneet Sharma
bror-magnus.s.strand@uit.no, puneet.sharma@uit.no
Content and Methodology

The NLDL Winter School will consist of tutorials from leading experts in the machine learning field. Below we present each tutorial together with its speakers.



Tutorial 1: Aleatoric and Epistemic Uncertainty in Statistics and Machine Learning (William Waegeman – Ghent University)



Without any doubt, the notion of uncertainty is of major importance in machine learning and constitutes a key element of modern machine learning methodology. In recent years it has gained importance due to the increasing relevance of machine learning for practical applications, many of which come with safety requirements. In this regard, new problems and challenges have been identified by machine learning scholars, many of which call for novel methodological developments. Indeed, while uncertainty has a long tradition in statistics, and a broad range of useful concepts for representing and quantifying uncertainty have been developed on the basis of probability theory, recent research has gone beyond traditional approaches and also leverages more general formalisms and uncertainty calculi.


This tutorial aims to provide an overview of uncertainty quantification in machine learning, a topic that has received increasing attention in the recent past. Starting with a recapitulation of classical statistical concepts, we specifically focus on novel approaches for distinguishing and representing so-called aleatoric and epistemic uncertainty. By the end of the tutorial, attendees will have a comprehensive understanding of the fundamental concepts and recent advances in this field.
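The aleatoric/epistemic distinction the tutorial covers can be illustrated with a common entropy-based decomposition: for an ensemble of probabilistic classifiers, the total predictive uncertainty (entropy of the averaged prediction) splits into an aleatoric part (the average entropy of the individual members) and an epistemic part (the difference, which measures ensemble disagreement). The sketch below is illustrative only and not part of the tutorial materials; the ensemble size, class count, and all probability values are invented for this example.

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy in nats; clipping avoids log(0)."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=axis)

# Hypothetical predictive distributions over 3 classes from a
# 4-member deep ensemble, for a single input.
ensemble_probs = np.array([
    [0.70, 0.20, 0.10],
    [0.65, 0.25, 0.10],
    [0.10, 0.20, 0.70],   # members 3 and 4 disagree with 1 and 2,
    [0.15, 0.15, 0.70],   # so epistemic uncertainty should be high
])

mean_probs = ensemble_probs.mean(axis=0)

total = entropy(mean_probs)                 # total predictive uncertainty
aleatoric = entropy(ensemble_probs).mean()  # expected entropy of members
epistemic = total - aleatoric               # disagreement (mutual information)

print(f"total={total:.3f} aleatoric={aleatoric:.3f} epistemic={epistemic:.3f}")
```

With an ensemble whose members agree, the epistemic term shrinks toward zero while the aleatoric term captures the irreducible noise in the labels.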



Tutorial 2: Responsible and Explainable Artificial Intelligence (Virginia Dignum – Umeå University)



This tutorial presents ongoing research in the field of Responsible Artificial Intelligence (RAI) by introducing core concepts and means of operationalising AI ethics. Attendees will—through both lectures and problem-based learning exercises—get experience in implementing and testing systems for policy compliance.


Here, we introduce the fundamental aspects of RAI from a holistic, multidisciplinary perspective. The course is structured to introduce attendees to the impact intelligent and autonomous systems have on societies and individuals, and to ongoing state-of-the-art discussions on the ethical, legal, and social aspects of AI. This introduction will be followed by a critical discussion of where accountability and responsibility lie for the ethical, legal, and social impacts of these systems, considering decision points throughout the development and deployment pipeline. With this knowledge in mind, students will be introduced to socio-technical approaches to the governance, monitoring and control of intelligent systems as tools for incorporating constraints into intelligent system design. Finally, participants apply these skills to a simulated responsible-design problem.
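The idea of incorporating explicit constraints into intelligent system design can be made concrete with a small sketch of a policy-compliance guard that filters an agent's candidate actions against declared rules. This fragment is purely illustrative and not part of the course materials; the `Policy` class, both example policies, and all action fields are invented for this sketch.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    """A named predicate over a proposed action (a plain dict here)."""
    name: str
    allows: Callable[[dict], bool]

def compliant_actions(candidates, policies):
    """Split candidate actions into allowed ones and policy violations."""
    allowed, violations = [], []
    for action in candidates:
        broken = [p.name for p in policies if not p.allows(action)]
        (violations if broken else allowed).append((action, broken))
    return allowed, violations

# Two invented example policies.
policies = [
    Policy("no-personal-data",
           lambda a: not a.get("uses_personal_data", False)),
    Policy("human-in-the-loop",
           lambda a: a.get("risk", "low") != "high" or a.get("human_review", False)),
]

candidates = [
    {"id": 1, "risk": "low"},
    {"id": 2, "risk": "high", "human_review": True},
    {"id": 3, "risk": "high", "uses_personal_data": True},
]

allowed, violations = compliant_actions(candidates, policies)
print([a["id"] for a, _ in allowed])  # → [1, 2]
```

In practice such guards are one small piece of a governance pipeline; the course's point is that constraints like these are design decisions with ethical, legal, and social consequences, not mere technical filters.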




Tutorial 3: Large Language Models Under the Hood (Andrey Kutuzov, Egil Rønningstad, David Samuel – University of Oslo)



Recently, large generative language models (LLMs) have not only become the backbone of natural language processing (NLP), but have also entered the daily life of those with no interest in deep learning. The names of models like GPT-4, Gemini, Llama, etc., often make headlines. Under the hood, however, these systems are not some apocalyptic "artificial intelligence": they are still statistical language models based on well-known techniques from machine learning, specifically multi-layer (thus "deep") artificial neural networks. This tutorial will introduce attendees to the foundations of deep learning and then move on to explaining how these approaches are employed to pre-train models with seemingly human-like conversational capabilities. The most recent research problems associated with generative LLMs will also be briefly presented.


The tutorial will include a hands-on session where participants will be given access to remote computing nodes and will directly interact with open-source large language models for Norwegian (from the NORA.LLM family, https://huggingface.co/norallm) using Python. By the end of the tutorial, students will have a solid understanding of the basics of modern generative language models, and will have acquired practical experience in loading, using and evaluating LLMs locally (as opposed to querying black-box API endpoints like ChatGPT). For the hands-on part, at least basic knowledge of Python is a prerequisite, although it will also be possible to form teams of students with varying levels of programming skills.
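To make the "statistical language model" framing concrete, the sketch below implements greedy next-token generation with a toy bigram model estimated from raw counts. It is illustrative only and not part of the hands-on materials; the corpus and every token in it are invented, and a real LLM replaces the count table with a deep neural network that predicts the next-token distribution from the entire preceding context.

```python
from collections import Counter, defaultdict

# Invented toy corpus; real LLMs are pre-trained on billions of tokens.
corpus = "the northern lights are seen in the north of norway".split()

# Estimate a bigram model: P(next | current) from raw counts.
bigrams = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    bigrams[cur][nxt] += 1

def next_token_probs(token):
    """Normalize counts into a probability distribution over next tokens."""
    counts = bigrams[token]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def generate(token, max_new_tokens=5):
    """Greedy decoding: repeatedly pick the most probable next token."""
    out = [token]
    for _ in range(max_new_tokens):
        probs = next_token_probs(out[-1])
        if not probs:          # no continuation seen in training data
            break
        out.append(max(probs, key=probs.get))
    return " ".join(out)

print(generate("the"))  # → "the northern lights are seen in"
```

Sampling from `next_token_probs` instead of taking the argmax gives the stochastic generation behaviour familiar from chat interfaces; temperature and other decoding parameters just reshape this distribution before sampling.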

Recognition Certificate of participation
Language English
Funding by EUGLOH budget Funded in part
Recruitment of Participants Qualitative Assessment
Number of open spots 16
Call for Applications
Closed
Last call
28 Oct 2024 — 6 Nov 2024