Lectures


Lecture 1/3 “Just a stochastic parrot? How LLMs Learn, Reason, and Self-Improve”

Abstract TBA

Lecture 2/3 “A Skill-based view of LLM capabilities and their emergence”

Abstract TBA

Lecture 3/3 “LLM Metacognition: Eliciting and Leveraging LLMs’ ‘Thinking about Thinking’”

Abstract TBA



Lecture 1/3 “Diffusion models: Intuition and Perspectives”

Abstract TBA

Lecture 2/3 “Diffusion models: Guidance, Distillation and Advanced Topics”

Abstract TBA

Lecture 3/3 “How to Train Neural Nets Effectively”

Abstract TBA



Lecture 1/3 “Fine-Tuning Language Models”

Abstract TBA

Tentatively, these lectures will cover the post-training of large language models.

Lecture 2/3 “Reinforcement Learning for Language Models”

Abstract TBA

Lecture 3/3 “Applications: Alignment for Safety and Reasoning for Scientific Discovery”

Abstract TBA



Lecture 1/3 “Transformer Architectures”

Abstract TBA

Lecture 2/3 “Language Models Pre-Training and Scaling – I”

Abstract TBA

Lecture 3/3 “Language Models Pre-Training and Scaling – II”

Abstract TBA



Lecture 1/3

Abstract TBA

Lecture 2/3

Abstract TBA

Lecture 3/3

Abstract TBA



Lecture 1/3 “Reasoning and Planning Abilities of Large Language Models – I”

Abstract TBA

Lecture 2/3 “Reasoning and Planning Abilities of Large Language Models – II”

Abstract TBA

Lecture 3/3 “Reasoning and Planning Abilities of Large Language Models – III”

Abstract TBA



Lecture 1/3 “Multi-Agent Deep Reinforcement Learning”

Abstract TBA

Lecture 2/3 “Cooperative AI”

Abstract TBA

Lecture 3/3 “Generative Agents”

Abstract TBA



Lecture 1/4

Abstract TBA

Selected topics in LLM systems: accelerating training and inference on GPUs, and reasoning.

Lecture 2/4

Abstract TBA

Lecture 3/4

Abstract TBA

Lecture 4/4

Abstract TBA



Lecture

Abstract TBA



Lecture 1/3 “Learning on Graphs: The Essentials”

Abstract TBA

Lecture 2/3 “Challenges of using Graph Neural Networks”

Abstract TBA

Lecture 3/3 “Graph Reasoning with Large Language Models”

Abstract TBA



Lecture

Abstract TBA



Lecture “On some Challenges of Embeddings Theory”

Abstract: Embeddings are ubiquitous in artificial intelligence, with applications ranging from knowledge graphs to LLMs. In particular, an active area of research in computer science is manifold learning: finding lower-dimensional manifold representations of data in order to learn its geometry and, in turn, curate higher-quality datasets. Yet this usually requires accepting a set of assumptions about the geometry of the feature space.
In this talk we focus on the main challenges of embedding theory, as well as on some foundations that foster its explainability and interpretability. To this end, we cover several interrelated aspects of the theory of finite metric spaces, applications of embedding theory to knowledge graphs, and learning data geometry and data curation for large language models. The talk is based on the following works:
Singh, LT, et al., PLoS ONE (2023)
Singh, LT, et al., EPJ Data Science 13, 12 (2024)
LT, Kathuria, Complex Networks Proceedings (2024)
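As a small illustration of the kind of problem the abstract describes — recovering coordinates for a finite metric space from pairwise distances alone — the following sketch implements classical multidimensional scaling with NumPy. This is an illustrative example only, not material from the lectures or the cited papers; the function name and test points are invented for the demonstration.

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed a finite metric space, given as an n x n distance
    matrix D, into R^k via classical multidimensional scaling.
    Illustrative sketch only -- not code from the lectures."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # pick the top-k eigenpairs
    L = np.sqrt(np.maximum(w[idx], 0.0))  # clip tiny negative eigenvalues
    return V[:, idx] * L                  # n x k embedding coordinates

# Four points in the plane; their pairwise Euclidean distances
# should be reproduced exactly by the 2-D embedding.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D, k=2)
D_rec = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
```

When the distances are genuinely Euclidean, the embedding recovers them up to rotation and translation; when they are not, the clipped negative eigenvalues quantify exactly the kind of geometric-assumption mismatch the abstract refers to.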





Tutorials


(TBA)