Lectures


Lecture 1/3 “Just a Stochastic Parrot? How LLMs Learn, Reason, and Self-Improve”

Video of the ACDL 2025 Lecture “Just a Stochastic Parrot? How LLMs Learn, Reason, and Self-Improve”, Prof. Sanjeev Arora, Princeton University & Institute for Advanced Study – IAS, USA.

Lecture 2/3 “A Skill-Based View of LLM Capabilities and Their Emergence”

Video of the ACDL 2025 Lecture “A Skill-Based View of LLM Capabilities and Their Emergence”, Prof. Sanjeev Arora, Princeton University & Institute for Advanced Study – IAS, USA.

Lecture 3/3 “LLM Metacognition: Eliciting and Leveraging LLMs’ ‘Thinking about Thinking’”

Video of the ACDL 2025 Lecture “LLM Metacognition: Eliciting and Leveraging LLMs’ ‘Thinking about Thinking’”, Prof. Sanjeev Arora, Princeton University & Institute for Advanced Study – IAS, USA.




Lecture 1/3 “Diffusion models: Intuition and Perspectives”

Abstract TBA

Lecture 2/3 “Diffusion models: Guidance, Distillation and Advanced Topics”

Abstract TBA

Lecture 3/3 “How to Train Neural Nets Effectively”

Abstract TBA



Lecture 1/3 “Fine-Tuning Language Models”

Abstract TBA

Tentatively, my lectures will be on the post-training of large language models.

Lecture 2/3 “Reinforcement Learning for Language Models”

Abstract TBA

Lecture 3/3 “Applications: Alignment for Safety and Reasoning for Scientific Discovery”

Abstract TBA



Lecture 1/3 “Large Language Models I (Architecture)”

Abstract TBA

Lecture 2/3 “Large Language Models II (Systems)”

Abstract TBA


Lecture 3/3 “Large Language Models III (Scaling, Post-Training)”

Abstract TBA



Lecture 1/3 “Reasoning and Planning Abilities of Large Language Models”

Abstract TBA

Lecture 2/3 “Reasoning and Planning Abilities of Large Language Models”

Abstract TBA

Lecture 3/3 “Reasoning and Planning Abilities of Large Language Models”

Abstract TBA



Lecture 1/3 “Multi-Agent Deep Reinforcement Learning”

Abstract TBA

Lecture 2/3 “Cooperative AI”

Abstract TBA

Lecture 3/3 “Generative Agents”

Abstract TBA



Lecture 1/5 “Scalable Post-Training Optimization for Large Language Models”

Abstract TBA

Lecture 2/5 “Scaling Strategic Reasoning for Large Language Models”

Abstract TBA

Lecture 3/5 “The Science of Evaluation for Large Language Models”

Abstract TBA

Lecture 4/5 “Accelerating Transformer Language Models on GPUs”

Abstract TBA

Lecture 5/5 “Real-time Simultaneous Translation of Unbounded Streaming Speech”

Abstract TBA



Lecture “Introduction to Data Analytics for Networks – a Historical Perspective and Major Advances”

Abstract TBA



Lecture 1/3 “Learning on Graphs: The Essentials”

Abstract TBA

Lecture 2/3 “Challenges of using Graph Neural Networks”

Abstract TBA

Lecture 3/3 “Graph Reasoning with Large Language Models”

Abstract TBA



Lecture: “Enterprise AI in Practice: Challenges, Requirements, and Use Cases”

Abstract TBA



Lecture 1/2 “On some Challenges of Embeddings Theory”

Abstract: Embeddings are a ubiquitous topic, now used across artificial intelligence, from knowledge graphs to LLMs. In particular, an active area of research in computer science is manifold learning: finding lower-dimensional manifold representations that allow us to learn geometry from data and thereby produce better-quality curated datasets. Yet this usually requires accepting a set of assumptions about the geometry of the feature space.
In this talk we focus on the main challenges of embedding theory, as well as on some foundations that help foster its explainability and interpretability. To this end we cover several interrelated aspects: the theory of finite metric spaces, applications of embedding theory to knowledge graphs, and learning data geometry and data curation for large language models. The talk is based on the following works:
Singh, LT, et al., PLOS ONE (2023)
Singh, LT, et al., EPJ Data Science 13, 12 (2024)
LT, Kathuria, Complex Networks Proceedings (2024)
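
As a toy illustration of the finite-metric-space aspect mentioned in the abstract (the 4-cycle example and all names below are ours, not from the lecture), the following sketch measures the multiplicative distortion incurred when a small finite metric space is embedded into the real line:

```python
from itertools import combinations

# Toy finite metric space: shortest-path distances on the 4-cycle C4.
POINTS = [0, 1, 2, 3]

def d(u, v):
    """Shortest-path distance between vertices of the 4-cycle."""
    diff = abs(u - v)
    return min(diff, 4 - diff)

# A candidate embedding of C4 into the real line ("unrolling" the cycle).
f = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}

def distortion(points, metric, embedding):
    """Multiplicative distortion: maximum expansion times maximum contraction."""
    pairs = list(combinations(points, 2))
    expansion = max(abs(embedding[u] - embedding[v]) / metric(u, v) for u, v in pairs)
    contraction = max(metric(u, v) / abs(embedding[u] - embedding[v]) for u, v in pairs)
    return expansion * contraction

print(distortion(POINTS, d, f))  # 3.0 — the pair (0, 3) is stretched from distance 1 to 3
```

The distortion of 3 here reflects the well-known fact that cycles cannot be embedded into a line with low distortion, one of the structural obstacles that embedding theory has to contend with.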

Lecture 2/2 “On some Challenges of Embeddings Theory”

Abstract: same as Lecture 1/2 above.




Invited Speakers

TBA


Tutorial Speakers




Tutorial

