Lecturers

Each lecturer will give up to four lectures on one or more research topics.


Sanjeev Arora

Topics

Theoretical Machine Learning; Developing Mathematical and Conceptual Understanding to Enable Better and Safer AI.

Biography

Research Interest: Developing mathematical and conceptual understanding to enable better and safer AI.
Charles C. Fitzmorris Professor of Computer Science
Director, Princeton Language and Intelligence
PhD, UC Berkeley, 1994

Member, National Academy of Sciences.

Google Scholar page.

Wikipedia.

Sanjeev Arora works on theoretical computer science and theoretical machine learning. He received a bachelor’s degree in Mathematics with Computer Science from MIT in 1990 and a PhD in Computer Science from Berkeley in 1994. His dissertation on the PCP Theorem was a co-winner of the ACM Doctoral Dissertation prize. In 1994 he joined Princeton University as a faculty member, where he is now Charles C. Fitzmorris Professor of Computer Science. He is currently also Visiting Professor in Mathematics at the Institute for Advanced Study. He has received the Packard Fellowship (1997), the Simons Investigator Award (2012), the Gödel Prize (2001 and 2010), the ACM Prize in Computing (formerly the ACM-Infosys Foundation Award in the Computing Sciences) (2012), and the Fulkerson Prize in Discrete Mathematics (2012). He is a fellow of the American Academy of Arts and Sciences and a member of the National Academy of Sciences.

Lectures



Antoine Bosselut

Topics

Natural Language Processing, Artificial Intelligence, Machine Learning, Commonsense, Representation and Reasoning

Biography

I am an assistant professor in the School of Computer and Communication Sciences at EPFL. I lead the EPFL NLP group where we conduct research on natural language processing (NLP) systems that can model, represent, and reason about human and world knowledge.

Previously, I was a postdoctoral researcher at Stanford University in the SNAP and NLP groups working with Jure Leskovec and Chris Manning. I completed a PhD in CS at the University of Washington, where I worked with Yejin Choi, and a BEng in EE at McGill University.

Google Scholar page.

Lectures



Sander Dieleman
Google DeepMind, London, UK

Topics

Machine Learning, Deep Learning, Generative Models, Music Information Retrieval, Representation Learning.

Biography

I’m a research scientist at Google DeepMind in London, UK, where I work chiefly on generative modelling at scale. Before that, I was a PhD student at Ghent University in Belgium.

My PhD research topic was learning hierarchical representations of musical audio signals for classification and recommendation, with a focus on deep learning and feature learning. Since then I have worked on audio generation (e.g. WaveNet), AlphaGo, representation learning, and generative modelling with autoregressive models, GANs, and diffusion models, including Lyria, Imagen 2, Imagen 3, and Veo.

During my PhD, I was also quite active on Kaggle. A few write-ups about my experience with Kaggle competitions are available on my blog. I also wrote a paper about my winning solution to an astronomy competition.

I was one of the lead developers of the Theano-based neural network library Lasagne.

Google Scholar.

Lectures



Caglar Gulcehre

Topics

Machine Learning, Deep Learning, Reinforcement Learning, Cognitive Science, Artificial Intelligence.

Biography

Professor and Lead of CLAIRE lab @ EPFL

Research Consultant at Google DeepMind

Ex: Staff Research Scientist @ Google DeepMind

I am currently a professor at EPFL, where I lead the CLAIRE research lab. Previously, I was a staff research scientist at Google DeepMind, working at the intersection of reinforcement learning, foundation models, novel architectures, safety and alignment, and natural language understanding. I led or co-led several projects during my time at DeepMind, ranging from next-generation sequence-modeling architectures to alignment, safety, and offline RL.

I am interested in building agents that can learn from a feedback signal (often weak, sparse, and noisy in the real world) while utilizing unlabeled data available in the environment. I am interested in improving our understanding of existing algorithms and developing new ones to enable real-world applications with positive social impact. I am particularly fascinated by scientific applications of machine learning algorithms. I enjoy working in multi- and cross-disciplinary teams, and am often inspired by neuroscience, biology, and cognitive science when working on algorithmic solutions.

I finished my Ph.D. under the supervision of Yoshua Bengio at MILA.

I defended my thesis, “Learning and time: on using memory and curricula for language understanding,” in 2018, with Christopher Manning as my external examiner. Currently, the research topics I am working on include, but are not limited to, reinforcement learning, offline RL, large-scale deep architectures (or foundation models, as they are called these days), and representation learning (including self-supervised learning, new architectures, causal representations, etc.). I have served as an area chair and reviewer for major machine learning conferences such as ICML, NeurIPS, and ICLR, and for journals such as Nature and JMLR. I have published at numerous influential conferences and journals, including Nature, JMLR, NeurIPS, ICML, ICLR, ACL, and EMNLP. My work received the best paper award at the Nonconvex Optimization workshop at NeurIPS and an honorable mention for best paper at ICML 2019. I have co-organized the Science and Engineering of Deep Learning workshops and three other workshops at NeurIPS, ICML, and ICLR.

Google Scholar.

Lectures



Tatsu Hashimoto
Stanford Institute for Human-Centered Artificial Intelligence (HAI)

Topics

LLMs, NLP, Machine Learning.

Biography

Assistant Professor in the Computer Science Department at Stanford University.

My research uses tools from statistics to make machine learning systems more robust and trustworthy — especially in complex systems such as large language models. The goal of my research is to use robustness and worst-case performance as a lens to understand and make progress on several fundamental challenges in machine learning and natural language processing. A few topics of recent interest are:

Long-tail behavior: How can we ensure that a machine learning system won’t fail catastrophically in the wild under changing conditions?
Understanding: A system which understands how to answer questions or generate text should also do so robustly out-of-domain.
Fairness: Machine learning systems which rely on unreliable correlations can result in spurious and harmful predictions.

Previously, I was a post-doc at Stanford working with John C. Duchi and Percy Liang on tradeoffs between the average and worst-case performance of machine learning models. Before my post-doc, I was a graduate student at MIT co-advised by Tommi Jaakkola and David Gifford, and an undergraduate student at Harvard in statistics and math advised by Edoardo Airoldi.

Google Scholar.

Lectures



Thomas Hofmann

Topics

Natural Language Understanding, Text Understanding, Machine Learning, Deep Learning.

Biography

Full Professor, Department of Computer Science, ETH Zürich

Deputy Head of the Department of Computer Science

Head of the Institute for Machine Learning

1997 Ph.D. Computer Science, University of Bonn
1997-1999 Postdoctoral Fellow, MIT and UC Berkeley
1999-2004 Assistant/Associate Professor of Computer Science, Brown University
2001-2015 Co-founder and Chief Scientist, Recommind.com (now OpenText)
2004-2006 Professor of Computer Science, Technical University of Darmstadt
2004-2005 Director, Fraunhofer IPSI
2006-2013 Director of Engineering and Co-Site Lead, Google Zurich
2014-today Professor of Data Analytics, Department of Computer Science, ETH Zürich
2014-today Co-founder and CTO, 1plusX
2015-today Co-director, Max Planck-ETH Center for Learning Systems

Google Scholar.

Lectures



Subbarao Kambhampati

Topics

Artificial Intelligence, Automated planning, LLM Reasoning, Human-AI Interaction.

Biography

Subbarao Kambhampati is a professor of computer science at Arizona State University. Kambhampati studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery, and was an NSF Young Investigator. He was president of the Association for the Advancement of Artificial Intelligence, a trustee of the International Joint Conference on Artificial Intelligence, and a founding board member of the Partnership on AI. Kambhampati’s research, as well as his views on the progress and societal impacts of AI, has been featured in multiple national and international media outlets. He writes a column on the societal and policy implications of advances in Artificial Intelligence for The Hill.

Former President, AAAI; Fellow, AAAI, AAAS, ACM.

Google Scholar.

Lectures



Joel Z. Leibo
Google DeepMind, London, UK

Topics

Cooperation in AI & Neuroscience, Multi-Agent Reinforcement Learning, Machine Learning.

Biography

Joel is a senior staff research scientist at Google DeepMind and visiting professor at King’s College London. He obtained his PhD from MIT where he studied computational neuroscience and machine learning with Tomaso Poggio. Joel is interested in reverse engineering human biological and cultural evolution to inform the development of artificial intelligence that is simultaneously human-like and human-compatible. In particular, Joel believes that theories of cooperation from fields like cultural evolution and institutional economics can be fruitfully applied to inform the development of ethical and effective artificial intelligence technology.

 

Google Scholar.

Lectures



Lei Li

Topics

Machine Learning, Natural Language Processing, Machine Translation, LLM, AI Drug Discovery

Biography

Lei Li is an assistant professor in the Language Technologies Institute, School of Computer Science, at Carnegie Mellon University. His research interests lie in natural language processing, machine learning, and drug discovery. He received his B.S. from Shanghai Jiao Tong University and his Ph.D. from Carnegie Mellon University. His dissertation work on fast algorithms for mining co-evolving time series was awarded the ACM KDD best dissertation award (runner-up). His work on the AI writer Xiaomingbot received a second-class award of the Wu Wen-tsün AI Prize in 2017. He is a recipient of the ACL 2021 best paper award and the CCF Young Elite award in 2019, and was a CCF distinguished speaker in 2017. His team won first place in five language translation directions and the best result in the corpus-filtering challenge at WMT 2020. Previously, he worked as a post-doctoral researcher in the EECS department at UC Berkeley, as a principal researcher at Baidu’s Institute of Deep Learning in Silicon Valley, and at ByteDance as the founding director of its AI Lab. He has served as an Associate Editor of TPAMI, and as an organizer and area chair/senior PC member for multiple conferences, including ACL, EMNLP, ICML, ICLR, NeurIPS, KDD, AAAI, IJCAI, WSDM, and CIKM. He launched ByteDance’s machine translation system (VolcTrans) and the Xiaomingbot automatic writing system, and many of his algorithms have been deployed in production (Toutiao, Douyin, TikTok, Xigua, Feishu/Lark), serving over a billion users. He has delivered five tutorials, at ACL 2021, EMNLP 2019, NLPCC 2019, NLPCC 2016, and KDD 2010, and was a lecturer at the 2014 Probabilistic Programming for Advancing Machine Learning summer school in Portland, USA.

Google Scholar.

Lectures



Panos Pardalos

Topics

Data Science, Global Optimization, Mathematical Modeling, Financial Applications, AI

Biography

Panos Pardalos was born in Drosato (Mezilo), Argitheas, in 1954 and graduated from Athens University (Department of Mathematics). He received his PhD in Computer and Information Sciences from the University of Minnesota. He is a Distinguished Emeritus Professor in the Department of Industrial and Systems Engineering at the University of Florida, and an affiliated faculty member of the Biomedical Engineering and Computer & Information Science & Engineering departments.

Panos Pardalos is a world-renowned leader in Global Optimization, Mathematical Modeling, Energy Systems, Financial Applications, and Data Science. He is a Fellow of AAAS, AAIA, AIMBE, EUROPT, and INFORMS, and was awarded the 2013 Constantin Caratheodory Prize of the International Society of Global Optimization. In addition, he was awarded the 2013 EURO Gold Medal, bestowed by the Association of European Operational Research Societies. This medal is the preeminent European award given to Operations Research (OR) professionals for “scientific contributions that stand the test of time.”

Panos Pardalos was awarded a prestigious Humboldt Research Award (2018-2019). The Humboldt Research Award is granted in recognition of a researcher’s achievements to date: fundamental discoveries, new theories, and insights that have had a significant impact on their discipline.

Panos Pardalos is also a member of several Academies of Sciences, and he holds several honorary PhD degrees and affiliations. He is the Founding Editor of Optimization Letters and Energy Systems, and Co-Founder of the International Journal of Global Optimization, Computational Management Science, and Springer Nature Operations Research Forum. He has published over 600 journal papers and edited or authored over 200 books. He is one of the most cited authors in his field and has graduated 71 PhD students so far. Details can be found at www.ise.ufl.edu/pardalos

Panos Pardalos has lectured and given invited keynote addresses worldwide, in countries including Austria, Australia, Azerbaijan, Belgium, Brazil, Canada, Chile, China, Czech Republic, Denmark, Egypt, England, France, Finland, Germany, Greece, Holland, Hong Kong, Hungary, Iceland, Ireland, Italy, Japan, Lithuania, Mexico, Mongolia, Montenegro, New Zealand, Norway, Peru, Portugal, Russia, South Korea, Singapore, Serbia, South Africa, Spain, Sweden, Switzerland, Taiwan, Turkey, Ukraine, United Arab Emirates, and the USA.

https://scholar.google.com/citations?user=4e_KEdUAAAAJ&hl=en

Lectures



Topics

Graph ML, Graph Neural Networks, Machine Learning, Data Mining.

Biography

I develop techniques for learning expressive representations of social relationships and natural language with neural networks. These scalable algorithms are useful for prediction tasks (classification/regression), pattern discovery, and anomaly detection in large networked data sets. I have 30+ peer-reviewed papers, and my work has been featured at the leading conferences in machine learning (NeurIPS, ICML, ICLR), data mining (KDD), and information retrieval (WWW).

Google Scholar.

Lectures



Raniero Romagnoli
Almawave Spa, Italy

Topics

LLMs, Foundation Models, AI, NLP.

Biography

Raniero Romagnoli is CTO of Almawave, VP of PerVoice, and CEO of OBDA Systems. He is an expert in Artificial Intelligence and Natural Language Processing in both the corporate and academic worlds. He leads the company’s technological strategy, managing its research, development, and innovation teams. He actively participates in numerous national and international AI initiatives, collaborating with research centers and academies, and teaches advanced courses in Data Science, Machine Learning, and AI. He is also a co-author of numerous scientific articles and international patents.

Lectures



Liubov Tupikina

Topics

Geometric Deep Learning, Embeddings, Hypergraphs, Graph Theory, Stochastic Processes.

Biography

Liubov is a researcher with a background in mathematics and theoretical physics. She received her PhD from Humboldt University of Berlin and has worked at universities in France, Germany, the Netherlands, Spain, and Uruguay.

At Bell Labs, Liubov focuses her work on several topics: robustness of networks, processes on random networks, time-series analysis, and embeddings. She is also interested in studying how collective intelligence can help us build tools for understanding responsible AI. Using stochastic processes and random networks, she also works on survivability-process theory applied to the study of time-series processes. She works on the development of embeddings applied to various datasets, from innovation in science to the evolution of user data.

Google Scholar.

Lectures