Graphs serve as a powerful representational framework for machine learning, and their integration has substantially advanced the field. Indeed, extensive studies have pushed forward graph machine learning (GML) in both theory and applications. Recently, new perspectives have been emerging in the machine learning community, including algebraic–topological analyses, foundation models, generative models, and large models in applications. Leveraging these ideas for core graph machine learning holds great promise, offering the dual benefit of deeper theoretical insight and more powerful, application-aligned algorithms and models. The aim of this workshop is to explore and connect these new perspectives on GML, and to identify overarching challenges and tools – in terms of theory, methodology, and modeling.
Call for papers
This workshop invites submissions for talks and poster sessions on a wide range of topics, perspectives, and ideas, including but not limited to:
- Symmetry, Equivariance, and Group-Theoretic Graph Models
- Continuous and Differential Geometric Models
- Topological Machine Learning
- Graph Diffusion Models and Graph Generative Models
- Graph Foundation Models and Graph-Augmented LLMs
- Continuous-Limit and Infinite-Width Analysis of Graph Machine Learning
- Transferability and Generalization Properties of Graph Models
- Graphs for Science and Graph-Based Simulations
- Novel Graph Machine Learning Architectures
- Causality and Directed Acyclic Graph Learning
- Self-supervised and Semi-supervised Graph Machine Learning
- Quantum Graph Machine Learning
The main text of a submitted paper is limited to 6 content pages, including all figures and tables. We encourage submissions of between 4 and 6 pages. Additional pages containing references, the checklist, and optional technical appendices do not count as content pages. All submissions are double-blind and must use the NeurIPS 2025 author kit available here. The review process will be facilitated via OpenReview. Please make sure every author has an OpenReview account ahead of submission.
As per the NeurIPS workshop guidelines, this workshop is not a venue for work that has been previously published in other conferences on machine learning or related fields.
Accepted papers will be accessible via this website ahead of the workshop. Our workshop is non-archival and there are no formal proceedings.
The submission portal can be found here.
Dates and Deadlines
| Date | Event |
|---|---|
| December 7, 2025 | Workshop Date |
Program
The workshop will take place in Exhibit Hall F.
| Time | Session | Details |
|---|---|---|
| 08:00–08:05 | Opening Remarks | Goals, code of conduct, logistics |
| 08:05–08:50 | Keynote | Yusu Wang – Learning generalizable algorithmic procedures via GNNs |
| 08:50–09:00 | Oral | Beyond Sparse Benchmarks: Evaluating GNNs with Realistic Missing Features |
| 09:00–09:45 | Posters | Posters numbered below #56 |
| 09:45–09:55 | Oral | Gromov-Wasserstein Graph Coarsening |
| 09:55–10:05 | Oral | Robust Tangent Space Estimation via Laplacian Eigenvector Gradient Orthogonalization |
| 10:10–10:55 | Keynote | Jure Leskovec – Relational Foundation Models and the End of Task-Specific GNNs |
| 10:55–11:05 | Oral | Causal Structure Learning in Hawkes Processes with Complex Latent Confounder Networks |
| 11:05–11:15 | Oral | Of Graphs and Tables: Zero-Shot Node Classification with Tabular Foundation Models |
| 11:15–12:00 | Keynote | Michael Galkin |
| 12:00–13:30 | Lunch | Provided boxed lunch |
| 13:30–14:15 | Keynote | Nina Miolane – Topological Deep Learning |
| 14:15–15:00 | Keynote | Mathias Niepert – Learning (Approximately) Equivariant GNNs for Simulations of Physical Systems |
| 15:00–15:45 | Posters | Posters numbered above #56 |
| 15:45–15:55 | Oral | G1: Teaching LLMs to Reason on Graphs with Reinforcement Learning |
| 16:00–16:50 | Panel Discussion | |
| 16:50–17:00 | Closing Remarks | Awards, next steps, resources |
Speakers
Jure Leskovec
Professor of Computer Science at Stanford University.
Relational Foundation Models and the End of Task-Specific GNNs
The AI stack is currently unbalanced: we have mastered in-context learning for sequences (text) and grids (images), yet the 'dark matter' of enterprise intelligence—dynamic, heterogeneous structured data—remains stuck in the era of brittle, supervised pipelines. In this talk, I will argue that the future of Graph Machine Learning lies not in better architectures for static benchmarks, but in Relational Foundation Models (RFMs) that serve as universal structural reasoners. I will demonstrate that by abandoning fixed schemas in favor of schema-invariant message passing, RFMs achieve what was previously thought impossible: zero-shot generalization across disjoint graph topologies. We will explore how RFMs utilize structural in-context learning to treat predictive tasks not as model training problems, but as subgraph pattern matching queries within a latent relational space. By decoupling representation learning from specific edge types, RFMs do for structured data what LLMs did for language—transforming the graph from a static asset into a queryable, generative engine. The question is no longer if graph foundation models are integral to the AI stack, but how quickly we can transition from crafting individual GNNs to prompting universal relational substrates.
Yusu Wang
Professor in the Halıcıoğlu Data Science Institute at the University of California, San Diego.
Learning generalizable algorithmic procedures via GNNs
A central challenge in modern machine learning is learning generalizable procedures that remain effective on unseen, potentially out-of-distribution (OOD) data. Such generalization depends on a complex interplay among model architectures, task structures, data assumptions, and training methodologies. In this talk, I will focus on the interaction between model architecture and task structure in the context of graph learning. We are particularly interested in two questions: Do different graph neural networks learn fundamentally different algorithmic procedures? And can OOD generalization be achieved with only finite samples? To explore these questions, I will present our initial studies using two concrete settings, graph partitioning/clustering and graph shortest-path computation, as testbeds for understanding how graph models internalize and apply algorithmic structure. This talk is based on joint work with M. Black, S. Chen, S. Dasgupta, R. Nerem.
Michael Galkin
Senior Research Scientist at Google Research.
Nina Miolane
Assistant Professor of Electrical and Computer Engineering at UC Santa Barbara.
Topological Deep Learning: Unlocking the Higher-Order Structure of Relational Systems
The world is full of complex systems characterized by relations between their components: from social interactions between individuals to electrostatic interactions between atoms. Traditional graph methods capture only pairwise interactions, such as a covalent bond between two atoms. Topological Deep Learning (TDL) goes further by modeling higher-order relations, such as a benzene ring formed by six carbon atoms. TDL enables us to extract knowledge from these richer structures: for instance, assessing whether a protein is a promising drug target. This talk will introduce the core principles of TDL, provide a comprehensive review of its rapidly growing literature, describe open-source and accessible implementations, and raise open theoretical questions for the field. All in all, we will showcase how TDL can effectively capture and reason about real-world complex systems, while highlighting outstanding challenges and opportunities.
Mathias Niepert
Professor (W3) at the University of Stuttgart and a faculty member of the International Max Planck Research School for Intelligent Systems.
Learning (Approximately) Equivariant GNNs for Simulations of Physical Systems
Physical systems can be simulated through first-principles and numerical methods, but these approaches often become computationally prohibitive at scale. Machine-learning models can complement such methods by providing scalable, data-driven surrogates that incorporate physical structure. Equivariant graph neural networks are particularly effective in this setting because they encode rotational and translational symmetries directly into their architectures, and they have become standard components in probabilistic generative models of atomistic systems. Nevertheless, training these generative models, especially when learning invariant distributions with denoising diffusion or flow matching, can be challenging due to the high variance of standard gradient estimators and the rigidity imposed by strict equivariance constraints. We present a set of methods that address these challenges. First, we introduce a Rao-Blackwellised gradient estimator that interprets symmetry-based data augmentation as a Monte-Carlo approximation of the true gradient and replaces multiple stochastic estimates with a single symmetrized estimator. This yields provably lower-variance gradients and enables more stable and efficient optimization of invariant generative models. The same framework also improves the use of data augmentation typically employed when learning invariant distributions, enhancing the statistical efficiency of diffusion-based training. Finally, we provide a formulation for learning approximately equivariant functions by treating equivariance as a constraint that can be relaxed and adapted in a data-driven manner, allowing models to retain symmetry-aware inductive biases while accommodating the variability present in realistic physical systems.
Panelists
Soledad Villar
Assistant Professor at the Department of Applied Mathematics & Statistics, and Mathematical Institute for Data Science at Johns Hopkins University.
Nayat Sanchez-Pi
Executive Director and CEO of the Inria Chile Foundation.
Xiaowen Dong
Associate Professor in the Department of Engineering Science at the University of Oxford.
Tom Palczewski
Principal Researcher, SAP SE.
Accepted Papers
Orals
| # | Title | Authors |
|---|---|---|
| 72 | G1: Teaching LLMs to Reason on Graphs with Reinforcement Learning | Xiaojun Guo, Ang Li, Yifei Wang, Stefanie Jegelka, Yisen Wang |
| 87 | Of Graphs and Tables: Zero-Shot Node Classification with Tabular Foundation Models | Adrian Hayler, Xingyue Huang, Ismail Ilkan Ceylan, Michael M. Bronstein, Ben Finkelshtein |
| 93 | Causal Structure Learning in Hawkes Processes with Complex Latent Confounder Networks | Songyao Jin, Biwei Huang |
| 103 | Robust Tangent Space Estimation via Laplacian Eigenvector Gradient Orthogonalization | Dhruv Kohli, Sawyer Jack Robertson, Gal Mishne, Alex Cloninger |
| 123 | Gromov-Wasserstein Graph Coarsening | Carlos A Taveras, Santiago Segarra, Cesar A Uribe |
| 125 | Beyond Sparse Benchmarks: Evaluating GNNs with Realistic Missing Features | Francesco Ferrini, Veronica Lachi, Antonio Longa, Bruno Lepri, Andrea Passerini, Xin Liu, Manfred Jaeger |
Posters
Organizers
Juan Cervino
Postdoctoral Researcher at the Massachusetts Institute of Technology.
Stefanie Jegelka
Associate Professor (on leave) at MIT EECS, and a Humboldt Professor at TU Munich.
Charilaos Kanatsoulis
Research Associate in the Department of Computer Science at Stanford University.
Alejandro Ribeiro
Professor of Electrical and Systems Engineering at the University of Pennsylvania.
Luana Ruiz
Assistant Professor in the Department of Applied Mathematics and Statistics at Johns Hopkins University.
Zhiyang Wang
Postdoctoral Scholar in the Halıcıoğlu Data Science Institute at UCSD.
Area Chairs
| Name | Institution |
|---|---|
| Michael A. Perlmutter | Assistant Professor at Boise State University |
| Venkata S.S. Gandikota | Assistant Professor at Syracuse University |
| Gonzalo Mateos | Professor at University of Rochester |
| Ellen Vitercik | Assistant Professor at Stanford University |
| Kaixiong Zhou | Assistant Professor at North Carolina State University |
| Jundong Li | Assistant Professor at University of Virginia |
| Michael Galkin | Senior Research Scientist at Google Research |
| Jhony Giraldo | Assistant Professor at Institut Polytechnique de Paris |
| Alex Tong | Assistant Professor at Duke University |
| Yuning You | Postdoc at California Institute of Technology |
| Siheng Chen | Associate Professor at Shanghai Jiao Tong University |
| Melanie Weber | Assistant Professor at Harvard University |
Contact
Do not hesitate to write to us with any questions:
graphneurips2025@googlegroups.com
This website was built with Hugo and the Hugo Scroll template.