
Context

Graphs serve as a powerful representational framework for machine learning, and their integration has substantially advanced the field. Indeed, extensive work has pushed graph machine learning (GML) forward in both theory and applications. Recently, new perspectives have been emerging in the machine learning community, including algebraic and topological analyses, foundation models, generative models, and large models in applications. Leveraging these ideas for core graph machine learning holds substantial promise: deeper theoretical insight, new capabilities, and more powerful, application-aligned algorithms and models. The aim of this workshop is to explore and connect these new perspectives on GML, and to identify overarching challenges and tools in terms of theory, methodology, and modeling.

Call for papers

This workshop invites submissions, presented through talks and poster sessions, on a wide range of topics, perspectives, and ideas, including but not limited to:

  • Symmetry, Equivariance, and Group-Theoretic Graph Models
  • Continuous and Differential Geometric Models
  • Topological Machine Learning
  • Graph Diffusion Models and Graph Generative Models
  • Graph Foundation Models and Graph-Augmented LLMs
  • Continuous‑Limit and Infinite‑Width Analysis of Graph Machine Learning
  • Transferability and Generalization Properties of Graph Models
  • Graphs for Science and Graph-Based Simulations
  • Novel Graph Machine Learning Architectures
  • Causality and Directed Acyclic Graph Learning
  • Self-supervised and Semi-supervised Graph Machine Learning
  • Quantum Graph Machine Learning

The main text of a submitted paper is limited to 6 content pages, including all figures and tables. We encourage submissions of between 4 and 6 pages. Additional pages containing references, the checklist, and optional technical appendices do not count as content pages. All submissions are double-blind and must use the NeurIPS 2025 author kit available here. The review process will be facilitated via OpenReview. Please make sure every author has an OpenReview account ahead of submission.

As per the NeurIPS workshop guidelines, this workshop is not a venue for work that has been previously published in other conferences on machine learning or related fields.

Accepted papers will be accessible via this website ahead of the workshop. Our workshop is non-archival and there are no formal proceedings.

The submission portal can be found here.

Dates and Deadlines

Date Event
September 2nd, 2025, 11:59 pm AoE Submission deadline
September 22nd, 2025, 11:59 pm AoE Accept/reject notification
December 6 or December 7, 2025 Workshop

Program

Time Session
08:45–09:00 Opening Remarks: Goals, code of conduct, logistics
09:00–09:45 Keynote 1
09:45–10:15 Contributed Talks: 10 min talk + Q&A
10:15–10:45 Coffee and Posters A
10:45–11:30 Keynote 2
12:00–13:30 Lunch
13:30–14:15 Keynote 3
14:15–15:00 Contributed Talks
15:00–15:30 Coffee and Posters B
15:30–16:15 Keynote 4
16:15–16:45 Panel Discussion: “What’s next for GML?”
16:45–17:00 Keynote 5
17:00–17:45 Mentorship Speed‑Networking
17:45–18:00 Closing Remarks: Awards, next steps, resources

Speakers

Panelists

Organizers

Area Chairs

Name Position and Institution
Michael A. Perlmutter Assistant Professor at Boise State University
Venkata S.S. Gandikota Assistant Professor at Syracuse University
Gonzalo Mateos Professor at University of Rochester
Ellen Vitercik Assistant Professor at Stanford University
Kaixiong Zhou Assistant Professor at North Carolina State University
Jundong Li Assistant Professor at University of Virginia
Michael Galkin Senior Research Scientist at Google Research
Jhony Giraldo Assistant Professor at Institut Polytechnique de Paris
Alex Tong Assistant Professor at Duke University
Yuning You Postdoc at California Institute of Technology
Siheng Chen Associate Professor at Shanghai Jiao Tong University
Melanie Weber Assistant Professor at Harvard University