Context

Graphs serve as a powerful representational framework for machine learning, and their integration into learning pipelines has substantially advanced the field. Indeed, extensive studies have pushed graph machine learning (GML) forward in both theory and applications. Recently, new perspectives have emerged in the machine learning community, including algebraic-topological analyses, foundation models, generative models, and large models in applications. Leveraging these ideas for core graph machine learning holds great promise, offering the dual benefit of deeper theoretical insight and more powerful, application-aligned algorithms and models. The aim of this workshop is to explore and connect these new perspectives on GML, and to identify overarching challenges and tools in terms of theory, methodology, and modeling. In particular, the following themes will be important and demand new perspectives:

Towards Unified Theoretical Foundations.

Rapid empirical progress has been made from perspectives including algebraic topology, differential geometry, statistical physics, and causal inference, but these lines of work rarely integrate their lenses into a unified understanding. Extending such analyses to modern regimes, such as transfer, multimodal, self-supervised, and generative GML, could lay the groundwork for a unifying framework that is both provably robust and interpretable at scale. Our invited keynote speakers Prof. Yusu Wang and Prof. Nina Miolane, and panelist Prof. Soledad Villar, have extensive experience addressing this challenge.

Domain-Knowledge Alignment.

Foundation-scale graph models promise unprecedented transferability, yet real-world impact still hinges on domain-specific principles. Meeting both goals requires identifying the classes of domain priors most worth encoding, formalizing general mechanisms for injecting them, and articulating guidelines for navigating the tension between broad generalization and stringent domain fidelity.

Scalability & Robustness at Real‑World Scale.

Training on billion-edge, dynamically evolving graphs magnifies issues of stability, transferability, and responsible deployment. Combining new perspectives, including emerging analysis tools and techniques, can yield principled frameworks for theoretical analysis in such regimes.

Goals

The primary goal of this workshop is to explore and integrate new perspectives to propel GML, aiming to enhance foundational understanding and practical effectiveness within the machine learning community and across interdisciplinary research fields. Concretely, we aim to:

Synthesize New Perspectives and Theory.

Establish a common language and transferable tools by leveraging continuous-limit analysis, symmetry principles, topology, causality, and quantum viewpoints to gain insights into advanced GML models.

Inspire Application‑Aware Methods.

Highlight case studies where domain priors (physical laws, conservation constraints, causal graphs) substantially improve accuracy, robustness, or interpretability, providing guidance for exploring broader application domains.

Platform for Early-Career Scientists.

Empower new researchers by creating an environment that enhances their visibility and collaboration networks. When allocating questions during talks and poster sessions, we will make sure that the voices of young scientists are heard. The benefits of this approach will be measured by the impact these young scientists have over the medium term.

Foster an Interdisciplinary Community.

Create mentorship and communication opportunities for theorists, practitioners, and domain scientists, thereby broadening the intellectual domains of GML. By bringing together key players at different stages of their careers, we aim to facilitate discussions on combining innovative or fundamental techniques with graph structures, and on evaluating the intuitions, strengths, and inherent limitations of these combinations.

Anticipated outcomes include: (i) a prioritized agenda of open research questions; (ii) cross-disciplinary collaborations seeded during the event; (iii) a special focus on open and diverse participation; and (iv) concrete pathways toward robust, transparent, and domain-aware GML systems.

Call for papers

This workshop invites submissions for talks and poster sessions on a wide range of topics, perspectives, and ideas, including but not limited to:

  • Symmetry, Equivariance, and Group-Theoretic Graph Models
  • Continuous and Differential Geometric Models
  • Topological Machine Learning
  • Graph Diffusion Models and Graph Generative Models
  • Graph Foundation Models and Graph-Augmented LLMs
  • Continuous‑Limit and Infinite‑Width Analysis of Graph Machine Learning
  • Transferability and Generalization Properties of Graph Models
  • Graphs for Science and Graph-Based Simulations
  • Novel Graph Machine Learning Architectures
  • Causality and Directed Acyclic Graph Learning
  • Self-supervised and Semi-supervised Graph Machine Learning
  • Quantum Graph Machine Learning

Dates and Deadlines

Event Date
Submission deadline TBD
Author notification TBD
Camera-ready deadline TBD
Workshop TBD

Submission site: TBD

Program

Time Session
08:45–09:00 Opening Remarks: Goals, code of conduct, logistics
09:00–09:45 Keynote 1
09:45–10:15 Contributed Talks: 10 min talk + Q&A
10:15–10:45 Coffee and Posters A
10:45–11:30 Keynote 2
11:30–12:00 Panel Discussion: “What’s next for GML?”
12:00–13:30 Lunch
13:30–14:15 Keynote 3
14:15–15:00 Contributed Talks
15:00–15:30 Coffee and Posters B
15:30–16:15 Keynote 4
16:15–17:00 Keynote 5
17:00–17:45 Mentorship Speed‑Networking
17:45–18:00 Closing Remarks: Awards, next steps, resources

Speakers

Panelists

Organizers

Area Chairs

Name Position
Michael A. Perlmutter Assistant Professor at Boise State University
Venkata S.S. Gandikota Assistant Professor at Syracuse University
Gonzalo Mateos Professor at University of Rochester
Ellen Vitercik Assistant Professor at Stanford University
Kaixiong Zhou Assistant Professor at North Carolina State University
Jundong Li Assistant Professor at University of Virginia
Michael Galkin Senior Research Scientist at Google Research
Jhony Giraldo Assistant Professor at Institut Polytechnique de Paris
Alex Tong Assistant Professor at Duke University
Yuning You Postdoc at California Institute of Technology