Graphs serve as a powerful representational framework for machine learning, and their integration has substantially advanced the field. Extensive studies have pushed graph machine learning (GML) forward in both theory and applications. Recently, new perspectives have been emerging in the machine learning community, including algebraic–topological analyses, foundation models, generative models, and large models in applications. Leveraging these ideas for core graph machine learning holds great promise: deeper theoretical insight, new capabilities, and more powerful, application-aligned algorithms and models. The aim of this workshop is to explore and connect these new perspectives on GML and to identify overarching challenges and tools in theory, methodology, and modeling. In particular, the following themes are important and demand new perspectives:
Towards Unified Theoretical Foundations.
Rapid empirical progress has drawn on perspectives including algebraic topology, differential geometry, statistical physics, and causal inference, yet these lenses are rarely integrated into a unified understanding. Extending such analysis to modern regimes—transfer, multimodal, self-supervised, and generative GML—could lay the groundwork for a unifying framework that is both provably robust and interpretable at scale. Our invited keynote speakers, Prof. Yusu Wang and Prof. Nina Miolane, and panelist Prof. Soledad Villar have extensive experience addressing this challenge.
Domain–Knowledge Alignment.
Foundation‑scale graph models promise unprecedented transferability, yet real‑world impact still hinges on domain‑specific principles. Bridging this gap requires identifying the classes of domain priors most worth encoding, formalizing general mechanisms for injecting them, and articulating guidelines for navigating the tension between broad generalization and stringent domain fidelity.
Scalability & Robustness at Real‑World Scale.
Training on billion‑edge, dynamically evolving graphs magnifies issues of stability, transferability, and responsible deployment. Combining new perspectives, from analysis tools to emerging techniques, can yield principled frameworks for theoretical analysis in such regimes.
Goals
The primary goal of this workshop is to explore and integrate new perspectives to propel GML, aiming to enhance foundational understanding and practical effectiveness within the machine learning community and across interdisciplinary research fields. Concretely, we aim to:
Synthesize New Perspectives and Theory.
Establish a common language and transferable tools by leveraging continuous‑limit analysis, symmetry principles, topology, causality, and quantum viewpoints to gain insight into advanced GML models.
Inspire Application‑Aware Methods.
Highlight case studies where domain priors—physical laws, conservation constraints, causal graphs—substantially improve accuracy, robustness, or interpretability, providing guidance for exploring broader application domains.
Platform for Early-Career Scientists.
Empower new researchers by creating an environment that enhances their visibility and expands their collaboration networks. When allocating questions during talks and poster sessions, we will make sure that young scientists' voices are heard. The benefits of this approach will be measured by the impact of these young scientists over the medium term.
Foster an Interdisciplinary Community.
Create mentorship and communication opportunities for theorists, practitioners, and domain scientists, thereby broadening the intellectual reach of GML. By bringing together key players at different stages of their careers, we aim to facilitate discussions on combining innovative and fundamental techniques with graph structures, evaluating their intuitions, strengths, and inherent limitations.
Anticipated outcomes include: (i) a prioritized agenda of open research questions, (ii) cross-disciplinary collaborations seeded during the event, (iii) a special focus on open and diverse participation, and (iv) concrete pathways toward robust, transparent, and domain-aware GML systems.