Training Neural Networks to Predict Graphs: Who's Afraid of the Big Bad NP-Hardness?

Date:

Graph learning is a rapidly growing area of machine learning, with most advances focused on problems where graphs are the input. Yet many important tasks involve predicting graphs as the output. This setting poses a fundamental challenge: evaluating predictions requires solving the graph matching problem, an NP-hard task. In this talk, I will: (a) review existing strategies to circumvent this obstacle, such as graph canonization and generative approaches; (b) demonstrate that, in a data-driven setting, the hardness barrier can effectively disappear, since for a given graph distribution it is possible to learn a scalable solver; and (c) showcase how this strategy can be used to train a graph-level autoencoder, i.e., a model that learns to encode entire graphs into a vector space and decode them back.
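
To make the evaluation obstacle concrete, here is a minimal illustrative sketch (not taken from the talk) of a permutation-invariant reconstruction loss: comparing a predicted adjacency matrix to a target one requires searching over node orderings, which grows factorially with graph size. The loss form ||A - P A_pred P^T||_F^2 and the brute-force search are assumptions chosen for illustration.

import itertools
import numpy as np

def matching_loss(A_true: np.ndarray, A_pred: np.ndarray) -> float:
    """Exact permutation-invariant reconstruction loss by brute force.

    Minimizes ||A_true - P A_pred P^T||_F^2 over all permutation matrices P.
    Feasible only for very small graphs; the general matching problem is NP-hard.
    """
    n = A_true.shape[0]
    best = np.inf
    for perm in itertools.permutations(range(n)):
        P = np.eye(n)[list(perm)]          # permutation matrix for this ordering
        diff = A_true - P @ A_pred @ P.T   # realign the predicted nodes
        best = min(best, float(np.sum(diff ** 2)))
    return best

# Tiny example: a 4-node path graph vs. a relabelled copy of itself.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
relabel = [2, 0, 3, 1]
A_shuffled = A[np.ix_(relabel, relabel)]
print(matching_loss(A, A_shuffled))  # 0.0: same graph up to node ordering

Replacing this exponential search with a learned, distribution-specific solver is one way to read point (b) of the abstract: the matcher only needs to work well on graphs from the training distribution, not in the worst case.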