Graph-based models

Generative AI is a popular research area, with diffusion models and normalizing flows being applied to diverse tasks such as language, image generation, source code, gestures, and music. However, current methods often generate media without a symbolic representation (for example, raster images instead of vector-based ones), which limits user flexibility and makes it harder for systems to preserve semantics when editing. Generative systems such as Midjourney, for instance, may misinterpret an image when asked to create variations of it, leading to distorted results. A two-step generation process, which first produces a graph-based representation and then derives the surface form from it, could address this. Key researchers in this area include Frank Drewes, Anastasia Varava, Henrik Björklund, and Ruibo Tu.
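To make the two-step idea concrete, here is a minimal sketch in Python. It assumes a toy symbolic representation (a scene graph of named shapes, a hypothetical `SceneGraph` class not taken from any particular paper) and renders it to a surface form, in this case an SVG string. Because edits are applied to the graph rather than to pixels, the semantics of each element are preserved across variations.

```python
# Illustrative sketch only: the first step (generating the graph) is stubbed
# with a fixed example; a real system would sample the graph from a model.
from dataclasses import dataclass, field


@dataclass
class Node:
    id: str       # stable symbolic identity of the element
    shape: str    # SVG primitive, e.g. "circle" or "rect"
    attrs: dict   # geometric and style attributes


@dataclass
class SceneGraph:
    nodes: list = field(default_factory=list)


def render_svg(graph: SceneGraph) -> str:
    """Second step: deterministically turn the symbolic graph into SVG."""
    parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">']
    for n in graph.nodes:
        attr_str = " ".join(f'{k}="{v}"' for k, v in n.attrs.items())
        parts.append(f'<{n.shape} id="{n.id}" {attr_str}/>')
    parts.append("</svg>")
    return "\n".join(parts)


# First step (stubbed): suppose a generator produced this graph.
g = SceneGraph(nodes=[
    Node("face", "circle", {"cx": 50, "cy": 50, "r": 40, "fill": "yellow"}),
    Node("eye", "circle", {"cx": 35, "cy": 40, "r": 5, "fill": "black"}),
])
svg = render_svg(g)

# Editing targets a symbolic id, not raster pixels, so the rest of the
# image is untouched and its meaning cannot be misinterpreted.
g.nodes[1].attrs["fill"] = "blue"
svg_variant = render_svg(g)
```

The same edit applied to a raster output would require the system to re-identify "the eye" from pixels, which is exactly where tools like Midjourney can go wrong.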

Read more about our projects