Improving Spatial Resolution in Functional Ultrasound Through ULM-Guided Generative Learning

Mar 10, 2026

Hana Sebia, Thomas Guyet, Hugues Berry, Seunghoi Kim, Daniel C. Alexander, Benjamin Vidal
Abstract
Functional ultrasound (fUS) provides high-sensitivity hemodynamic imaging at the mesoscopic scale and is increasingly used for functional brain studies, but its spatial resolution remains limited. Ultrasound Localization Microscopy (ULM), acquired with the same probe, overcomes this limitation by localizing individual microbubbles to achieve microvascular super-resolution. However, ULM requires long acquisitions, contrast-agent injections, and heavy post-processing, limiting its applicability in routine or dynamic functional imaging. In this work, we investigate whether generative AI models can enhance the spatial resolution of fUS to enable high-resolution functional imaging without the constraints of ULM. This task is particularly challenging due to the significant resolution gap between the modalities and the extremely limited number of paired acquisitions (35 fUS/ULM image pairs). Rather than aiming to fully reproduce ULM resolution, our objective is moderate super-resolution. We evaluate three families of generative models: a conditional GAN (Pix2Pix), a multimodal hierarchical variational autoencoder (MHVAE), and a conditional diffusion model specifically adapted to data scarcity through patch-based training, positional embeddings, and an edge-preservation loss. Models are trained on grayscale paired data and assessed with quantitative metrics (MSE, PSNR, SSIM, LPIPS) as well as expert visual evaluation. The results highlight clear trade-offs between approaches. MHVAE achieves the lowest pixel-wise error but produces overly smooth images lacking microvascular detail. Pix2Pix reconstructs the main vascular structure but misses finer features. The diffusion model provides the best perceptual and structural fidelity, generating sharper and more coherent vasculature, and is judged by experts to be the most anatomically plausible, although it may hallucinate structures and struggles with the smallest vessels.
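Of the reported metrics, MSE and PSNR have simple closed forms. As a point of reference, a minimal pure-Python sketch for grayscale images with intensities in [0, 1] (function names are illustrative, not taken from the paper's code):

```python
import math

def mse(ref, est):
    """Mean squared error between two equally sized grayscale images
    (nested lists of floats)."""
    h, w = len(ref), len(ref[0])
    return sum((ref[i][j] - est[i][j]) ** 2
               for i in range(h) for j in range(w)) / (h * w)

def psnr(ref, est, max_val=1.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    err = mse(ref, est)
    return float("inf") if err == 0 else 10.0 * math.log10(max_val ** 2 / err)
```

For example, a uniform error of 0.5 on a [0, 1] image gives MSE 0.25 and PSNR ≈ 6.02 dB; SSIM and LPIPS require windowed statistics and a pretrained network, respectively, and are typically computed with library implementations.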
Beyond model comparison, this study highlights key methodological insights for learning under extreme data scarcity. Patch-based training and edge-aware regularization are critical for diffusion models, whereas full-image training fails. Additionally, standard data-augmentation strategies can degrade anatomical consistency due to spatial misalignment. Overall, our findings demonstrate that moderate super-resolution of fUS is achievable with generative models, with diffusion-based approaches emerging as the most promising direction under current data limitations.
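The abstract does not give the exact form of the edge-preservation loss; a common choice for such a term is an L1 penalty between Sobel gradient magnitudes of the prediction and the target. A minimal pure-Python sketch under that assumption (all names are illustrative; a real training loop would use a tensor library):

```python
import math

# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def _conv_at(img, ker, i, j):
    """3x3 convolution of `ker` with `img` centred on pixel (i, j)."""
    return sum(ker[di][dj] * img[i + di - 1][j + dj - 1]
               for di in range(3) for dj in range(3))

def edge_map(img):
    """Sobel gradient-magnitude map (borders left at zero)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = _conv_at(img, SOBEL_X, i, j)
            gy = _conv_at(img, SOBEL_Y, i, j)
            out[i][j] = math.hypot(gx, gy)
    return out

def edge_loss(pred, target):
    """Mean L1 distance between the two edge maps over interior pixels."""
    ep, et = edge_map(pred), edge_map(target)
    h, w = len(pred), len(pred[0])
    interior = (h - 2) * (w - 2)
    return sum(abs(ep[i][j] - et[i][j])
               for i in range(1, h - 1) for j in range(1, w - 1)) / interior
```

The loss is zero for identical images and grows when the prediction blurs or sharpens vessel boundaries relative to the target, which is the behaviour an edge-aware regularizer is meant to penalize.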
Publication
Intelligence Artificielle en Imagerie Biomédicale (IABM), Lyon, France