MGPVis invited talk: "Accelerating Posterior Sampling with Generative Priors for Blind Inverse Problems"
Closing talk of the course Modelos Generativos para Visión Artificial
On Tuesday, May 12, 2026, at 8:00 a.m., the closing talk of the course Modelos Generativos para Visión Artificial will take place, featuring Andrés Almansa, researcher at CNRS / MAP5, Université Paris Cité.
Almansa is a leading researcher in computational imaging, inverse problems, and Bayesian methods for image reconstruction and restoration.
The talk is titled:
Accelerating Posterior Sampling with Generative Priors for Blind Inverse Problems
The talk will be held over Zoom.
Connection details
Topic: MGPVis invited talk: Andrés Almansa
Date and time: Tuesday, May 12, 2026, 8:00 a.m.
Zoom room:
https://salavirtual-udelar.zoom.us/j/2165307614?pwd=c1EvSnlUUFg0TDlKUDVRd3lKOG01Zz09&omn=86100765427
Meeting ID: 216 530 7614
Passcode: pwd-AB2021
Abstract
Posterior sampling is a key element when solving blind inverse problems via marginal likelihood maximization. In this talk, we review the evolution of generative image priors and the conditioning mechanisms, also known as zero-shot plug-and-play (PnP) frameworks, that are required to turn such pretrained generic samplers into posterior samplers for a specific inverse problem. As generative models require fewer and fewer steps (neural function evaluations, NFEs) to generate an independent sample, the design of effective conditioning mechanisms has become more sophisticated.
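To make the connection between marginal likelihood maximization and posterior sampling concrete, here is a standard formulation (the notation is illustrative, not taken from the announcement): for a blind inverse problem with unknown forward-model parameters, the parameters are estimated by maximizing the marginal likelihood, whose gradient is a posterior expectation by Fisher's identity:

```latex
% Blind inverse problem: y = A_\theta x + n, with a generative prior p(x).
\hat{\theta} \;=\; \arg\max_{\theta}\, p(y \mid \theta)
  \;=\; \arg\max_{\theta} \int p(y \mid x, \theta)\, p(x)\, \mathrm{d}x,
\qquad
\nabla_{\theta} \log p(y \mid \theta)
  \;=\; \mathbb{E}_{x \sim p(x \mid y, \theta)}
  \!\left[ \nabla_{\theta} \log p(y \mid x, \theta) \right].
```

The gradient is an expectation under the posterior p(x | y, θ), so each optimization step requires posterior samples; this is why fast posterior samplers are the key ingredient.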
We then focus on Latent Consistency Models (LCMs), which distill latent-space text-to-image diffusion models (LDMs) into fast prior samplers. We leverage a new conditioning mechanism to propose the LAtent consisTency INverse sOlver (LATINO), the first zero-shot PnP framework to solve inverse problems with priors encoded by LCMs. The conditioning mechanism avoids automatic differentiation and reaches state-of-the-art quality in as few as 8 NFEs.
As a result, LATINO delivers accurate solutions and is significantly more memory- and compute-efficient than previous approaches. We then embed LATINO within an empirical Bayesian framework that automatically calibrates the text prompt from the observed measurements by marginal maximum likelihood estimation. Extensive experiments show that prompt self-calibration improves estimation, allowing LATINO with Prompt Optimization to set new state-of-the-art results in image reconstruction quality and computational efficiency.
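As a rough intuition for the zero-shot PnP idea mentioned above, the toy sketch below alternates a generic prior step with a closed-form data-consistency proximal step that needs no automatic differentiation. Everything here is a stand-in: the "denoiser" is plain soft-thresholding, not an LCM, and the loop is a generic plug-and-play iteration, not LATINO's actual conditioning mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: y = A x* + noise, with a compressive operator A.
n, m = 32, 24
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:4] = 1.0                      # simple sparse ground truth
sigma = 0.05
y = A @ x_true + sigma * rng.normal(size=m)

def toy_denoiser(x, lam=0.1):
    """Stand-in prior step: soft-thresholding (sparsity prior), NOT an LCM."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_data(x, y, A, sigma2, rho):
    """Closed-form proximal data-consistency step (no autodiff required):
       argmin_z ||y - A z||^2 / (2 sigma2) + ||z - x||^2 / (2 rho)."""
    H = A.T @ A / sigma2 + np.eye(A.shape[1]) / rho
    b = A.T @ y / sigma2 + x / rho
    return np.linalg.solve(H, b)

# A handful of plug-and-play iterations, mimicking a low-NFE budget.
x = np.zeros(n)
for _ in range(8):
    x = prox_data(toy_denoiser(x), y, A, sigma**2, rho=1.0)

print("reconstruction error:", np.linalg.norm(x - x_true))
```

The design point the sketch illustrates is that when the data-fidelity step has a closed form, conditioning a pretrained prior requires no gradients through the prior network, which is part of what makes low-NFE zero-shot solvers efficient.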
Joint work with: Alessio Spagnoletti, Jean Prost, Nicolas Papadakis, Marcelo Pereyra, Charles Laroche, and Eva Coupeté.
Organized by: Gastón González, Lara Raad, and Pablo Musé.
