Solving Inverse Problems in Imaging by Posterior Sampling with AutoEncoding Prior
In Bayesian statistics, prior knowledge about the unobserved signal of interest is expressed as a prior distribution which, combined with observational data in the form of a likelihood function, determines the posterior distribution. This posterior can be used to derive point estimates such as the MAP or MMSE estimators, but also to estimate the uncertainty in these predictions, e.g. in the form of confidence intervals. Most work using generative models such as Generative Adversarial Networks (GANs) or Variational AutoEncoders (VAEs) as image priors focuses on computing point estimates. MCMC methods for sampling from the posterior distribution, on the other hand, permit exploration of the solution space and computation of point estimates as well as other statistics about the solutions, such as uncertainty estimates. However, the performance of widely used methods like Metropolis-Hastings depends on having precise proposal distributions, which can be challenging to define in high-dimensional spaces. In this talk, we present a Gibbs-like posterior sampling algorithm that exploits the bidirectional nature of VAE networks. Thanks to the GPU's parallelization capability, we efficiently run multiple chains, which explore the posterior distribution more rapidly and also give more accurate convergence tests. To accelerate the burn-in period, we explore an adaptation of the annealed importance sampling with resampling method.
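The Gibbs-like alternation the abstract describes — sample the latent code given the current image (encoder direction), then sample the image given the latent code and the observation (decoder direction combined with the likelihood) — can be illustrated on a toy linear-Gaussian model, where both conditionals are exact Gaussians. This is a hypothetical stand-in for a trained VAE, not the speaker's implementation; the decoder matrix `D` and variances are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian "VAE" (assumed, for illustration only):
#   latent       z ~ N(0, I)
#   decoder  x | z ~ N(D z, sd2 * I)
#   data     y | x ~ N(x, s2 * I)   (denoising likelihood)
D = np.array([[1.0], [0.5]])   # decoder matrix: 2-dim signal, 1-dim latent
sd2 = 0.1                      # decoder variance
s2 = 0.5                       # observation-noise variance
y = np.array([1.0, 0.3])       # observed noisy signal

# Conditional p(z | x): Gaussian with covariance (D^T D / sd2 + I)^-1
Cz = np.linalg.inv(D.T @ D / sd2 + np.eye(1))
Lz = np.linalg.cholesky(Cz)
# Conditional p(x | z, y): diagonal precision 1/s2 + 1/sd2
px = 1.0 / s2 + 1.0 / sd2
sx = np.sqrt(1.0 / px)

def gibbs_chain(n_iter, x0):
    """Alternate z ~ p(z|x) and x ~ p(x|z,y); targets p(x, z | y)."""
    x = x0.copy()
    xs = []
    for _ in range(n_iter):
        mz = (Cz @ D.T @ x) / sd2          # encoder-like step
        z = mz + Lz @ rng.standard_normal(1)
        mx = (y / s2 + (D @ z) / sd2) / px  # decoder + data step
        x = mx + sx * rng.standard_normal(2)
        xs.append(x)
    return np.array(xs)

samples = gibbs_chain(20000, np.zeros(2))[2000:]  # drop burn-in
post_mean = samples.mean(axis=0)

# Exact posterior mean for comparison: marginally x ~ N(0, Sigma)
Sigma = D @ D.T + sd2 * np.eye(2)
exact = Sigma @ np.linalg.solve(Sigma + s2 * np.eye(2), y)
print(post_mean, exact)
```

In this tractable setting the empirical mean of the chain matches the closed-form posterior mean, which is the kind of sanity check a Gibbs-like sampler allows. With a real VAE the conditionals come from the encoder and decoder networks, and many such chains can be run in parallel on a GPU by batching the network evaluations.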
http://www.cmat.edu.uy/eventos/seminarios/seminariodeprobabilidadyestadistica/solvinginverseproblemsinimagingbyposteriorsamplingwithautoencodingprior
Date: 2022-10-07

Time: 10:30 (UTC−03:00)

Place: Zoom
Mario González
(DMEL - Cenur Litoral Norte)