1 Introduction
Bayesian inference is an important tool in machine learning: it provides a principled way to reason about uncertainty in parameters or hidden representations. In recent years, there has been great progress in scalable Bayesian methods, which have made it possible to perform approximate inference for large-scale datasets and deep learning models.
One such method is variational inference (VI) [2], an optimization-based approach. Given a probabilistic model $p(x, z)$, where $x$ are observed data and $z$ are latent variables, VI seeks to maximize the evidence lower bound (ELBO):
(1) $\mathcal{L}(\phi) = \mathbb{E}_{q_\phi(z)} \left[ \log p(x, z) - \log q_\phi(z) \right] \to \max_\phi$
where $q_\phi(z)$ approximates the intractable true posterior distribution $p(z \mid x)$. The parametric approximation family for $q_\phi(z)$ is chosen so that $\mathcal{L}(\phi)$ and its gradients w.r.t. $\phi$ can be estimated efficiently.

Such approximations to the true posterior are often too simplistic. There exists a variety of ways to extend the variational family to mitigate this. They can be divided roughly into two main groups: those that require the probability density function of the approximate posterior to be analytically tractable (which we will call explicit models) [29, 11, 8, 4, 35, 37, 5, 26, 19] and those that do not (implicit models) [10, 21, 36, 16, 20, 30, 39]. For the latter, we only assume that it is possible to sample from such distributions, whereas the density may be inaccessible.

Not only approximate posteriors but also priors in such models are often chosen to be very simple to make computations tractable. This can lead to over-regularization and poor hidden representations in generative models such as variational autoencoders (VAE, [14]) [9, 34, 1]. In Bayesian neural networks, a standard normal prior is the default choice, but together with the mean-field posterior, it can lead to over-pruning and, consequently, underfitting [38]. To overcome this problem in practice, one usually scales the KL divergence term in the ELBO or truncates the variances of the approximate posterior
[23, 18, 17].

Another way to overcome this problem is to consider more complicated prior distributions, e.g. implicit priors. For example, hierarchical priors usually impose an implicit marginal prior once the hyperparameters are integrated out. To perform inference in such models, one often resorts to joint inference over both parameters and hyperparameters, even though we are only interested in the marginal posterior over the parameters of the model. Another example of an implicit prior distribution is the optimal prior for variational autoencoders: it can be shown that the aggregated posterior distribution is the optimal prior for a VAE [9], and it can be regarded as an implicit distribution. The VampPrior model [34] approximates this implicit prior using an explicit discrete mixture of Gaussian posteriors. However, this model can be further improved if we consider an arbitrary trainable semi-implicit prior.

In this paper, we extend the recently proposed framework of semi-implicit variational inference (SIVI) [39] and consider priors and posteriors that are defined as semi-implicit distributions. By "semi-implicit" we mean distributions that do not have a tractable PDF (i.e. are implicit), but that can be represented as a mixture of some analytically tractable conditional density with a flexible mixing distribution, either explicit or implicit.
Our contributions can be summarized as follows. Firstly, we prove that the SIVI objective is actually a lower bound on the true ELBO, which allows us to sandwich the ELBO between an upper bound and a lower bound that are both asymptotically exact. Secondly, we propose doubly semi-implicit variational inference (DSIVI), a general-purpose framework for variational inference and variational learning in the case when both the posterior and the prior are semi-implicit. We construct a SIVI-inspired asymptotically exact lower bound on the ELBO for this case, and use the variational representation of the KL divergence to obtain the upper bound. Finally, we consider a wide range of applications where semi-implicit distributions naturally arise, and show how the use of DSIVI in these settings is beneficial.
2 Preliminaries
Consider a probabilistic model defined by its joint distribution $p(x, z)$, where the variables $x$ are observed, and $z$ are the latent variables. Variational inference is a family of methods that approximate the intractable posterior distribution $p(z \mid x)$ with a tractable parametric distribution $q_\phi(z)$. To do so, VI methods maximize the evidence lower bound (ELBO):

(2) $\mathcal{L}(\phi) = \mathbb{E}_{q_\phi(z)} \left[ \log p(x, z) - \log q_\phi(z) \right] \to \max_\phi$
The maximum of the evidence lower bound corresponds to the minimum of the KL divergence $\mathrm{KL}(q_\phi(z) \,\|\, p(z \mid x))$ between the variational distribution and the exact posterior. In the more general variational learning setting, the prior distribution may also be a parametric distribution $p_\theta(z)$ [10]. In this case, one would optimize the ELBO w.r.t. both the variational parameters $\phi$ and the prior parameters $\theta$, thus performing approximate maximization of the marginal likelihood $p_\theta(x)$.
The common way to estimate the gradient of this objective is the reparameterization trick [14], which recasts sampling from the parametric distribution $q_\phi(z)$ as sampling non-parametric noise $\varepsilon \sim p(\varepsilon)$, followed by a deterministic parametric transformation $z = f(\phi, \varepsilon)$. Still, such a gradient estimator requires the log-densities of both the prior distribution and the approximate posterior in closed form. Several methods have been proposed to overcome this limitation [26, 20, 30]. However, such methods usually provide a biased estimate of the evidence lower bound with no practical way of estimating the introduced bias.
Reparameterizable distributions with no closed-form densities are usually referred to as implicit distributions. In this paper, we consider the so-called semi-implicit distributions that are defined as an implicit mixture of explicit conditional distributions:
(3) $q_\phi(z) = \int q_\phi(z \mid \psi)\, q_\phi(\psi)\, d\psi$
Here, the conditional distribution $q_\phi(z \mid \psi)$ is explicit. However, when its condition $\psi$ follows an implicit distribution $q_\phi(\psi)$, the resulting marginal distribution $q_\phi(z)$ is implicit. We will refer to $q_\phi(\psi)$ as the mixing distribution, and to $\psi$ as the mixing variables.
Note that we may easily sample from semi-implicit distributions: in order to sample $z \sim q_\phi(z)$, we first sample the mixing variable $\psi \sim q_\phi(\psi)$, and then sample $z$ from the conditional $q_\phi(z \mid \psi)$. Further in the text, we will assume this sampling scheme when using expectations over semi-implicit distributions. Also note that an arbitrary implicit distribution $q(z)$ can be represented in a semi-implicit form: $q(z) = \int \delta(z - \psi)\, q(\psi)\, d\psi$.
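The two-stage sampling scheme above is easy to implement. The sketch below is a toy illustration of our own (not a model from this paper): the mixing distribution is an implicit pushforward of Gaussian noise through a nonlinearity, and the conditional is an explicit Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_semi_implicit(n, sigma=0.1):
    """Two-stage sampling: psi from an implicit mixing distribution,
    then z from the explicit Gaussian conditional q(z|psi) = N(psi, sigma^2)."""
    eps = rng.standard_normal(n)                 # noise driving the mixing dist.
    psi = np.tanh(2.0 * eps) + 0.5 * eps         # implicit: a pushforward of noise
    z = psi + sigma * rng.standard_normal(n)     # explicit Gaussian conditional
    return z, psi

z, psi = sample_semi_implicit(10_000)
```

The density of `z` has no closed form (it would require integrating over `psi`), yet sampling remains trivial, which is exactly the setting SIVI and DSIVI exploit.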
3 Related Work
There are several approaches to inference and learning in models with implicit distributions.
One approach is commonly referred to as hierarchical variational inference or auxiliary variable models. It allows for inference with implicit approximate posteriors that can be represented as a marginal of an explicit joint distribution, $q(z) = \int q(z, \psi)\, d\psi$. The ELBO is then bounded from below using a reverse variational model [26, 28, 18]. This method does not allow for implicit prior distributions, requires access to the explicit joint density $q(z, \psi)$, and has no way to estimate the increased inference gap introduced by the imperfect reverse model.
Another family of models uses an optimal discriminator to estimate the ratio of implicit densities [20, 21, 10]. This is the most general approach to inference and learning with implicit distributions, but it also optimizes a biased surrogate ELBO, and the induced bias cannot be estimated. Moreover, different authors report that the performance of this approach is poor if the dimensionality of the implicit densities is high [31, 36]. This is the only prior approach that allows one to perform variational learning (learning the parameters $\theta$ of the prior distribution $p_\theta(z)$). However, it is non-trivial and requires differentiation through a series of SGD updates. This approach has not been validated in practice yet and has only been proposed as a theoretical concept [10]. On the contrary, DSIVI provides a lower bound that can be directly optimized w.r.t. both the variational parameters $\phi$ and the prior parameters $\theta$, naturally enabling variational learning.
Kernel implicit variational inference (KIVI) [30] is another approach that uses kernelized ridge regression to approximate the density ratio. It is reported to be more stable than the discriminator-based approaches, as the proposed density ratio estimator can be computed in closed form. Still, this procedure introduces a bias that is not addressed. Also, KIVI relies on adaptive contrast, which does not allow for implicit prior distributions [20, 30].

There are also alternative formulations of variational inference that are based on different divergences. One example is operator variational inference [25], which uses the Langevin-Stein operator to design a new variational objective. Although it allows for arbitrary implicit posterior approximations, the prior distribution has to be explicit.
4 Doubly Semi-Implicit Variational Inference
In this section, we describe semi-implicit variational inference, study its properties, and then extend it to the case of semi-implicit prior distributions.
4.1 Semi-Implicit Variational Inference
Semi-implicit variational inference [39] considers models with an explicit joint distribution $p(x, z)$ and a semi-implicit approximate posterior $q_\phi(z)$, as defined in Eq. (3). The basic idea of semi-implicit variational inference is to approximate the semi-implicit approximate posterior with a finite mixture of the explicit conditionals:
(4) $q_\phi^K(z \mid \psi^{(0)}, \ldots, \psi^{(K)}) = \frac{1}{K+1} \sum_{k=0}^{K} q_\phi(z \mid \psi^{(k)}), \qquad \psi^{(k)} \sim q_\phi(\psi)$
SIVI provides an upper bound $\overline{\mathcal{L}}_K$ and a surrogate objective $\mathcal{L}_K$ that both converge to the ELBO $\mathcal{L}$ as $K$ goes to infinity ($K \to \infty$):
(5) $\overline{\mathcal{L}}_K = \mathbb{E}_{\psi^{(1)}, \ldots, \psi^{(K)} \sim q_\phi(\psi)}\, \mathbb{E}_{z \sim q_\phi(z)} \left[ \log p(x, z) - \log \frac{1}{K} \sum_{k=1}^{K} q_\phi(z \mid \psi^{(k)}) \right]$

(6) $\mathcal{L}_K = \mathbb{E}_{\psi^{(0)}, \ldots, \psi^{(K)} \sim q_\phi(\psi)}\, \mathbb{E}_{z \sim q_\phi(z \mid \psi^{(0)})} \left[ \log p(x, z) - \log \frac{1}{K+1} \sum_{k=0}^{K} q_\phi(z \mid \psi^{(k)}) \right]$
The surrogate objective $\mathcal{L}_K$ is then used for optimization.
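A Monte Carlo estimate of the surrogate objective (6) can be sketched as follows. The toy model below is our own illustrative choice, not one from the paper: the conditional, the mixing distribution, and the likelihood are all Gaussian, so the semi-implicit marginal and the exact ELBO are available in closed form and the estimate can be checked against them.

```python
import numpy as np

rng = np.random.default_rng(1)
x, K, N = 1.0, 50, 20_000          # observation, mixture size, MC samples

def log_normal(v, mean, var):
    return -0.5 * np.log(2 * np.pi * var) - (v - mean) ** 2 / (2 * var)

# Semi-implicit posterior: psi ~ N(0, 1), q(z|psi) = N(z|psi, 1)  =>  q(z) = N(0, 2).
psi = rng.standard_normal((N, K + 1))          # psi^(0), ..., psi^(K) per sample
z = psi[:, 0] + rng.standard_normal(N)         # z ~ q(z | psi^(0))

# Toy model: p(z) = N(0, 2), p(x|z) = N(x|z, 1); here q(z) = p(z) exactly,
# so the true ELBO is E_q log p(x|z) = -0.5*log(2*pi) - 0.5*E[(x - z)^2].
log_p = log_normal(z, 0.0, 2.0) + log_normal(x, z, 1.0)

# Finite-mixture density estimate from Eq. (6): average the explicit conditional
# over all K+1 mixing samples, including the one that generated z.
log_q_hat = np.log(np.mean(np.exp(log_normal(z[:, None], psi, 1.0)), axis=1))

L_K = float(np.mean(log_p - log_q_hat))        # MC estimate of the surrogate
elbo_true = -0.5 * np.log(2 * np.pi) - 1.5     # E[(1 - z)^2] = 1 + Var[z] = 3
```

With $K = 50$ mixture components, the estimate lies slightly below the exact ELBO, consistent with the lower-bound property established in the next subsection.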
4.2 SIVI Lower Bound
Although it was shown that $\mathcal{L}_0$ is a lower bound for the ELBO, it has not been clear whether this holds for arbitrary $K$, and whether maximizing $\mathcal{L}_K$ leads to a correct procedure. Here, we show that $\mathcal{L}_K$ is indeed a lower bound on the ELBO $\mathcal{L}$ for any $K$.
Theorem 1. For any $K \geq 0$, the SIVI surrogate objective $\mathcal{L}_K$ is a lower bound on the ELBO, is non-decreasing in $K$, and is asymptotically exact:

(7) $\mathcal{L}_K \leq \mathcal{L}_{K+1} \leq \mathcal{L}$

(8) $\lim_{K \to \infty} \mathcal{L}_K = \mathcal{L}$
The proof can be found in Appendix A.
4.3 Semi-Implicit Priors
Inspired by the derivation of the SIVI upper bound, we can derive a lower bound for the case of a semi-implicit prior distribution. For now, for simplicity, assume an explicit approximate posterior $q_\phi(z)$ and a semi-implicit prior
(9) $p_\theta(z) = \int p_\theta(z \mid \zeta)\, p_\theta(\zeta)\, d\zeta$

(10) $\mathcal{L}_K^{p} = \mathbb{E}_{\zeta^{(1)}, \ldots, \zeta^{(K)} \sim p_\theta(\zeta)}\, \mathbb{E}_{z \sim q_\phi(z)} \left[ \log p(x \mid z) + \log \frac{1}{K} \sum_{k=1}^{K} p_\theta(z \mid \zeta^{(k)}) - \log q_\phi(z) \right]$
This bound has the same properties: it is non-decreasing in $K$ and is asymptotically exact. To see why $\mathcal{L}_K^{p} \leq \mathcal{L}$, one just needs to apply Jensen's inequality for the logarithm:
(11) $\mathbb{E}_{\zeta^{(1)}, \ldots, \zeta^{(K)}} \log \frac{1}{K} \sum_{k=1}^{K} p_\theta(z \mid \zeta^{(k)}) \leq \log \mathbb{E}_{\zeta^{(1)}, \ldots, \zeta^{(K)}} \frac{1}{K} \sum_{k=1}^{K} p_\theta(z \mid \zeta^{(k)}) = \log p_\theta(z)$
To show that this bound is non-decreasing in $K$, one can refer to the proof of Proposition 3 in the SIVI paper [39, Appendix A].
Note that it is no longer possible to use the same trick to obtain an upper bound. Still, we can obtain an upper bound using the variational representation of the KL divergence [24]:
(12) $\mathrm{KL}(q_\phi(z) \,\|\, p_\theta(z)) = \sup_{g} \left[ \mathbb{E}_{q_\phi(z)}\, g(z) - \mathbb{E}_{p_\theta(z)}\, e^{g(z) - 1} \right]$

(13) $\mathcal{L} \leq \overline{\mathcal{L}}^{\eta} = \mathbb{E}_{q_\phi(z)} \log p(x \mid z) - \mathbb{E}_{q_\phi(z)}\, g_\eta(z) + \mathbb{E}_{p_\theta(z)}\, e^{g_\eta(z) - 1}$
Here, we substitute the maximization over all functions $g$ with a single parametric function $g_\eta$. In order to obtain a tighter bound, we can minimize this bound w.r.t. the parameters $\eta$ of the function $g_\eta$.
Note that in order to find the optimal value of $\eta$, one does not need to estimate the entropy term or the likelihood term of the objective:
(14) $\max_{\eta} \left[ \mathbb{E}_{q_\phi(z)}\, g_\eta(z) - \mathbb{E}_{p_\theta(z)}\, e^{g_\eta(z) - 1} \right]$
This allows us to obtain a lower bound on the KL divergence between two arbitrary (semi-)implicit distributions, and, consequently, an upper bound on the ELBO.
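The variational representation of the KL divergence can be illustrated numerically. The sketch below is our own toy example with two Gaussians, where the KL divergence and the optimal critic $g^*(z) = 1 + \log q(z) - \log p(z)$ are known in closed form: any critic yields a lower bound on the KL, and the optimal critic makes it tight.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_normal(z, mu, var):
    return -0.5 * np.log(2 * np.pi * var) - (z - mu) ** 2 / (2 * var)

# q = N(0, 1), p = N(1, 2); closed-form KL(q || p) between two Gaussians
kl_true = 0.5 * (np.log(2.0) + (1.0 + 1.0) / 2.0 - 1.0)

zq = rng.normal(0.0, 1.0, 200_000)             # samples from q
zp = rng.normal(1.0, np.sqrt(2.0), 200_000)    # samples from p

def nwj(g):
    # Variational lower bound: E_q[g(z)] - E_p[exp(g(z) - 1)] <= KL(q || p)
    return float(np.mean(g(zq)) - np.mean(np.exp(g(zp) - 1.0)))

g_opt = lambda z: 1.0 + log_normal(z, 0.0, 1.0) - log_normal(z, 1.0, 2.0)
g_bad = lambda z: np.zeros_like(z)             # an arbitrary suboptimal critic

kl_opt = nwj(g_opt)   # tight: matches kl_true up to MC error
kl_bad = nwj(g_bad)   # loose: strictly below kl_true
```

Only samples from $q$ and $p$ are needed to evaluate and optimize this objective, which is what makes it applicable to (semi-)implicit distributions.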
4.4 Final Objective
We can combine the bounds for the semi-implicit posterior and the semi-implicit prior to obtain the final lower bound
(15) $\mathcal{L}_{K,L} = \mathbb{E}_{\psi^{(0)}, \ldots, \psi^{(K)} \sim q_\phi(\psi)}\, \mathbb{E}_{\zeta^{(1)}, \ldots, \zeta^{(L)} \sim p_\theta(\zeta)}\, \mathbb{E}_{z \sim q_\phi(z \mid \psi^{(0)})} \left[ \log p(x \mid z) + \log \frac{1}{L} \sum_{l=1}^{L} p_\theta(z \mid \zeta^{(l)}) - \log \frac{1}{K+1} \sum_{k=0}^{K} q_\phi(z \mid \psi^{(k)}) \right]$
and the upper bound
(16) $\overline{\mathcal{L}}^{\eta} = \mathbb{E}_{q_\phi(z)} \log p(x \mid z) - \mathbb{E}_{q_\phi(z)}\, g_\eta(z) + \mathbb{E}_{p_\theta(z)}\, e^{g_\eta(z) - 1}$
The lower bound is non-decreasing in both $K$ and $L$, and is asymptotically exact:
(17) $\mathcal{L}_{K,L} \leq \mathcal{L}_{K+1,L}, \qquad \mathcal{L}_{K,L} \leq \mathcal{L}_{K,L+1} \leq \mathcal{L}$

(18) $\lim_{K, L \to \infty} \mathcal{L}_{K,L} = \mathcal{L}$
We use the lower bound for optimization, whereas the upper bound may be used to estimate the gap between the lower bound and the true ELBO. The final algorithm for DSIVI is presented in Algorithm 1. Unless stated otherwise, we use one MC sample to estimate the gradients of the lower bound (see Algorithm 1 for more details). In the case where the prior distribution is explicit, one may resort to the upper bound $\overline{\mathcal{L}}_K$ proposed in SIVI [39].
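The combined lower bound (15) is estimated by plugging finite-mixture density estimates for both the semi-implicit prior and the semi-implicit posterior into the ELBO. The sketch below uses a toy conjugate model of our own choosing, in which both semi-implicit distributions marginalize to the same Gaussian, so the exact ELBO is known and the estimate can be verified to lie below it.

```python
import numpy as np

rng = np.random.default_rng(3)
x, K, L, N = 1.0, 50, 50, 20_000

def log_normal(v, mean, var):
    return -0.5 * np.log(2 * np.pi * var) - (v - mean) ** 2 / (2 * var)

# Semi-implicit posterior: psi ~ N(0,1),  q(z|psi)  = N(z|psi, 1)  =>  q(z) = N(0, 2)
# Semi-implicit prior:     zeta ~ N(0,1), p(z|zeta) = N(z|zeta, 1) =>  p(z) = N(0, 2)
psi = rng.standard_normal((N, K + 1))
zeta = rng.standard_normal((N, L))
z = psi[:, 0] + rng.standard_normal(N)         # z ~ q(z | psi^(0))

log_lik = log_normal(x, z, 1.0)                # log p(x|z) = log N(x|z, 1)
log_prior_hat = np.log(np.mean(np.exp(log_normal(z[:, None], zeta, 1.0)), axis=1))
log_post_hat = np.log(np.mean(np.exp(log_normal(z[:, None], psi, 1.0)), axis=1))

dsivi_lower = float(np.mean(log_lik + log_prior_hat - log_post_hat))
elbo_true = -0.5 * np.log(2 * np.pi) - 1.5     # q(z) = p(z), so ELBO = E_q log p(x|z)
```

Both mixture estimates bias the result downward (the prior term by Jensen's inequality, the posterior term by the argument of Theorem 1), so the estimate stays below the true ELBO, approaching it as $K$ and $L$ grow.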
5 Applications
In this section we describe several settings that can benefit from semiimplicit prior distributions.
5.1 VAE with Semi-Implicit Priors
The default choice of the prior distribution $p(z)$ for the VAE model is the standard Gaussian distribution $\mathcal{N}(z \mid 0, I)$. However, such a choice is known to over-regularize the model [34, 7].

It can be shown that the so-called aggregated posterior distribution is the optimal prior distribution for a VAE in terms of the value of the ELBO [9, 34]:
(19) $p^*(z) = \frac{1}{N} \sum_{n=1}^{N} q_\phi(z \mid x_n)$
where the summation is over all $N$ training samples $x_n$, $n = 1, \ldots, N$. However, this extreme case leads to overfitting [9, 34] and is highly computationally inefficient. A possible middle ground is to consider the variational mixture of posteriors prior distribution (the VampPrior) [34]:
(20) $p_\lambda(z) = \frac{1}{K} \sum_{k=1}^{K} q_\phi(z \mid u_k)$
The VampPrior is defined as a mixture of variational posteriors at a set of inducing points $u_1, \ldots, u_K$. These inducing points may be learnable (the ordinary VampPrior) or fixed at a random subset of the training data (VampPrior-data). The VampPrior battles over-regularization by considering a flexible empirical prior distribution, being a mixture of fully-factorized Gaussians, and by coupling the parameters of the prior distribution and the variational posteriors.
There are two ways to improve this technique by using DSIVI. We can regard the aggregated posterior as a semi-implicit distribution:
(21) $p^*(z) = \mathbb{E}_{x' \sim p_{\mathrm{data}}(x')}\, q_\phi(z \mid x')$
Next, we can use it as a semi-implicit prior and exploit the lower bound presented in Section 4.3:
(22) $\mathbb{E}_{x'_1, \ldots, x'_K \sim p_{\mathrm{data}}}\, \mathbb{E}_{z \sim q_\phi(z \mid x)} \left[ \log p_\theta(x \mid z) + \log \frac{1}{K} \sum_{k=1}^{K} q_\phi(z \mid x'_k) - \log q_\phi(z \mid x) \right]$
Note that the only difference from the training objective of VampPrior-data is that the inducing points are not fixed, but are resampled at each estimation of the lower bound. As we show in the experiments, this reformulation of VampPrior-data drastically improves its test log-likelihood.
We can also consider an arbitrary semi-implicit prior distribution:
(23) $p_\theta(z) = \int p_\theta(z \mid \zeta)\, p_\theta(\zeta)\, d\zeta$
For example, we consider a fully-factorized Gaussian conditional prior $p_\theta(z \mid \zeta) = \mathcal{N}(z \mid \zeta, \mathrm{diag}(\sigma^2))$ with mean $\zeta$ and trainable variances $\sigma^2$. The implicit generator can be parameterized by an arbitrary neural network with weights $\theta$ that transforms standard Gaussian noise $\varepsilon$ into the mixing parameters $\zeta$. As we show in the experiments, such a semi-implicit prior outperforms the VampPrior even though it does not couple the parameters of the prior and the variational posteriors.
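Sampling from such a generator-based semi-implicit prior can be sketched as follows. The layer sizes and the (random, untrained) weights below are purely illustrative placeholders for a trained generator; in practice they would be learned by maximizing the DSIVI lower bound.

```python
import numpy as np

rng = np.random.default_rng(4)
d_noise, d_latent = 8, 4          # illustrative sizes, not the ones from the paper

# Random weights stand in for a trained generator network; in practice these
# would be the parameters theta, learned together with the log-variances.
W1, b1 = 0.3 * rng.standard_normal((16, d_noise)), np.zeros(16)
W2, b2 = 0.3 * rng.standard_normal((d_latent, 16)), np.zeros(d_latent)
log_var = np.zeros(d_latent)      # trainable log-variances (fixed at 0 here)

def sample_prior(n):
    """z ~ p(z): zeta = MLP(eps) with eps ~ N(0, I), then z ~ N(zeta, diag(var))."""
    eps = rng.standard_normal((n, d_noise))
    h = np.maximum(eps @ W1.T + b1, 0.0)         # ReLU hidden layer
    zeta = h @ W2.T + b2                         # implicit mixing variable
    return zeta + np.exp(0.5 * log_var) * rng.standard_normal((n, d_latent))

z = sample_prior(1000)
```

The marginal prior density is intractable, but the Gaussian conditional stays explicit, which is all the DSIVI lower bound requires.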
We can also apply the importance-weighted lower bound [3] similarly to the importance-weighted SIVAE [39], and obtain IW-DSIVAE, a lower bound on the IWAE objective for a variational autoencoder with a semi-implicit prior and a semi-implicit posterior. The exact expression for this lower bound is presented in Appendix B.
5.2 Variational Inference with Hierarchical Priors
A lot of probabilistic models use hierarchical prior distributions: instead of a non-parametric prior, they use a parametric conditional prior $p(w \mid \alpha)$ with hyperparameters $\alpha$, and a hyperprior $p(\alpha)$ over these hyperparameters. A discriminative model with such a hierarchical prior may be defined as follows [22, 32, 33, 6, 30, 17]:

(24) $p(Y, w, \alpha \mid X) = p(\alpha)\, p(w \mid \alpha) \prod_{i=1}^{N} p(y_i \mid x_i, w)$

where $w$ denotes the model parameters, and $(X, Y) = \{(x_i, y_i)\}_{i=1}^{N}$ is the training data.
A common way to perform inference in such models is to approximate the joint posterior $p(w, \alpha \mid Y, X)$ with $q(w, \alpha)$ given the training data [6, 30, 17]. Then the marginal approximate posterior $q(w) = \int q(w, \alpha)\, d\alpha$ is used to approximate the predictive distribution on unseen data $(x^*, y^*)$:
(25) $p(y^* \mid x^*, Y, X) \approx \mathbb{E}_{q(w)}\, p(y^* \mid x^*, w)$
The inference is performed by maximization of the following variational lower bound:
(26) $\mathcal{L}_{\mathrm{joint}} = \mathbb{E}_{q(w, \alpha)} \left[ \log p(Y \mid X, w) + \log p(w \mid \alpha) + \log p(\alpha) - \log q(w, \alpha) \right]$
We actually are not interested in the joint posterior $p(w, \alpha \mid Y, X)$; we only need it to obtain the marginal posterior $p(w \mid Y, X)$. In this case, we can reformulate the problem as variational inference with a semi-implicit prior $p(w) = \int p(w \mid \alpha)\, p(\alpha)\, d\alpha$ and a semi-implicit posterior $q(w) = \int q(w \mid \alpha)\, q(\alpha)\, d\alpha$:
(27) $\mathcal{L}_{\mathrm{marg}} = \mathbb{E}_{q(w)} \log p(Y \mid X, w) - \mathrm{KL}(q(w) \,\|\, p(w))$
Then it can be shown that optimization of the second objective results in a better fit of the marginal posterior:
Theorem 2. Let $\phi^*$ and $\hat\phi$ maximize $\mathcal{L}_{\mathrm{marg}}$ and $\mathcal{L}_{\mathrm{joint}}$ correspondingly. Then

(28) $\mathrm{KL}(q_{\phi^*}(w) \,\|\, p(w \mid Y, X)) \leq \mathrm{KL}(q_{\hat\phi}(w) \,\|\, p(w \mid Y, X))$
The proof can be found in Appendix C.
This means that if the likelihood function does not depend on the hyperparameters $\alpha$, it is beneficial to consider the semi-implicit formulation instead of the joint formulation of variational inference, even if the approximation family stays exactly the same. In the experiments, we show that the proposed DSIVI procedure matches the performance of direct optimization of $\mathcal{L}_{\mathrm{marg}}$, whereas joint VI performs much worse.
6 Experiments
6.1 Variational Inference with Hierarchical Priors
We consider a Bayesian neural network with a fully-factorized hierarchical prior distribution: a Gaussian conditional $p(w \mid \alpha)$ and a Gamma hyperprior over the inverse variances $\alpha$. Such a hierarchical prior induces a fully-factorized Student's t-distribution with one degree of freedom as the marginal prior $p(w)$. Note that in this case, we can estimate the marginal evidence lower bound directly. We consider a fully-connected neural network with two hidden layers of 300 and 100 neurons on the MNIST dataset [15]. We train all methods with the same hyperparameters: we use batch size 200, use the Adam optimizer [12] with default parameters, and train for 200 epochs, decaying the learning rate linearly from its initial value.
We consider three different ways to perform inference in this model: marginal inference, joint inference, and DSIVI, as described in Section 5.2. For joint inference, we consider a fully-factorized joint approximate posterior $q(w, \alpha) = q(w)\, q(\alpha)$, with $q(w)$ being a fully-factorized Gaussian and $q(\alpha)$ being a fully-factorized log-normal distribution. Such a joint approximate posterior induces a fully-factorized Gaussian marginal posterior $q(w)$. Therefore, we use a fully-factorized Gaussian posterior for marginal inference and DSIVI. Note that in this case, only the prior distribution is semi-implicit. All models have been trained with the local reparameterization trick [13].

We perform inference using these three variational objectives, and then compare the true evidence lower bound on the training set. As the marginal variational approximation is the same in all cases, the training ELBO can act as a proxy metric for the KL divergence between the marginal approximate posterior and the true marginal posterior. The results are presented in Figure 1. DSIVI with a small number of samples $K$ during training exactly matches the performance of true marginal variational inference, whereas the other approximations fall far behind. All three methods achieve similar test-set accuracy, and the test log-likelihood is approximately the same for all methods. However, the difference in the marginal ELBO is high. The final values of the ELBO, its decomposition into the train log-likelihood and the KL term, and the test log-likelihood are presented in Table 2 in Appendix C.
6.2 Comparison to Alternatives
We compare DSIVI to other methods for implicit VI on a toy problem: approximating a centered standard Student's t-distribution $p(z)$ with one degree of freedom by a Laplace distribution $q_{\mu, b}(z)$, representing both as scale mixtures of Gaussians. Namely, the Student's t-distribution is a Gaussian whose inverse variance follows a Gamma mixing distribution, and the Laplace distribution is a Gaussian whose variance follows an exponential mixing distribution. We train all methods by minimizing the corresponding approximations to the KL divergence $\mathrm{KL}(q_{\mu, b} \,\|\, p)$ w.r.t. the parameters $\mu$ and $b$ of the approximation $q_{\mu, b}$.
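The scale-mixture representation of the Laplace distribution can be checked numerically. In the sketch below (our own illustration, with arbitrary parameter values), a Gaussian whose variance is drawn from an exponential distribution with mean $2b^2$ reproduces the moments of $\mathrm{Laplace}(\mu, b)$:

```python
import numpy as np

rng = np.random.default_rng(5)
mu, b, n = 0.0, 1.5, 400_000

# Gaussian scale mixture: V ~ Exp(mean 2*b^2), z | V ~ N(mu, V) => z ~ Laplace(mu, b).
# (Check via characteristic functions: E[exp(-V t^2 / 2)] = 1 / (1 + b^2 t^2).)
V = rng.exponential(2.0 * b ** 2, n)             # mixing distribution over the variance
z = mu + np.sqrt(V) * rng.standard_normal(n)     # explicit Gaussian conditional

# Laplace(mu, b) has mean mu and variance 2*b^2, so z should match those moments.
```

The analogous representation for the Student's t target uses a Gamma mixing distribution over the inverse variance instead of an exponential one over the variance.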
As baselines, we use prior methods for implicit VI: Adversarial Variational Bayes (AVB) [20], which is a discriminator-based method, and Kernel Implicit Variational Inference (KIVI) [30]. For AVB, we fix the architecture of the "discriminator" neural network to have two hidden layers with 3 and 4 hidden units and LeakyReLU activations, and for KIVI, we use a fixed regularization coefficient with a varying number of samples. For AVB, we tried different numbers of training samples and optimization steps to optimize the discriminator at each step of optimizing over $(\mu, b)$. We used the Adam optimizer with a fixed learning rate and one MC sample to estimate the gradients w.r.t. $(\mu, b)$.
We report the KL divergence $\mathrm{KL}(q_{\mu, b} \,\|\, p)$, estimated using 10000 MC samples and averaged over 10 runs. The results are presented in Figure 2. DSIVI converges faster, is more stable, and has only one hyperparameter, the number of samples $K$ in the DSIVI objective.
6.3 Sequential Approximation
We illustrate the expressiveness of DSIVI with implicit prior and posterior distributions on the following toy problem. Consider an explicit distribution $p^*(z)$. We would like to learn a semi-implicit distribution $q_{\theta_1}(z)$ to match $p^*(z)$. During the first step, we apply DSIVI to tune the parameters $\theta_1$ so as to minimize $\mathrm{KL}(q_{\theta_1} \,\|\, p^*)$. Then, we take the trained semi-implicit $q_{\theta_1}$ as a new target and tune $q_{\theta_2}$, minimizing $\mathrm{KL}(q_{\theta_2} \,\|\, q_{\theta_1})$. After we repeat this iterative process $T$ times, the distribution $q_{\theta_T}$, obtained through this chain of minimizations, should still match $p^*(z)$.
In our experiments, we follow [39] and model the mixing distribution by a multilayer perceptron (MLP) with layer widths [30, 60, 30], ReLU activations, and ten-dimensional standard normal noise as its input. We also fix all conditionals to be Gaussians centered at the mixing variables with a fixed variance. We choose $p^*(z)$ to be either a one-dimensional mixture of Gaussians or a two-dimensional "banana" distribution. In Figure 3, we plot the values of $\mathrm{KL}(q_{\theta_t} \,\|\, p^*)$ for different values of $K$ (see Algorithm 1) when $p^*(z)$ is a one-dimensional mixture of Gaussians (see Appendix D for a detailed description and additional plots). In Figure 4, we plot the approximate PDF of $q_{\theta_T}$ after 9 steps for different values of $K$. As we can see, even though both the "prior" and the "posterior" distributions are semi-implicit, the algorithm can still accurately learn the original target distribution after several iterations.

6.4 VAE with Semi-Implicit Optimal Prior
We follow the same experimental setup and use the same hyperparameters as suggested for the VampPrior [34]. We consider two architectures, the VAE and the HVAE (hierarchical VAE, [34]), applied to the MNIST dataset with dynamic binarization [27]. In both cases, all distributions (except the prior) have been fully-factorized, with parameters modeled by neural networks with two hidden layers of 300 hidden units each. We used 40-dimensional latent vectors $z$ (40-dimensional $z_1$ and $z_2$ for the HVAE) and a Bernoulli likelihood with dynamic binarization for the MNIST dataset. As suggested in the VampPrior paper, we used 500 pseudo-inputs for the VampPrior-based models in all cases (a higher number of pseudo-inputs led to overfitting). To measure the performance of all models, we bound the test log-likelihood with the IWAE objective [3] with 5000 samples for the VampPrior-based methods, and estimate the corresponding IW-DSIVAE lower bound for the DSIVI-based methods (see Appendix B for more details).

Method  LL

VAE + VampPrior-data
VAE + VampPrior
VAE + DSIVI-prior (K=2000)
VAE + DSIVI-agg (K=500)
VAE + DSIVI-agg (K=5000)
HVAE + VampPrior-data
HVAE + VampPrior
HVAE + DSIVI-agg (K=5000)
We consider the two formulations described in Section 5.1: DSIVI-agg stands for the semi-implicit formulation of the aggregated posterior (21), and DSIVI-prior stands for a general semi-implicit prior (23). For DSIVI-prior, we have used a fully-factorized Gaussian conditional $p_\theta(z \mid \zeta) = \mathcal{N}(z \mid \zeta, \mathrm{diag}(\sigma^2))$, where the mixing parameters $\zeta$ are the output of a fully-connected neural network with two hidden layers of 300 and 600 hidden units respectively, applied to 300-dimensional standard Gaussian noise $\varepsilon$. The first and second hidden layers were followed by ReLU non-linearities, and no non-linearity was applied to obtain $\zeta$. We did not use warm-up [34] with DSIVI-prior.
The results are presented in Table 1. DSIVI-agg is a simple modification of VampPrior-data that significantly improves the test log-likelihood, and it even outperforms the VampPrior with trained inducing inputs. DSIVI-prior outperforms the VampPrior even without warm-up and without coupling the parameters of the prior and the variational posteriors.
7 Conclusion
We have presented DSIVI, a general-purpose framework for variational inference and variational learning when both the approximate posterior distribution and the prior distribution are semi-implicit. DSIVI provides an asymptotically exact lower bound on the ELBO, as well as an upper bound that can be made arbitrarily tight. It allows us to estimate the ELBO in any model with semi-implicit distributions, which was not the case for previous methods. We have shown the effectiveness of DSIVI applied to a range of problems, e.g. models with hierarchical priors and variational autoencoders with semi-implicit empirical priors. In particular, we show how the DSIVI-based treatment improves the performance of the VampPrior, the current state-of-the-art prior distribution for VAEs.
References
 [1] A. Alemi, B. Poole, I. Fischer, J. Dillon, R. A. Saurous, and K. Murphy. Fixing a broken elbo. In International Conference on Machine Learning, pages 159–168, 2018.
 [2] D. M. Blei, A. Kucukelbir, and J. D. McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017.
 [3] Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.
 [4] R. J. Giordano, T. Broderick, and M. I. Jordan. Linear response methods for accurate covariance estimates from mean field variational bayes. In Advances in Neural Information Processing Systems, pages 1441–1449, 2015.
 [5] S. Han, X. Liao, D. Dunson, and L. Carin. Variational gaussian copula inference. In Artificial Intelligence and Statistics, pages 829–838, 2016.

 [6] J. M. Hernández-Lobato and R. Adams. Probabilistic backpropagation for scalable learning of bayesian neural networks. In International Conference on Machine Learning, pages 1861–1869, 2015.
 [7] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. 2016.
 [8] M. D. Hoffman and D. M. Blei. Structured stochastic variational inference. In Artificial Intelligence and Statistics, 2015.
 [9] M. D. Hoffman and M. J. Johnson. Elbo surgery: yet another way to carve up the variational evidence lower bound. In Workshop in Advances in Approximate Bayesian Inference, NIPS, 2016.
 [10] F. Huszár. Variational inference using implicit distributions. arXiv preprint arXiv:1702.08235, 2017.
 [11] T. S. Jaakkola and M. I. Jordan. Improving the mean field approximation via the use of mixture distributions. In Learning in graphical models, pages 163–173. Springer, 1998.
 [12] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 [13] D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pages 2575–2583, 2015.
 [14] D. P. Kingma and M. Welling. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
 [15] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradientbased learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 [16] Y. Li and R. E. Turner. Gradient estimators for implicit models. arXiv preprint arXiv:1705.07107, 2017.
 [17] C. Louizos, K. Ullrich, and M. Welling. Bayesian compression for deep learning. In Advances in Neural Information Processing Systems, pages 3288–3298, 2017.
 [18] C. Louizos and M. Welling. Multiplicative normalizing flows for variational bayesian neural networks. arXiv preprint arXiv:1703.01961, 2017.
 [19] L. Maaløe, C. K. Sønderby, S. K. Sønderby, and O. Winther. Auxiliary deep generative models. In M. F. Balcan and K. Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1445–1453, New York, New York, USA, 20–22 Jun 2016. PMLR.
 [20] L. Mescheder, S. Nowozin, and A. Geiger. Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. ICML, 2017.
 [21] S. Mohamed and B. Lakshminarayanan. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016.
 [22] R. M. Neal. Bayesian learning for neural networks, volume 118. 1995.
 [23] K. Neklyudov, D. Molchanov, A. Ashukha, and D. P. Vetrov. Structured bayesian pruning via lognormal multiplicative noise. In Advances in Neural Information Processing Systems, pages 6775–6784, 2017.
 [24] X. Nguyen, M. J. Wainwright, and M. I. Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11):5847–5861, 2010.
 [25] R. Ranganath, D. Tran, J. Altosaar, and D. Blei. Operator variational inference. In Advances in Neural Information Processing Systems, pages 496–504, 2016.
 [26] R. Ranganath, D. Tran, and D. Blei. Hierarchical variational models. In International Conference on Machine Learning, pages 324–333, 2016.

 [27] R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th international conference on Machine learning, pages 872–879. ACM, 2008.
 [28] T. Salimans, D. Kingma, and M. Welling. Markov chain monte carlo and variational inference: Bridging the gap. In International Conference on Machine Learning, pages 1218–1226, 2015.
 [29] L. K. Saul and M. I. Jordan. Exploiting tractable substructures in intractable networks. In Advances in neural information processing systems, pages 486–492, 1996.
 [30] J. Shi, S. Sun, and J. Zhu. Kernel implicit variational inference. arXiv preprint arXiv:1705.10119, 2017.
 [31] M. Sugiyama, T. Suzuki, and T. Kanamori. Density ratio estimation in machine learning. Cambridge University Press, 2012.
 [32] M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211–244, 2001.
 [33] M. Titsias and M. LázaroGredilla. Doubly stochastic variational bayes for nonconjugate inference. In International Conference on Machine Learning, pages 1971–1979, 2014.
 [34] J. M. Tomczak and M. Welling. VAE with a VampPrior. arXiv preprint arXiv:1705.07120, 2017.
 [35] D. Tran, D. Blei, and E. M. Airoldi. Copula variational inference. In Advances in Neural Information Processing Systems, pages 3564–3572, 2015.
 [36] D. Tran, R. Ranganath, and D. Blei. Hierarchical implicit models and likelihoodfree variational inference. In Advances in Neural Information Processing Systems, pages 5523–5533, 2017.
 [37] D. Tran, R. Ranganath, and D. M. Blei. The variational gaussian process. ICLR, 2016.
 [38] B. Trippe and R. Turner. Overpruning in variational bayesian neural networks. arXiv preprint arXiv:1801.06230, 2018.
 [39] M. Yin and M. Zhou. Semiimplicit variational inference. In Proceedings of the 35th International Conference on Machine Learning, volume 80, pages 5660–5669. PMLR, 2018.
Appendix A Proof of the SIVI Lower Bound for Semi-Implicit Posteriors
Theorem 1. For any $K \geq 0$, the SIVI surrogate objective $\mathcal{L}_K$ is a lower bound on the ELBO, is non-decreasing in $K$, and is asymptotically exact: $\mathcal{L}_K \leq \mathcal{L}_{K+1} \leq \mathcal{L}$, and $\lim_{K \to \infty} \mathcal{L}_K = \mathcal{L}$.
Proof.
For brevity, we denote $\psi^{(0:K)} = (\psi^{(0)}, \ldots, \psi^{(K)})$ and $q_K(z \mid \psi^{(0:K)}) = \frac{1}{K+1} \sum_{k=0}^{K} q_\phi(z \mid \psi^{(k)})$. First, notice that due to the symmetry in the indices, the regularized lower bound does not depend on the index $k$ of the conditional $q_\phi(z \mid \psi^{(k)})$:

(31) $\mathcal{L}_K = \mathbb{E}_{\psi^{(0:K)}}\, \mathbb{E}_{z \sim q_\phi(z \mid \psi^{(0)})} \left[ \log p(x, z) - \log q_K(z \mid \psi^{(0:K)}) \right] =$

(32) $= \mathbb{E}_{\psi^{(0:K)}}\, \mathbb{E}_{z \sim q_\phi(z \mid \psi^{(k)})} \left[ \log p(x, z) - \log q_K(z \mid \psi^{(0:K)}) \right] \quad \forall k \in \{0, \ldots, K\}$

Therefore, we can rewrite $\mathcal{L}_K$ as follows:

(33) $\mathcal{L}_K = \frac{1}{K+1} \sum_{k=0}^{K} \mathbb{E}_{\psi^{(0:K)}}\, \mathbb{E}_{z \sim q_\phi(z \mid \psi^{(k)})} \left[ \log p(x, z) - \log q_K(z \mid \psi^{(0:K)}) \right] =$

(34) $= \mathbb{E}_{\psi^{(0:K)}}\, \mathbb{E}_{z \sim q_K(z \mid \psi^{(0:K)})} \left[ \log p(x, z) - \log q_K(z \mid \psi^{(0:K)}) \right] =$

(35) $= \mathbb{E}_{\psi^{(0:K)}}\, \mathcal{L}\!\left[ q_K(z \mid \psi^{(0:K)}) \right]$

Note that this is just the value of the evidence lower bound with the approximate posterior $q_K(z \mid \psi^{(0:K)})$, averaged over all values of $\psi^{(0:K)}$. We can also use the fact that $q_\phi(z) = \mathbb{E}_{\psi^{(0:K)}}\, q_K(z \mid \psi^{(0:K)})$ to rewrite the true ELBO in the same expectations:

(36) $\mathcal{L} = \mathbb{E}_{z \sim q_\phi(z)} \left[ \log p(x, z) - \log q_\phi(z) \right] =$

(37) $= \mathbb{E}_{\psi^{(0:K)}}\, \mathbb{E}_{z \sim q_K(z \mid \psi^{(0:K)})} \left[ \log p(x, z) - \log q_\phi(z) \right]$

We want to prove that $\mathcal{L}_K \leq \mathcal{L}$. Consider their difference $\Delta_K = \mathcal{L} - \mathcal{L}_K$:

(38) $\Delta_K = \mathbb{E}_{\psi^{(0:K)}}\, \mathbb{E}_{z \sim q_K(z \mid \psi^{(0:K)})} \left[ \log q_K(z \mid \psi^{(0:K)}) - \log q_\phi(z) \right] =$

(39) $= \mathbb{E}_{\psi^{(0:K)}}\, \mathrm{KL}\!\left( q_K(z \mid \psi^{(0:K)}) \,\|\, q_\phi(z) \right)$

(40) $\geq 0$

We can use the same trick to prove that this bound is non-decreasing in $K$. First, let's use the symmetry in the indices once again, and rewrite $\mathcal{L}_K$ and $\mathcal{L}_{K+1}$ in the same expectations. Denote by $q_K^{\setminus j}(z \mid \psi^{(0:K+1)})$ the mixture of all conditionals except the $j$-th one, so that $q_{K+1}(z \mid \psi^{(0:K+1)}) = \frac{1}{K+2} \sum_{j=0}^{K+1} q_K^{\setminus j}(z \mid \psi^{(0:K+1)})$. Then

(41) $\mathcal{L}_{K+1} = \mathbb{E}_{\psi^{(0:K+1)}}\, \mathbb{E}_{z \sim q_{K+1}(z \mid \psi^{(0:K+1)})} \left[ \log p(x, z) - \log q_{K+1}(z \mid \psi^{(0:K+1)}) \right] =$

(42) $= \mathbb{E}_{\psi^{(0:K+1)}}\, \mathcal{L}\!\left[ q_{K+1}(z \mid \psi^{(0:K+1)}) \right]$

(43) $\mathcal{L}_K = \frac{1}{K+2} \sum_{j=0}^{K+1} \mathbb{E}_{\psi^{(0:K+1)}}\, \mathbb{E}_{z \sim q_K^{\setminus j}} \left[ \log p(x, z) - \log q_K^{\setminus j}(z \mid \psi^{(0:K+1)}) \right] =$

(44) $= \mathbb{E}_{\psi^{(0:K+1)}} \frac{1}{K+2} \sum_{j=0}^{K+1} \mathcal{L}\!\left[ q_K^{\setminus j}(z \mid \psi^{(0:K+1)}) \right]$

Then their difference is equal to an expected KL divergence, hence non-negative:

(45) $\mathcal{L}_{K+1} - \mathcal{L}_K = \mathbb{E}_{\psi^{(0:K+1)}} \frac{1}{K+2} \sum_{j=0}^{K+1} \mathbb{E}_{z \sim q_K^{\setminus j}} \left[ \log q_K^{\setminus j}(z \mid \psi^{(0:K+1)}) - \log q_{K+1}(z \mid \psi^{(0:K+1)}) \right] =$

(46) $= \mathbb{E}_{\psi^{(0:K+1)}} \frac{1}{K+2} \sum_{j=0}^{K+1} \mathrm{KL}\!\left( q_K^{\setminus j} \,\|\, q_{K+1} \right) \geq 0$
∎
Appendix B Importance Weighted Doubly Semi-Implicit VAE
The standard importance-weighted lower bound for the VAE is defined as follows:
(47) $\mathcal{L}^{\mathrm{IW}}_M = \mathbb{E}_{z_1, \ldots, z_M \sim q_\phi(z)} \log \frac{1}{M} \sum_{m=1}^{M} \frac{p_\theta(x \mid z_m)\, p_\theta(z_m)}{q_\phi(z_m)}$
We propose IW-DSIVAE, a new lower bound on the IWAE objective that is suitable for VAEs with semi-implicit priors and posteriors:
(48) $\mathcal{L}^{\mathrm{IW}}_{M, K, L} = \mathbb{E} \log \frac{1}{M} \sum_{m=1}^{M} \frac{p_\theta(x \mid z_m) \cdot \frac{1}{L} \sum_{l=1}^{L} p_\theta(z_m \mid \zeta^{(l)})}{\frac{1}{K+1} \sum_{k=0}^{K} q_\phi(z_m \mid \psi_m^{(k)})}, \quad z_m \sim q_\phi(z \mid \psi_m^{(0)}), \; \psi_m^{(k)} \sim q_\phi(\psi), \; \zeta^{(l)} \sim p_\theta(\zeta)$
This objective is a lower bound on the IWAE objective ($\mathcal{L}^{\mathrm{IW}}_{M, K, L} \leq \mathcal{L}^{\mathrm{IW}}_M$), is non-decreasing in both $K$ and $L$, and is asymptotically exact ($\mathcal{L}^{\mathrm{IW}}_{M, K, L} \to \mathcal{L}^{\mathrm{IW}}_M$ as $K, L \to \infty$).
Appendix C Variational inference with hierarchical priors
Theorem 2. Consider the two different variational objectives

(49) $\mathcal{L}_{\mathrm{marg}}(\phi) = \mathbb{E}_{q_\phi(w)} \log p(Y \mid X, w) - \mathrm{KL}(q_\phi(w) \,\|\, p(w))$

(50) $\mathcal{L}_{\mathrm{joint}}(\phi) = \mathbb{E}_{q_\phi(w, \alpha)} \log p(Y \mid X, w) - \mathrm{KL}(q_\phi(w, \alpha) \,\|\, p(w, \alpha))$

Let $\phi^*$ and $\hat\phi$ maximize $\mathcal{L}_{\mathrm{marg}}$ and $\mathcal{L}_{\mathrm{joint}}$ correspondingly. Then $q_{\phi^*}(w)$ is a better fit for the marginal posterior than $q_{\hat\phi}(w)$ in terms of the KL divergence:

(51) $\mathrm{KL}(q_{\phi^*}(w) \,\|\, p(w \mid Y, X)) \leq \mathrm{KL}(q_{\hat\phi}(w) \,\|\, p(w \mid Y, X))$
Proof.
Note that maximizing $\mathcal{L}_{\mathrm{marg}}$ directly minimizes $\mathrm{KL}(q_\phi(w) \,\|\, p(w \mid Y, X))$, as $\mathcal{L}_{\mathrm{marg}}(\phi) = \log p(Y \mid X) - \mathrm{KL}(q_\phi(w) \,\|\, p(w \mid Y, X))$. The sought-for inequality (51) then immediately follows from $\mathcal{L}_{\mathrm{marg}}(\phi^*) \geq \mathcal{L}_{\mathrm{marg}}(\hat\phi)$. ∎
To see the cause of this inequality more clearly, consider the gap $\mathcal{L}_{\mathrm{marg}}(\phi) - \mathcal{L}_{\mathrm{joint}}(\phi)$:

(52) $\mathcal{L}_{\mathrm{marg}}(\phi) - \mathcal{L}_{\mathrm{joint}}(\phi) = \mathrm{KL}(q_\phi(w, \alpha) \,\|\, p(w, \alpha)) - \mathrm{KL}(q_\phi(w) \,\|\, p(w))$

(53) $\mathrm{KL}(q_\phi(w, \alpha) \,\|\, p(w, \alpha)) = \mathbb{E}_{q_\phi(w, \alpha)} \left[ \log q_\phi(w) + \log q_\phi(\alpha \mid w) - \log p(w) - \log p(\alpha \mid w) \right] =$

(54) $= \mathrm{KL}(q_\phi(w) \,\|\, p(w)) + \mathbb{E}_{q_\phi(w)}\, \mathrm{KL}(q_\phi(\alpha \mid w) \,\|\, p(\alpha \mid w))$

(55) $\mathcal{L}_{\mathrm{marg}}(\phi) - \mathcal{L}_{\mathrm{joint}}(\phi) = \mathbb{E}_{q_\phi(w)}\, \mathrm{KL}(q_\phi(\alpha \mid w) \,\|\, p(\alpha \mid w))$

(56) $\geq 0$

If $q_\phi(\alpha \mid w)$ and $p(\alpha \mid w)$ coincide, the inequality (51) becomes an equality. However, this rarely holds for restricted (e.g. fully-factorized) joint approximations, so the joint objective is typically strictly looser than the marginal one.