I just wanted to point out that in Rito’s work (of which I am a coauthor), we propose to learn an exponential family that approximates the likelihood (as you correctly said); once that is done, we propose two different approaches:

– the first uses the sufficient statistics of the learned exponential family as summary statistics in ABC; this is therefore reminiscent of ABC with automatically learned summary statistics (e.g., Fearnhead and Prangle, 2012), and may be thought of as using the sufficient statistics of an auxiliary model.
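To make the first option concrete, here is a minimal rejection-ABC sketch. The Gaussian model, the uniform prior, and the `summary` function (a sample mean standing in for the learned sufficient statistics) are all toy assumptions of mine, not the setup of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulator: x ~ Normal(theta, 1). In the method described above,
# summary() would instead return the sufficient statistics of the
# learned exponential-family approximation.
def simulate(theta, n=50):
    return rng.normal(theta, 1.0, size=n)

def summary(x):
    return np.mean(x)  # illustrative stand-in for the learned summaries

obs = simulate(2.0)          # "observed" data with true theta = 2
s_obs = summary(obs)

# Rejection ABC: keep prior draws whose simulated summaries fall
# within a tolerance eps of the observed summaries.
eps = 0.05
accepted = []
for _ in range(20000):
    theta = rng.uniform(-5, 5)           # draw from the prior
    if abs(summary(simulate(theta)) - s_obs) < eps:
        accepted.append(theta)

post = np.array(accepted)
print(len(post), post.mean())
```

The accepted draws approximate the posterior conditional on the summaries; with sufficient statistics of a good auxiliary model, little information is lost relative to conditioning on the full data.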

– as we learn not only a set of summaries but a full approximate likelihood, we can exploit the latter directly to perform inference, without generating additional simulations. This however requires an exchange MCMC algorithm, since the likelihood is learned only up to its normalizing constant.
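For the second option, here is a sketch of the exchange algorithm (Murray et al., 2006) on a toy doubly-intractable target; the Gaussian model, the flat prior, and the proposal scale are my own illustrative choices, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unnormalized log-likelihood: we pretend the normalizing constant
# Z(theta) is unknown, but we CAN simulate exact data from the model.
def log_f(x, theta):
    return -0.5 * np.sum((x - theta) ** 2)

def simulate(theta, n):
    return rng.normal(theta, 1.0, size=n)

obs = simulate(1.5, 30)      # "observed" data with true theta = 1.5
n = obs.size

# Exchange algorithm: an auxiliary data set drawn at the proposed
# parameter makes the unknown Z(theta) ratios cancel exactly in the
# acceptance step (flat prior, symmetric random-walk proposal here).
theta = 0.0
chain = []
for _ in range(5000):
    theta_prop = theta + rng.normal(0.0, 0.5)
    w = simulate(theta_prop, n)            # auxiliary simulation
    log_a = (log_f(obs, theta_prop) - log_f(obs, theta)
             + log_f(w, theta) - log_f(w, theta_prop))
    if np.log(rng.uniform()) < log_a:
        theta = theta_prop
    chain.append(theta)

chain = np.array(chain)
print(chain[1000:].mean())   # posterior mean, after burn-in
```

Note the price of exactness: one fresh simulation from the model per MCMC iteration, which is what makes a learned (unnormalized) likelihood attractive only when those simulations are affordable.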

The two approaches are however separate, so there is no need for exchange MCMC if the first option is used.
