Okay, this isn’t really semantics, just a simple way to use ordinary invariants to reason about probabilistic transition systems (or programs). The trick is to prove invariants of the form M + L >= 0, where L (“luck”) is a linear combination of “gambling balance” ghost variables that are initially 0 and are updated only with (state-dependent) fair bets on probabilistic choices. Since the expected value of luck is 0, this implies that the expected value of M is always nonnegative.
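A minimal executable sketch of the trick (my own toy instance, not from the abstract): after k fair coin flips, let M track heads − k/2, and let the ghost variable L place a fair half-unit bet against heads at every flip. Then M + L >= 0 holds invariantly (here with equality), and summing over all equally likely traces confirms that the expected luck is 0, so E[M] >= 0:

```python
from itertools import product
from fractions import Fraction

HALF = Fraction(1, 2)
N = 8  # number of fair coin flips

total_luck = Fraction(0)
for trace in product([0, 1], repeat=N):     # 1 = heads, 0 = tails; all traces equiprobable
    M = Fraction(0)  # heads so far minus k/2 after k flips
    L = Fraction(0)  # "luck": gambling balance ghost variable, initially 0
    for flip in trace:
        M += flip - HALF        # M moves up 1/2 on heads, down 1/2 on tails
        L -= flip - HALF        # fair bet against heads: expected change is 0
        assert M + L >= 0       # the invariant (here it holds with equality)
    total_luck += L

# Summed over the uniform distribution on traces, luck cancels exactly,
# so E[M] = E[M + L] >= 0: the expected number of heads is at least k/2.
assert total_luck == 0
```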
Joint work with Nathan Bowler.
There’s a well-known monad (on Set) for probabilistic choice – the probability distribution monad. And there’s a well-known monad for I/O – the free monad on a signature (set of operations). We’ve developed a new monad for the combination of probabilistic choice and I/O using trace semantics. But we don’t know whether this third monad is the tensor of the first two; tensoring is a standard way of combining monads.
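For concreteness, the finitely-supported fragment of the probability distribution monad can be sketched in Python (a weighted-values rendering of the monad on Set; the free I/O monad and the combined trace monad are not shown):

```python
from fractions import Fraction

def unit(x):
    # Dirac distribution: all mass on x.
    return {x: Fraction(1)}

def bind(d, k):
    # Sequence a distribution d with a kernel k : value -> distribution.
    out = {}
    for x, p in d.items():
        for y, q in k(x).items():
            out[y] = out.get(y, Fraction(0)) + p * q
    return out

coin = {0: Fraction(1, 2), 1: Fraction(1, 2)}

# Sum of two independent fair coins.
two_flips = bind(coin, lambda a: bind(coin, lambda b: unit(a + b)))
# two_flips == {0: 1/4, 1: 1/2, 2: 1/4}
```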
We know the answer to be yes if instead of infinite probabilistic choice (i.e. choosing between infinitely many options), we treat only finite probabilistic choice, or nondeterministic choice whether finite or infinite. It would round out our story beautifully if we could obtain a positive answer for infinite probabilistic choice too. That’s why we care about this question. It boils down to the completeness of an infinitary logic, as described in the extended abstract:
As always in science, a negative answer would be equally welcome.
We continue our previous work on giving semantics to probabilistic programs using computable distributions. First, we highlight a gap in our previous semantics involving the measurability of an environment lookup function and address it. Second, we suggest topological domains, a category of domains proposed in the literature that supports both Type-2 computability and domain-theoretic constructions, as an alternative to DCPOs in giving semantics to probabilistic programs based on computable distributions.
D. Huang and G. Morrisett. An Application of Computable Distributions to the Semantics of Probabilistic Programming Languages. In Programming Languages and Systems, pages 337–363. Springer, 2016.
Exchangeable sequences are models for homogeneous data sets and serve as building blocks from which more interesting dependency structures in statistical models are produced. A sequence of random variables is exchangeable when its distribution does not depend on the ordering of its elements. In programming terms, exchangeable sequences behave as if they were the result of a repeated evaluation of a closure in a probabilistic language, where the closure itself contains a random variable. Exchangeability therefore licenses a programmer or compiler to commute and even remove expressions. Few languages have any special support for exchangeable expressions, with the notable exception of Church [GMRBT08] and its descendants. Church possessed exchangeable random primitives (XRPs), abstract data types that exposed the operations necessary for a compiler to construct advanced Markov chain Monte Carlo algorithms that take advantage of the additional laws that exchangeable sequences satisfy. For further details on XRPs, see [AFR16] and the references therein.
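To make the order-independence concrete, here is a small sketch (my own, not from the abstract) using the Pólya urn, the classic exchangeable 0/1 sequence: the probability of observing a particular sequence depends only on its counts, not on their order.

```python
from fractions import Fraction

def polya_prob(seq, a=1, b=1):
    # Probability of an exact 0/1 sequence under a Pólya urn that starts
    # with a balls of colour 1 and b balls of colour 0; each draw is
    # replaced together with one extra ball of the drawn colour.
    p = Fraction(1)
    for x in seq:
        total = a + b
        if x == 1:
            p *= Fraction(a, total); a += 1
        else:
            p *= Fraction(b, total); b += 1
    return p

# Exchangeability: any reordering of the same counts has the same probability.
assert polya_prob([1, 0, 1]) == polya_prob([1, 1, 0]) == polya_prob([0, 1, 1])
```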
A fundamental question for probabilistic programming is whether or not support for exchangeability is in some sense necessary on the grounds of efficiency or even computability. In our extended abstract, we give partial answers to this question for sequences, but also look ahead to more complicated structures beyond sequences, namely arrays, where the story is much more complicated. In particular, we describe several results on the computable content of the Aldous–Hoover theorem, and its consequences for probabilistic programming languages.
Extended abstract: On computable representations of exchangeable data by Nathanael L. Ackerman, Jeremy Avigad, Cameron E. Freer, Daniel M. Roy, and Jason M. Rute
Related work by the authors:
[AFR16] N. L. Ackerman, C. E. Freer, and D. M. Roy. Exchangeable Random Primitives, PPS 2016.
[FR10] C. E. Freer and D. M. Roy. Posterior distributions are computable from predictive distributions. In: Proc. 13th Int. Conf. on Artificial Intelligence and Statistics (AISTATS 2010). Vol. 9. J. Machine Learning Research W&CP. 2010.
[FR12] C. E. Freer and D. M. Roy. Computable de Finetti measures. Ann. Pure Appl. Logic 163.5 (2012), pp. 530–546.
[GMRBT08] N. D. Goodman, V. K. Mansinghka, D. M. Roy, K. Bonawitz, and J. B. Tenenbaum. Church: A language for generative models. In: Proc. 24th Conf. on Uncertainty in Artificial Intelligence (UAI 2008). Corvalis, Oregon: AUAI Press, 2008, pp. 220–229.
Skorohod’s Theorem is a fundamental result from stochastic process theory. It makes two assertions:
- Any Borel probability measure μ on a Polish space S can be realized as μ = X∗λ for some random variable X : [0, 1] → S, where λ denotes Lebesgue measure and X∗λ is the “law of X,” the push-forward of λ under X.
- Moreover, if (μn) is a weakly convergent sequence of probability measures on S, then the associated random variables Xn converge almost surely with respect to λ.
Recently, the author devised a domain-theoretic proof of Skorohod’s Theorem. Since domain theory is a mainstay of denotational semantics, and since Probabilistic Programming concerns modeling probability measures and stochastic processes, it’s reasonable to look for connections between the domain-theoretic proof of Skorohod’s Theorem and the semantics of Probabilistic Programming languages. But there is a difference in how computability is realized in the domain-theoretic approach used to prove Skorohod’s Theorem, versus the standard approach to computable metric spaces and their probability measures:
- The domain-theoretic model used in the proof of Skorohod’s Theorem is built from topological subsets of the Polish space S. For example, in the case of locally compact spaces, the domain is the family of compact subsets ordered by reverse inclusion. Computability is “implicit” in the domain model, but can be instantiated by an enumeration of a countable basis of the domain, i.e., of a countable neighborhood basis of S that forms a basis for the domain model.
- In the standard approach, a computable metric space consists of a Polish space (S, d) together with a countable dense subset D ⊆ S, with the assumption that d is computable on D. The computable structure is then lifted to the probability measures on S via simple measures with rational coefficients that are concentrated on finite subsets of D.
This difference can be seen in how computable simple measures are represented. In contrast to the standard approach to computable metric spaces, where computable simple measures are concentrated on finite subsets of D, in the domain-theoretic model each point mass arising in a computable simple measure has for its mass a subset of the Polish space. Devising an application of the domain-theoretic proof of Skorohod’s Theorem to Probabilistic Programming first requires an understanding of how the computable structure on S and the one on the domain model built from appropriate topological subsets of S are related.
Extended Abstract: Probabilistic Programming and a Domain Theoretic Approach to Skorohod’s Theorem
Generative modeling languages, i.e., probabilistic programming languages (PPLs), offer only limited support for continuous variables; theorems concerning semantics and algorithmic correctness are often limited to discrete variables or, in some cases, bounded continuous variables.
We show natural examples that violate these restrictions and break standard algorithms. Using probability kernels, the measure-theoretic generalization of conditional distributions, we develop the notion of measure-theoretic Bayesian networks (MTBNs), and use it to provide more general semantics for PPLs with arbitrarily many random variables defined over arbitrary measure spaces. We also derive provably correct sampling algorithms for the generalized models and integrate them in the BLOG PPL.
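As rough intuition for kernels (a sketch under my own simplifications, not the paper’s measure-theoretic construction): a kernel maps an assignment of parent values to a sampler for the child, and ancestral sampling composes kernels along the network.

```python
import random

def normal_kernel(mean_of):
    # A kernel: parent values -> a sampler for the child variable.
    return lambda parents: random.gauss(mean_of(parents), 1.0)

kx = normal_kernel(lambda parents: 0.0)           # x ~ N(0, 1)
ky = normal_kernel(lambda parents: parents["x"])  # y | x ~ N(x, 1)

def sample():
    # Ancestral sampling: draw parents before children.
    x = kx({})
    return {"x": x, "y": ky({"x": x})}

random.seed(0)
ys = [sample()["y"] for _ in range(200_000)]
mean_y = sum(ys) / len(ys)
var_y = sum((v - mean_y) ** 2 for v in ys) / len(ys)
# Marginally y ~ N(0, 2), so mean_y ≈ 0 and var_y ≈ 2.
```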
Check the extended version of our paper with formal definitions and theorems here: Extended-BLOG-Semantics-Paper
Authors: Nicholas Hay*, Siddharth Srivastava*, Yi Wu and Stuart Russell (*equal contribution)
The design of a general probabilistic programming system involves a balancing act between two deeply conflicting concerns. On the one hand, the system should provide as uniform and flexible an interface for specifying models as possible; on the other, it should be capable of doing efficient inference for any particular model specified. Current systems lie somewhere on a spectrum that ranges from highly expressive languages such as Church, Anglican, and Venture, to highly performant languages like Figaro, FACTORIE, and Infer.NET. It has not yet been shown possible to optimise both these concerns simultaneously.
To improve on this predicament we consider the class of discrete graphical models, for which various efficient exact inference algorithms exist. We present a technique for determining when an Anglican program expresses such a model, which allows us to achieve a substantial increase in inference performance in this case. Our approach can handle complicated language features including higher-order functions, bounded recursion, and data structures, which means that, for the discrete subset of Anglican, we do not incur any loss in expressiveness. Moreover, the resulting inference is exact, which can be useful in contexts where very high accuracy is required, or for doing nested inference inside larger models.
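The enumeration idea behind exact inference for discrete programs can be sketched in Python (a toy stand-in, not Anglican): run the program on every possible trace of its random choices, weight each trace by its probability and by the observations, and normalise.

```python
from itertools import product
from fractions import Fraction

def model(choices):
    # a ~ Bernoulli(3/4), b ~ Bernoulli(1/2);
    # observe at least one heads; query a.
    a, b = choices
    w = (Fraction(3, 4) if a else Fraction(1, 4)) \
      * (Fraction(1, 2) if b else Fraction(1, 2))
    if not (a or b):
        w = Fraction(0)   # trace ruled out by the observation
    return w, a

posterior = {}
for choices in product([0, 1], repeat=2):   # all traces of the two flips
    w, val = model(choices)
    posterior[val] = posterior.get(val, Fraction(0)) + w

z = sum(posterior.values())                 # model evidence: 7/8
posterior = {v: w / z for v, w in posterior.items()}
# posterior == {0: 1/7, 1: 6/7}
```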
Extended abstract: Efficient exact inference in discrete Anglican programs
Authors: Robert Cornish, Frank Wood, Hongseok Yang
We propose a weakest-precondition-style calculus for reasoning about the expected values (pre-expectations) of mixed-sign unbounded random variables after execution of a probabilistic program. The semantics of a while-loop is well-defined as the limit of iteratively applying a functional to a zero-element just as in the traditional weakest pre-expectation calculus, even though a standard least fixed point argument is not applicable in this context. A striking feature of our semantics is that it is always well-defined, even if the expected values do not exist. Our calculus is sound and allows for invariant-based reasoning about pre-expectations of loops.
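The limit construction can be seen on a standard single-sign example (a sketch, not the mixed-sign calculus of the abstract): for the loop `while flip(1/2): x := x + 1` with post-expectation x, the characteristic functional is (F f)(x) = ½·x + ½·f(x + 1), whose fixed point is f(x) = x + 1. Iterating F on the zero element converges to it:

```python
def iterate(n, x):
    # n-fold application of the characteristic functional to the
    # zero element, evaluated at state x.
    if n == 0:
        return 0.0
    return 0.5 * x + 0.5 * iterate(n - 1, x + 1)

approx = iterate(60, 0)   # tends to the fixed point f(0) = 0 + 1 = 1
```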
Extended Abstract: pps_mixed_sign_wp_kaminski_katoen.pdf
Authors: Benjamin Lucien Kaminski and Joost-Pieter Katoen
Probabilistic Programming Languages (PPLs) have a long history in both the functional (e.g., Anglican) and logic programming (e.g., ProbLog) paradigms. Unfortunately these efforts have been conducted mostly in isolation and little is known about the correspondences between the two approaches or their relative merits.
In this work we establish a common ground for both approaches in terms of algebraic models of probabilistic computation. It is already well-known that functional PPLs conform to the monadic model. We show that ProbLog’s flavour of probabilistic computation is restricted to the applicative functor interface. This means that functional PPLs afford greater expressiveness in terms of dynamic program structure, while ProbLog programs are inherently more amenable to static analysis and thus afford faster inference.
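The distinction can be sketched over finite distributions (my own illustration, not the paper’s formalism): with the applicative interface the program’s structure is fixed before any outcome is seen, while monadic bind lets later choices depend on earlier results.

```python
from fractions import Fraction

def unit(x):
    return {x: Fraction(1)}

def bind(d, k):
    # Monadic interface: the continuation k may inspect the outcome x.
    out = {}
    for x, p in d.items():
        for y, q in k(x).items():
            out[y] = out.get(y, Fraction(0)) + p * q
    return out

def lift2(f, d1, d2):
    # Applicative interface: both distributions are fixed in advance.
    return bind(d1, lambda x: bind(d2, lambda y: unit(f(x, y))))

coin = {0: Fraction(1, 2), 1: Fraction(1, 2)}

# Applicative: the program's shape never depends on an outcome.
static = lift2(lambda a, b: a + b, coin, coin)

# Monadic: the second flip happens only if the first came up heads --
# dynamic structure beyond what the applicative interface can express.
dynamic = bind(coin, lambda a: coin if a else unit(0))
```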
We believe that this insight opens up a number of interesting avenues:
- For Probabilistic Functional Languages: identifying applicative fragments in monadic programs can lead to more performant inference techniques.
- For Probabilistic Logic Languages: characterising advanced features (e.g., those of ProbLog2) in terms of algebraic models.
We (Alexander Vandenbroucke and Tom Schrijvers) invite you to explore these promising avenues with us.
Extended Abstract: pps-abstract-alexander-vandenbroucke-tom-schrijvers
In the theory of programming languages, the use of proof assistants has become mainstream. It is considered good form to provide a formal connection between a language and its semantics.
Currently, the main tools for this are based either on higher-order logic or on type theory. Here we will focus on Coq, the largest system of the latter class. The ALEA Coq library formalizes discrete measure theory using a variant of the Giry monad, as a submonad of the CPS monad: (A → [0, 1]) → [0, 1]. This allows us to use Moggi’s monadic meta-language to give an interpretation of a language Rml into type theory. Rml is a functional language with a primitive for probabilistic choice. This formalization was the basis for the CertiCrypt system for verifying security protocols. EasyCrypt is still based on the same idea. To be precise, the type of the monad M of measures also requires monotonicity, summability and linearity. However, Coq cannot prove this to be a monad, as the equality on distributions is not the intensional equality of Coq. We solve this technical issue by using homotopy type theory. Importantly, this allows us to use synthetic topology to present a theory which also includes continuous data types like [0, 1]. Such data types are relevant, for instance, in machine learning and differential privacy.
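As language-agnostic intuition for the submonad of the CPS monad (a Python sketch, ignoring the monotonicity, summability and linearity side conditions that the Coq development tracks): a measure is an integrator of type (A → [0, 1]) → [0, 1].

```python
def unit(x):
    # Dirac measure: integrating f against it just evaluates f at x.
    return lambda f: f(x)

def bind(m, k):
    # Integrate the continuation's measures against the outer measure.
    return lambda f: m(lambda x: k(x)(f))

def bernoulli(p):
    return lambda f: p * f(1) + (1 - p) * f(0)

# Two fair flips; the measure of the event "their sum is 1".
m = bind(bernoulli(0.5), lambda a: bind(bernoulli(0.5), lambda b: unit(a + b)))
prob_sum_is_1 = m(lambda s: 1.0 if s == 1 else 0.0)   # = 0.5
```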
We would be interested in other languages for probabilistic programming with continuous data types which have a precise mathematical semantics and would like to discuss how to formalize them (in type theory).