Statistical relational learning (SRL) and probabilistic programming (PP) both develop rich representation languages and reasoning tools for probabilistic models that naturally deal with a variable number of objects as well as the relationships amongst them. Yet over the past five to eight years the two fields have been studied almost in isolation, and they now have quite different foci. In probabilistic programming, the focus is on functional and imperative programs, on modeling continuous random variables, on (Markov chain) Monte Carlo techniques for probabilistic inference, and on a Bayesian machine learning perspective. In contrast, in statistical relational artificial intelligence, the focus is on logical and database representations, on discrete distributions, on knowledge compilation and lifted inference (reasoning at an abstract level, without having to ground out variables over domains), and on learning the structure of the model.
We argue that probabilistic logic programming (PLP), whose rich history goes back to the early 1990s with results by David Poole and Taisuke Sato, is uniquely positioned in that it naturally connects these two views into a single formalism with rigorously defined semantics, and thus opens up ways to bridge the gap between the two communities and to connect their results. More specifically, in this note we show that probabilistic logic programs possess not only a possible world semantics, as is common in SRL, but also a program trace semantics, as is common in PP, which they inherit from traditional logic programs. Furthermore, as extensions of Prolog, probabilistic logic programming languages are Turing equivalent, a property that they share with probabilistic programming languages extending traditional functional or imperative programming languages, and one that distinguishes them from many statistical relational learning formalisms such as probabilistic databases and Markov Logic.
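The possible world semantics mentioned above (often called the distribution semantics) can be illustrated with a small sketch: each probabilistic fact is independently true or false, and a query's probability is the total probability mass of the worlds in which it holds. The toy program below (two edge facts and a path query) is a hypothetical example, not taken from the note itself.

```python
from itertools import product

# Hypothetical toy program:
#   0.6::edge(a,b).  0.3::edge(b,c).
#   path(a,c) :- edge(a,b), edge(b,c).
facts = {"edge(a,b)": 0.6, "edge(b,c)": 0.3}

def query_holds(world):
    # The query path(a,c) is true exactly in worlds containing both edges.
    return world["edge(a,b)"] and world["edge(b,c)"]

def success_probability():
    # Possible world (distribution) semantics: sum the probabilities of
    # all total choices over the probabilistic facts where the query holds.
    names = list(facts)
    total = 0.0
    for choice in product([True, False], repeat=len(names)):
        world = dict(zip(names, choice))
        p = 1.0
        for name, value in world.items():
            p *= facts[name] if value else 1.0 - facts[name]
        if query_holds(world):
            total += p
    return total

print(success_probability())  # 0.6 * 0.3 = 0.18 under independence
```

Real PLP systems avoid this exponential enumeration (e.g. via knowledge compilation), but the enumeration makes the semantics itself explicit.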
Authors: Angelika Kimmig and Luc De Raedt