Originally posted on analyticsindiamag.
Through probabilistic modelling, engineers and data scientists can account for random events in future predictions while measuring different uncertainties.
One of the biggest challenges of this decade is tackling uncertainty, ethics, and explainability in the thousands of machine learning models we interact with daily. Meta, formerly Facebook, has announced a release of its own to aid this developing field.
Bean Machine, Meta’s probabilistic programming system, is a PyTorch-based library for representing and learning about the uncertainties in ML models. Still in its beta phase, Bean Machine’s ‘uncertainty-aware’ learning algorithms allow users to develop domain-specific probabilistic models and automatically infer their unobserved properties.
Probabilistic modelling
Generating a successful probabilistic model with Bean Machine entails four major steps: modelling, data, learning, and analysis. The modelling step declares a generative model for the domain; data is supplied through Python dictionaries that associate observed values with random variables; the learning step improves the model’s knowledge based on those observations; and the results are stored for further analysis.
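To make these steps concrete, here is a minimal sketch against Bean Machine’s beta API. The model, a daily sales rate observed through noisy counts, is purely illustrative, and the sampler name reflects the beta documentation at the time of writing:

```python
import beanmachine.ppl as bm
import torch
import torch.distributions as dist

# Step 1 -- modelling: declare the generative model as random variables.
@bm.random_variable
def daily_rate():
    # Prior belief about a positive rate, e.g. sales per day.
    return dist.Exponential(rate=0.1)

@bm.random_variable
def observed_count(day: int):
    # Each day's count is noise around the underlying rate.
    return dist.Poisson(daily_rate())

# Step 2 -- data: a Python dictionary associates observed values
# with random variables.
observations = {observed_count(d): torch.tensor(c)
                for d, c in enumerate([12.0, 9.0, 15.0, 11.0])}

# Step 3 -- learning: inference updates the model's knowledge
# from the observations.
samples = bm.GlobalNoUTurnSampler().infer(
    queries=[daily_rate()],
    observations=observations,
    num_samples=1000,
    num_chains=4,
)

# Step 4 -- analysis: the returned samples can be stored and summarised.
posterior = samples[daily_rate()]
print(posterior.mean(), posterior.std())
```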
Probabilistic modelling is preferred because it offers uncertainty estimation, expressivity, and interpretability. Let’s discuss these.
Generative modelling for a transparent box?
Bean Machine is grounded in generative modelling, which describes the underlying model for a domain of interest before any data is seen. “Though Bean Machine models can use arbitrary Python code, they are primarily comprised of declarations of random variables — uncertain quantities that represent either unknown values or observed values,” Meta’s team explained in a blog post.
Generative modelling lets developers quantify the uncertainty of a model’s predictions in the form of probability distributions. Analysts can thereby understand not only the model’s predictions but also the likelihood of alternative outcomes.
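As a rough illustration of what that buys in practice, the posterior draws from a run like the sketch above can be summarised into a point estimate with an explicit credible interval:

```python
# Sketch: turning posterior samples into explicit uncertainty estimates.
# Assumes `posterior` is the tensor of draws from the earlier sketch.
import torch

draws = posterior.flatten()  # pool draws from all chains

point_estimate = draws.mean().item()
# A 95% credible interval expresses the prediction's uncertainty directly.
low, high = torch.quantile(draws, torch.tensor([0.025, 0.975]))
print(f"rate ~ {point_estimate:.2f}, "
      f"95% interval [{low.item():.2f}, {high.item():.2f}]")
```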
Bean Machine also surfaces the model’s limitations and potential failure modes, which developers can account for while refining the model. These can be as specific as quantifying the error to expect when predicting dessert sales during Christmas, or gauging whether an updated model performs better than its predecessor. Users can therefore interpret the reasoning behind particular predictions and improve the model development process accordingly. Furthermore, since PyTorch makes it easy to encode the model directly in source code, analysts can efficiently match the model’s structure to the problem’s. This also allows one to query intermediate learned properties within the model, which helps overcome the black-box problem and makes the model more interpretable.
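A hedged sketch of what querying intermediate learned properties can look like, extending the illustrative model above with a hypothetical weekend_boost variable and querying it alongside daily_rate in the same inference call:

```python
# Sketch: every declared random variable is addressable, so intermediate
# learned properties can be queried alongside top-level ones.
# `weekend_boost` is a hypothetical extension of the model sketched earlier;
# `daily_rate` is reused from it.
import beanmachine.ppl as bm
import torch
import torch.distributions as dist

@bm.random_variable
def weekend_boost():
    # Log-scale multiplier on the weekday rate.
    return dist.Normal(0.0, 0.5)

@bm.random_variable
def weekend_count(day: int):
    return dist.Poisson(daily_rate() * weekend_boost().exp())

samples = bm.GlobalNoUTurnSampler().infer(
    # Querying both variables exposes the intermediate quantity,
    # not just the final prediction.
    queries=[daily_rate(), weekend_boost()],
    observations={weekend_count(0): torch.tensor(18.0)},
    num_samples=1000,
    num_chains=4,
)
print(samples[weekend_boost()].mean())
```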
Declarative modelling
A declarative philosophy is at the core of Bean Machine’s design. Any random quantity is coded as a Python function annotated with the ‘random_variable’ decorator, and the function returns that quantity’s distribution. Since such a function can contain arbitrary Python code, Bean Machine is “simple and intuitive” to work with: engineers or data scientists just write the maths for the model, and Bean Machine infers the distributions for the predictions based on the model’s declaration.
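To illustrate this declarative style, here is a small hedged sketch (the variables are hypothetical, not from Meta’s examples) in which a plain Python branch inside a random variable function changes the model’s structure:

```python
# Sketch: ordinary Python control flow inside a random variable function,
# so the maths on paper translates almost line for line into code.
import beanmachine.ppl as bm
import torch.distributions as dist

@bm.random_variable
def demand():
    return dist.Normal(100.0, 20.0)

@bm.random_variable
def stockouts():
    # A plain Python branch decides which distribution applies, so the
    # model's structure can depend on upstream random values.
    if demand() > 120.0:
        return dist.Poisson(3.0)
    return dist.Poisson(0.5)
```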
As the team illustrated, Bean Machine takes its name from “a physical device for the visualisation of probability distributions, a pre-computing example for a probabilistic system.”
A fruitful market
Although still in beta, Bean Machine is a hot topic in the technology world because it signals one of the major developments in this area. Probabilistic modelling has also seen its fair share of success stories recently. For instance, MIT researchers have introduced an AI system that can perceive and learn about real-world objects from images, addressing the kind of mistakes robots make on tasks that are common sense for humans, such as using a fork to eat noodles but a spoon to drink soup. The model is effective because it is built with probabilistic programming: the developers used the approach to check candidate scene interpretations against the recorded images, and the model was corrected repeatedly based on the uncertainties that probabilistic modelling identified. Further, research published in Nature’s Communications Biology (Ronquist et al., 2021) showed that probabilistic programming languages solve the model-expression problem and support the automatic generation of inference algorithms.