A Machine Learning Blog
Bob Wilson (he/him/his) is a data scientist at Netflix, where he helps entertain the world by improving content quality. Prior roles include Marketing Analytics at Meta Reality Labs, Director of Data Science (Marketing) at Ticketmaster, and Director of Analytics at Tinder. His interests include causal inference and convex optimization. When not tweaking his Emacs init file, Bob enjoys gardening, listening/singing along to Broadway musical soundtracks, and surfeiting on tacos.
M.S.E.E. in Machine Learning, 2013
Stanford University
B.S. in Aerospace Engineering, 2008
University of Illinois, Urbana-Champaign
Observational studies involve more uncertainty than randomized experiments. Sensitivity analysis offers a way to quantify this additional uncertainty.
In a previous post, we discussed why randomization provides a reasoned basis for inference in an experiment. Randomization not only quantifies the plausibility of a causal effect but also allows us to infer something about the size of that effect.
In his 1935 book, “The Design of Experiments”, Ronald Fisher described randomization as the “reasoned basis for inference” in an experiment. Why do we need a “basis” at all, let alone a reasoned one?
Calculators for planning and analyzing A/B tests
Generalized Additive Models in Python
Orbit Propagator in Python
Homebrewed Beer Calculator
Unit Parser and Conversions in Python
Over the last five years, gamdist has formed the backbone of my research agenda. While it is very much a work in progress, this paper summarizes everything I have learned about regression. I think it is most useful as a collection of references! Still to come: details on regularization and the alternating direction method of multipliers.
We present a method of orbit determination robust to non-normal measurement errors. We approach the non-convex optimization problem by repeatedly linearizing the dynamics about the current estimate of the orbital parameters, then minimizing a convex cost function that combines a robust penalty on the measurement residuals with a trust-region penalty on the update step.
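The abstract does not include code, but the iterate–linearize–solve loop it describes can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: it fits an exponential-decay model rather than orbital dynamics, approximates the Huber penalty with one reweighting per linearization (iteratively reweighted least squares), and uses a simple damping term `mu * I` as the trust-region penalty. All function names and parameter values below are assumptions.

```python
import numpy as np

def huber_weights(r, delta):
    # IRLS weights for the Huber penalty: weight 1 for small residuals,
    # delta/|r| for large ones, capping the influence of outliers.
    return np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))

def robust_fit(f, jac, theta0, y, n_iter=200, delta=0.1, mu=1e-2):
    """Gauss-Newton with Huber reweighting and a damping (trust-region)
    penalty on each step. A sketch, not the paper's algorithm."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = y - f(theta)             # measurement residuals
        J = jac(theta)               # model linearized at current estimate
        w = huber_weights(r, delta)  # down-weight outlying residuals
        # Damped, weighted normal equations:
        # (J^T W J + mu I) step = J^T W r
        JTW = J.T * w
        step = np.linalg.solve(JTW @ J + mu * np.eye(theta.size), JTW @ r)
        theta = theta + step
        if np.linalg.norm(step) < 1e-10:
            break
    return theta

# Toy problem (not orbital dynamics): fit y = a * exp(b * t) with outliers.
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 40)
true = np.array([2.0, -1.3])
y = true[0] * np.exp(true[1] * t) + 0.05 * rng.standard_normal(t.size)
y[::10] += 2.0  # gross, non-normal errors at every 10th measurement

f = lambda th: th[0] * np.exp(th[1] * t)
jac = lambda th: np.column_stack([np.exp(th[1] * t),
                                  th[0] * t * np.exp(th[1] * t)])
theta_hat = robust_fit(f, jac, np.array([1.5, -1.0]), y)
```

Despite the gross errors, the Huber weights shrink the outliers' influence, so the recovered parameters land close to the true values; an ordinary least-squares fit on the same data would be pulled noticeably off.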
We discuss a beer recommendation engine that predicts both whether a user has had a given beer and the rating the user would assign it, based on the beers the user has already had and the ratings assigned. We use k-means clustering to group similar users for both prediction problems. This framework may be valuable to bars or breweries trying to learn the preferences of their demographic, to consumers wondering what beer to order next, or to beer judges trying to objectively assess quality despite subjective preferences.
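The clustering step can be sketched with a plain-NumPy k-means on a hypothetical user-by-beer rating matrix. The data, names, and prediction rule below are illustrative assumptions, and initialization is fixed for reproducibility rather than randomized as a real system would be.

```python
import numpy as np

def kmeans(X, init, n_iter=100):
    # Plain k-means: assign each user to the nearest centroid, then
    # recompute each centroid as the mean of its assigned users.
    centroids = X[list(init)].copy()  # deterministic initial centroids
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(len(centroids))])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels

# Hypothetical ratings, one row per user, one column per beer;
# 0 means the user has not had that beer.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [5, 5, 0, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
    [0, 0, 5, 5],
], dtype=float)

labels = kmeans(ratings, init=[0, 3])  # two seed users, one per taste group
user, beer = 0, 2  # user 0 has not had beer 2
members = np.where(labels == labels[user])[0]

# "Has the user had it?" -- fraction of the cluster that has had the beer.
p_had = (ratings[members, beer] > 0).mean()

# "What would they rate it?" -- mean rating among cluster members who had it.
rated = ratings[members, beer] > 0
predicted = ratings[members, beer][rated].mean() if rated.any() else 0.0
```

Here the six users split cleanly into two taste groups, and user 0's predicted rating for the untried beer comes from the lone cluster-mate who has tried it. A production system would need to handle the 0-as-missing convention more carefully, since it conflates "not had" with a low rating when computing distances.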