A Machine Learning Blog
Bob Wilson (he/him/his) is a data scientist at Netflix, where he helps entertain the world by improving content quality. Prior roles include Marketing Analytics at Meta Reality Labs, Director of Data Science (Marketing) at Ticketmaster, and Director of Analytics at Tinder. His interests include causal inference and convex optimization. When not tweaking his Emacs init file, Bob enjoys gardening, listening/singing along to Broadway musical soundtracks, and surfeiting on tacos.
M.S.E.E. in Machine Learning, 2013
Stanford University
B.S. in Aerospace Engineering, 2008
University of Illinois, Urbana-Champaign
Table of Contents
Introduction
As Treated, Per Protocol, and Intent to Treat
Potential Outcomes Notation
Instrumental Variables
Dose-Response Models
Conclusions and Further Reading
References

Introduction
Tech companies spoil data scientists. It’s so easy for us to A/B test everything.
I’ve been an Emacs user for about 15 years, and for the most part I use Emacs for org-mode and Python development. Jorgen Schäfer’s elpy has been the core of my Python development workflow for the last five years or so, and I’ve been happy with it.
I started a new job recently and took the opportunity to install a new version of Emacs. Emacs 29 includes built-in tree-sitter and eglot support, which I’ll write about some other time.
Calculators for planning and analyzing A/B tests
Generalized Additive Models in Python
Orbit Propagator in Python
Homebrewed Beer Calculator
Unit Parser and Conversions in Python
Over the last five years, gamdist has formed the backbone of my research agenda. While it is very much a work in progress, this paper summarizes everything I have learned about regression. I think it is most useful as a collection of references! Still to come: details on regularization and the alternating direction method of multipliers.
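To give a flavor of the ADMM machinery the paper will cover, here is a minimal sketch of the lasso solved with the alternating direction method of multipliers. It is illustrative only, not gamdist’s implementation; the function names, the fixed penalty parameter rho, and the stopping rule are my own choices for the sketch.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Proximal operator of the l1 norm (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def lasso_admm(A, b, lam, rho=1.0, n_iter=200):
    """Minimize (1/2)||Ax - b||^2 + lam * ||x||_1 with ADMM."""
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    # The quadratic x-update reuses the same matrix every iteration,
    # so form it once up front.
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))  # smooth least-squares subproblem
        z = soft_threshold(x + u, lam / rho)           # l1 proximal step
        u = u + x - z                                  # dual update
    return z
```

Splitting the smooth least-squares term from the l1 penalty is what makes each subproblem easy: one is a linear solve, the other a closed-form shrinkage.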
We present a method of orbit determination robust against non-normal measurement errors. We approach the non-convex optimization problem by repeatedly linearizing the dynamics about the current estimate of the orbital parameters, then minimizing a convex cost function involving a robust penalty on the measurement residuals and a trust region penalty.
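The paper gives the full formulation, but a minimal sketch of the relinearize-and-solve loop might look like the following, using Huber-style reweighting as the robust penalty and a simple quadratic damping term as the trust region. The residual_fn and jacobian_fn arguments stand in for the linearized orbital dynamics and measurement model, and the specific update rule is an assumption for illustration, not the paper’s exact algorithm.

```python
import numpy as np

def huber_weights(r, delta):
    """IRLS weights for the Huber penalty: quadratic core, linear tails."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, delta))

def robust_estimate(residual_fn, jacobian_fn, theta0, delta=1.0, mu=1e-2, n_iter=50):
    """Repeatedly relinearize about the current estimate and take a step that
    trades a Huber-weighted fit to the residuals against a quadratic
    trust-region penalty keeping the step small."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = residual_fn(theta)   # measurement residuals at the current estimate
        J = jacobian_fn(theta)   # Jacobian of the linearized measurement model
        w = huber_weights(r, delta)
        # Damped, weighted normal equations: (J'WJ + mu*I) step = J'Wr
        H = J.T @ (w[:, None] * J) + mu * np.eye(theta.size)
        g = J.T @ (w * r)
        theta = theta - np.linalg.solve(H, g)
    return theta
```

The robust weights keep a few wild measurements from dragging the fit around, while the damping term keeps each step inside the region where the linearization is trustworthy.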
We discuss a beer recommendation engine that predicts both whether a user has had a given beer and the rating the user would assign it, based on the beers the user has already had and rated. k-means clustering is used to group similar users for both prediction problems. This framework may be valuable to bars or breweries trying to learn the preferences of their clientele, to consumers wondering what beer to order next, or to beer judges trying to assess quality objectively despite subjective preferences.
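As a rough illustration of the clustering step, here is a minimal sketch using scikit-learn’s KMeans to group users by their rating profiles and predict a rating from the cluster average. The toy data, the use of zeros for unrated beers, and the fallback to a global mean are assumptions for the sketch, not details from the project.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy user-by-beer rating matrix: entries are ratings on a 1-5 scale,
# with 0 marking beers the user has not had (stand-in for real check-ins).
rng = np.random.default_rng(0)
ratings = rng.integers(0, 6, size=(50, 20)).astype(float)

# Group users with similar rating profiles.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(ratings)
labels = kmeans.labels_

def predict_rating(user, beer):
    """Predict a rating as the average rating of the user's cluster for that beer."""
    peers = ratings[labels == labels[user], beer]
    peers = peers[peers > 0]  # only users who have actually had the beer
    return peers.mean() if peers.size else ratings[ratings > 0].mean()

print(predict_rating(user=3, beer=7))
```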