One of the most interesting topics in the Coursera Deep Learning specialization is the “YOLO” algorithm for object detection. I often find it helpful to describe algorithms in my own words to solidify my understanding, and that is precisely what I will do here.
I recently completed the Deep Learning specialization on Coursera from deeplearning.ai. Over five courses, it covers generic neural networks, regularization, convolutional neural nets, and recurrent neural nets.
Having completed it, I would say the specialization is a great overview and a jumping-off point for learning more about particular techniques.
The “unreasonable effectiveness of deep learning” has been much discussed: since the cost function is non-convex, any optimization procedure will in general find only a local, non-global, minimum, and yet trained networks perform remarkably well in practice. In fact, algorithms like gradient descent often terminate (perhaps because of early stopping) before even reaching a local minimum.
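For concreteness, plain gradient descent repeatedly applies the standard update below (the notation is mine: $\theta$ collects the network parameters, $J$ is the cost function, and $\alpha$ is the learning rate); early stopping simply halts the iteration once validation error stops improving, even though the gradient is still nonzero:

$$
\theta \leftarrow \theta - \alpha \, \nabla_\theta J(\theta)
$$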
I am currently working through Convolutional Neural Networks, the fourth course in the Coursera specialization on Deep Learning. The first week of that course contains some hard-to-remember equations about filter sizes, padding, and striding, and I thought it would be helpful to write them out for future reference, as in the sketch below.
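The relationship in question is the standard output-size formula: for an $n \times n$ input, an $f \times f$ filter, padding $p$, and stride $s$, the output is $\lfloor (n + 2p - f)/s \rfloor + 1$ along each spatial dimension. Here is a minimal sketch of that computation (the helper name `conv_output_size` is my own, not from the course):

```python
import math

def conv_output_size(n: int, f: int, p: int = 0, s: int = 1) -> int:
    """Spatial output size of a convolution: floor((n + 2p - f) / s) + 1."""
    return math.floor((n + 2 * p - f) / s) + 1

# A 6x6 input convolved with a 3x3 filter, no padding, stride 1 -> 4x4.
print(conv_output_size(6, 3))       # 4

# "Same" padding for f=3, s=1 uses p=1, so the input size is preserved.
print(conv_output_size(6, 3, p=1))  # 6
```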
Recently I started the Deep Learning Specialization on Coursera. While I studied neural networks in my master's program (taught by Andrew Ng himself!), that was a long time ago, and the field has changed considerably since then.