
Here's what you do: you watch this video by Andrej Karpathy [1] called "Becoming a backprop ninja". Then you pick a function you like and implement backprop for it (backprop being another name for reverse-mode automatic differentiation) using just numpy. If you use some numpy broadcasting, an np.sum, some for-loops, you'll start getting a good feel for what's going on.
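To make that concrete, here's a minimal sketch of the kind of exercise I mean. The function f(x, W) = sum(tanh(W @ x)) is my own hypothetical example, not from the video; the point is writing the forward pass and then deriving each gradient by hand, in reverse order, with plain numpy:

```python
import numpy as np

# Forward pass for f(x, W) = sum(tanh(W @ x)), caching intermediates
# so the backward pass can reuse them.
def forward(x, W):
    z = W @ x          # linear layer
    a = np.tanh(z)     # elementwise nonlinearity
    loss = a.sum()     # scalar output
    return loss, (x, W, z, a)

# Backward pass: apply the chain rule step by step, in reverse.
def backward(cache):
    x, W, z, a = cache
    da = np.ones_like(a)    # d(sum)/da broadcasts 1 to every element
    dz = da * (1 - a**2)    # tanh'(z) = 1 - tanh(z)^2
    dW = np.outer(dz, x)    # d(W @ x)/dW: outer product of dz and x
    dx = W.T @ dz           # d(W @ x)/dx
    return dW, dx
```

Checking your hand-derived gradients against finite differences is the fastest way to catch mistakes.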

Then you can go and read this fabulous blog post [2], and if you like what you see, go to the framework built by its author, called SmallPebble [3]. Despite the name, it's not all that small. If you peruse the code you'll get some appreciation of what it takes to build a solid autodiff library, and if push comes to shove, you'll be able to build one yourself.
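The core idea behind such a library fits in a few dozen lines. Here's a toy scalar sketch of reverse-mode autodiff (my own illustration, not SmallPebble's actual API): each value records its parents and the local gradient of the op that produced it, and backward() walks the graph in reverse topological order:

```python
# Toy reverse-mode autodiff: each Variable stores (parent, local_grad)
# pairs; backward() accumulates gradients via the chain rule.
class Variable:
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self.parents = parents  # pairs of (parent Variable, local gradient)

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Variable(self.value + other.value,
                        [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Variable(self.value * other.value,
                        [(self, other.value), (other, self.value)])

    def backward(self):
        # Topologically order the graph, then push gradients backwards.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for parent, _ in v.parents:
                    visit(parent)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for parent, local_grad in v.parents:
                parent.grad += local_grad * v.grad
```

For z = x*y + x with x=3, y=4, backward() gives x.grad = y + 1 = 5 and y.grad = x = 3, matching the hand derivative. A real library generalizes this from scalars to arrays, which is where broadcasting and sum-reduction rules earn their keep.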

[1] https://www.youtube.com/watch?v=q8SA3rM6ckI

[2] https://sidsite.com/posts/autodiff/

[3] https://github.com/sradc/SmallPebble
