Read the Linear Algebra and Probability chapters from the Deep Learning book. Mostly review.
Did/learned:
- Set up Beeminder
- An alternate, non-geometric derivation of PCA (sketched after this list)
- A vector calculus trick: a scalar equals its own transpose, so terms like x^T y can be rewritten as y^T x mid-derivation (example after this list)
- Intuition for some information theory, like KL divergence; I need to study this more. I'll probably spend a decent chunk of tomorrow improving my intuitions here (and making Anki cards to make them stick). A first worked example is after this list.
- Read the chapter on autoencoders; lots of cool stuff there.
- Found Machine Learning Street Talk
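A sketch of the non-geometric PCA derivation, following the book's reconstruction-error setup (notation is mine, so treat this as a sketch rather than the book's exact statement): encode with f(x) = D^T x, decode with g(c) = Dc, and choose the D with orthonormal columns that minimizes reconstruction error over the data matrix X:

$$
D^* = \arg\min_{D} \sum_i \left\| x^{(i)} - D D^\top x^{(i)} \right\|_2^2 \quad \text{subject to } D^\top D = I_l
$$

Expanding the norm and dropping terms that don't depend on D reduces this to a trace maximization,

$$
D^* = \arg\max_{D} \operatorname{Tr}\left( D^\top X^\top X D \right) \quad \text{subject to } D^\top D = I_l,
$$

which is solved by the l eigenvectors of X^T X with the largest eigenvalues. No geometry needed, just matrix calculus and an eigendecomposition.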
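The transpose-scalars trick is what collapses the cross terms in that expansion (a standard identity; the worked example is mine): any scalar equals its own transpose, so for vectors x and y, x^T y = (x^T y)^T = y^T x. Concretely, with D^T D = I:

$$
\| x - Dc \|_2^2 = x^\top x - x^\top D c - c^\top D^\top x + c^\top D^\top D c = x^\top x - 2\, x^\top D c + c^\top c,
$$

since x^T D c and c^T D^T x are the same scalar.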
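For the KL-divergence intuition, a minimal numpy check (the distributions are made up for illustration, and `kl` assumes strictly positive probabilities). The two properties I want to internalize: KL is non-negative, and it is not symmetric.

```python
import numpy as np

def kl(p, q):
    """D_KL(p || q) = sum_x p(x) * log(p(x) / q(x)), for discrete
    distributions with strictly positive probabilities."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Two made-up distributions over the same three outcomes.
p = [0.7, 0.2, 0.1]
q = [0.3, 0.4, 0.3]

print(kl(p, q))  # ~0.34 nats
print(kl(q, p))  # ~0.35 nats: a different number, so KL is asymmetric
print(kl(p, p))  # 0.0: KL is zero exactly when the distributions match
```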
Mistakes/Issues:
- Finished Ip Man 2 (10-11pm) instead of going to sleep. Now I'll barely get 8 hours.
- This has been a problem the last few days: Eruch (my little brother) goes to bed at ~9:40, and I get on my computer to watch something. Best solution: shut down the computer before dinner, which means journaling before dinner and then keeping it off for good.
- Didn't study any chemistry; I was wrapped up in ML stuff.
- Should have switched; doing one task for too long hits diminishing returns, I think.
- More general knowledge can give me ideas for stuff to try, like ML chemistry simulations.
- ML didn't feel that productive
- A lot of it (~2h) was review; new material might be a better use of time.
- I spent way too long building intuition for information-theory stuff, considering the negligible progress I made.
- Fixes: breaks, time-boxing, and short-term goals.
- The book is from 2016; is that too old? Might I be better off with d2l? Edit: going to try d2l (Dive into Deep Learning); it looks good and has exercises too.
- I definitely should read the basics, but completing the book is a bad idea. I'll read the good parts and seek other resources as well.
- To build cool projects I need a solid understanding of the vision and text basics. They're kind of boring, but I need to learn them as building blocks.