A list of some of the most important modern fundamentals of Deep Learning everyone in the field should be familiar with
Data Scientist, Machine Learning Expert, Algorithm Engineer, Deep Learning Researcher — whatever your title might be, if applying advanced concepts of Machine Learning is part of your career, then keeping up to date with the latest innovations is also part of your everyday tasks. But in order to stay on top of all the latest advances and truly understand how they work, we must also be familiar with the building blocks and foundations they rely on.
The field of Deep Learning is moving fast, breaking and setting new records on every possible metric. And as it evolves, it creates new fundamental concepts, enabling architectures and ideas never seen before.
While I tend to assume all modern ML practitioners are familiar with the basic fundamentals, such as CNNs, RNNs, LSTMs and GANs, some of the newer ones are occasionally missed or left out. And so, this blog post will discuss the new fundamentals — six papers I believe everyone in this field today should be familiar with.