I started following the Deep Learning Curriculum written by Jacob Hilton, and here is what I learnt from the exercise in Topic 1 - Transformers. My solution is in the Colab notebook T1-Transformers-solution.ipynb.

It took me around 20 hours to finish the exercise, and it was totally worth it. Along the way I learnt:

  1. How to implement the transformer model end-to-end.
  2. How to gather and clean the data for a transformer model.
  3. How to implement positional embeddings, attention, the FFN, and residual connections, and how to put them all together into the transformer model.
  4. That switching between LayerNorm(x + SubLayer(x)) and x + SubLayer(LayerNorm(x)) didn't affect model performance in my experiments.
  5. How to program in PyTorch more fluently; I also gathered a bunch of utility functions for later use.
  6. How to debug the model by inspecting gradient flow and using torchviz.make_dot to visualize the model structure.
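To make point 3 concrete, here is a minimal sketch of the sinusoidal positional embeddings from "Attention Is All You Need" (the shapes and dimensions below are illustrative, not taken from my notebook):

```python
import torch

def sinusoidal_positions(seq_len, d_model):
    """Sinusoidal positional embeddings (d_model assumed even):
    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    """
    pos = torch.arange(seq_len).unsqueeze(1).float()          # (seq_len, 1)
    i = torch.arange(0, d_model, 2).float()                   # even indices
    angles = pos / torch.pow(10000.0, i / d_model)            # (seq_len, d_model/2)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe

pe = sinusoidal_positions(16, 32)
print(pe.shape)  # torch.Size([16, 32])
```

These embeddings are simply added to the token embeddings before the first transformer block.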
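The post-LN vs pre-LN switch in point 4 can be sketched as a single flag in the residual wrapper. This is a toy block with made-up hyperparameters (d_model=64, 4 heads), not the exact code from my notebook:

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One transformer block with a switch between the two LayerNorm placements:
    post-LN: LayerNorm(x + SubLayer(x))   (original Transformer paper)
    pre-LN:  x + SubLayer(LayerNorm(x))   (used by GPT-2 and many later models)
    """
    def __init__(self, d_model=64, n_heads=4, pre_ln=True):
        super().__init__()
        self.pre_ln = pre_ln
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model)
        )
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def _residual(self, x, sublayer, ln):
        if self.pre_ln:
            return x + sublayer(ln(x))   # x + SubLayer(LayerNorm(x))
        return ln(x + sublayer(x))       # LayerNorm(x + SubLayer(x))

    def forward(self, x):
        x = self._residual(x, lambda h: self.attn(h, h, h, need_weights=False)[0], self.ln1)
        x = self._residual(x, self.ffn, self.ln2)
        return x

x = torch.randn(2, 10, 64)               # (batch, seq_len, d_model)
out = TransformerBlock(pre_ln=True)(x)
print(out.shape)  # torch.Size([2, 10, 64])
```

Both placements preserve the residual stream's shape, which is why flipping the flag is a one-line change.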
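For point 6, the gradient-flow check boils down to inspecting per-parameter gradient magnitudes after a backward pass. A minimal sketch (the helper name `gradient_flow` and the demo model are hypothetical, not from the notebook):

```python
import torch
import torch.nn as nn

def gradient_flow(model):
    """Return (name, mean |grad|) per parameter; call after loss.backward().
    Near-zero values hint at vanishing gradients, huge values at exploding ones."""
    stats = []
    for name, p in model.named_parameters():
        if p.requires_grad and p.grad is not None:
            stats.append((name, p.grad.abs().mean().item()))
    return stats

# Tiny demo model standing in for the transformer.
model = nn.Sequential(nn.Linear(8, 8), nn.Tanh(), nn.Linear(8, 1))
loss = model(torch.randn(4, 8)).pow(2).mean()
loss.backward()
for name, g in gradient_flow(model):
    print(f"{name}: {g:.2e}")
```

Plotting these values per layer (as in the common "gradient flow" matplotlib recipe) makes dead or exploding layers easy to spot; torchviz.make_dot complements this by rendering the autograd graph itself.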