Deep Learning - Transformer Series 1 - Embedding Pre-Processing
Positional Encoding, Padding Mask, Look-ahead Mask, Tokenization
What is Positional Encoding?
In natural language processing, it's common to process text through a pipeline like this:

sentence ("I love ice cream") -> tokens ("I", "love", "ice", "cream") -> token IDs (100, 104, 203, 301) -> embeddings -> ...
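To make this pipeline concrete, here is a minimal sketch in PyTorch, assuming a toy whitespace tokenizer and an nn.Embedding lookup table; the vocabulary, the token IDs, and d_model are illustrative values, not from this post:

```python
import torch
import torch.nn as nn

# Toy vocabulary mapping each token to an integer ID (illustrative values).
vocab = {"I": 100, "love": 104, "ice": 203, "cream": 301}

sentence = "I love ice cream"
tokens = sentence.split()                             # ["I", "love", "ice", "cream"]
token_ids = torch.tensor([vocab[t] for t in tokens])  # tensor([100, 104, 203, 301])

# Embedding table: each ID indexes a learned d_model-dimensional vector.
d_model = 512
embedding = nn.Embedding(num_embeddings=1000, embedding_dim=d_model)
embedded = embedding(token_ids)

print(embedded.shape)  # torch.Size([4, 512]): one vector per token
```

Note that the integers are just indices into the embedding table; the actual embeddings are the d_model-dimensional vectors the lookup returns, and those vectors carry no information about word order yet, which is why positional encoding is needed.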