Deep Learning - Activation and Loss Functions
Sigmoid, ReLU, GELU, Tanh, Mean Squared Error, Mean Absolute Error, Cross Entropy Loss, Hinge Loss, Huber Loss, IoU Loss, Dice Loss, Focal Loss, Cauchy Robust Kernel
Activation Functions
Early papers found that the Rectified Linear Unit (ReLU) trains faster than the sigmoid, because its derivative is larger and does not vanish in the positive region.
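To make this concrete, here are the standard textbook definitions and derivatives (a quick reference, not taken from any particular paper): the sigmoid's gradient is at most 1/4 and shrinks toward zero for large |x|, while ReLU passes gradients through unchanged whenever the input is positive.

$$
\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \sigma'(x) = \sigma(x)\bigl(1 - \sigma(x)\bigr) \le \tfrac{1}{4}
$$

$$
\mathrm{ReLU}(x) = \max(0,\, x), \qquad \frac{d}{dx}\,\mathrm{ReLU}(x) =
\begin{cases}
1 & x > 0 \\
0 & x \le 0
\end{cases}
$$

So in a deep network, repeatedly multiplying sigmoid gradients (each at most 1/4) shrinks the signal layer by layer, whereas ReLU keeps it intact for active units.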
However...
Posted by Rico's Nerd Cluster on January 8, 2022