L1/L2 Loss

An \(L_n\) loss between an input \(x\) and a target \(y\) is defined as \[ L_n(x-y)=\sum_i |x_i-y_i|^n \] All \(L_n\) losses share the same minimum at \(x=y\); however, they behave differently when that global optimum cannot be reached. For example, if all \(x_i\) are constrained to a single value \(c\), the \(L_2\)-optimal choice of \(c\) is the mean of the targets \(y_i\), while the \(L_1\)-optimal choice is their median.
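A quick numerical sketch of this claim (assuming NumPy; the toy targets are illustrative): sweep candidate constants \(c\) and check where each loss is smallest.

>>> import numpy as np
>>> y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])   # toy targets, with one outlier
>>> cs = np.linspace(y.min(), y.max(), 100001)  # candidate constants c
>>> l1 = np.abs(cs[:, None] - y).sum(axis=1)    # L1 loss for each c
>>> l2 = ((cs[:, None] - y) ** 2).sum(axis=1)   # L2 loss for each c
>>> cs[l1.argmin()], np.median(y)               # L1 minimizer ~ 3.0, the median
>>> cs[l2.argmin()], np.mean(y)                 # L2 minimizer ~ 22.0, the mean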

Both are used as loss functions in regression tasks; however, L2 loss is less robust to outliers because squaring amplifies large errors far more than taking the absolute value does.
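To see this concretely, here is a small sketch (toy values chosen for illustration): a single corrupted target inflates the L2 loss by orders of magnitude more than the L1 loss.

>>> import torch
>>> import torch.nn as nn
>>> pred = torch.tensor([1.1, 2.1, 3.1, 4.1])
>>> clean = torch.tensor([1.0, 2.0, 3.0, 4.0])
>>> dirty = torch.tensor([1.0, 2.0, 3.0, 100.0])           # one corrupted target
>>> nn.L1Loss()(pred, dirty) / nn.L1Loss()(pred, clean)    # ~240x larger
>>> nn.MSELoss()(pred, dirty) / nn.MSELoss()(pred, clean)  # ~230,000x larger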

PyTorch Usage

L1 Loss

>>> import torch
>>> import torch.nn as nn
>>> loss = nn.L1Loss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()

L2 Loss

>>> import torch
>>> import torch.nn as nn
>>> loss = nn.MSELoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
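
In both cases output is a scalar, since the default reduction='mean' averages over all elements; pass reduction='sum' or reduction='none' to get the summed or per-element loss instead.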


See torch.nn.L1Loss and torch.nn.MSELoss for more details.