Linear feature scaling comprises three different operations on variable spaces: translation (shifting), scaling (compression/stretching), and rotation (orientation change). In machine learning they are generally used to adjust distributions or metric units with respect to central tendency (translation) and/or variance (scaling).
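As a minimal sketch of translation and scaling, the following standardizes a feature by subtracting its mean and dividing by its standard deviation (variable names and the simulated data are illustrative, not from the text):

```python
import numpy as np

# Illustrative skewed feature
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1000)

shifted = x - x.mean()       # translation: mean becomes ~0
scaled = shifted / x.std()   # scaling: standard deviation becomes ~1

print(round(scaled.mean(), 6), round(scaled.std(), 6))
```

This is exactly z-score standardization: a translation followed by a scaling, with no rotation involved.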
Linear feature scaling doesn’t change the main properties of the
original distribution, e.g.:
- multi-modal distributions remain multi-modal
- skewed distributions won’t become symmetric
- non-normal frequency distributions stay non-normal
- single- or double-bounded scales just get new boundaries
- a singular covariance matrix stays singular
Thus, methodological or algebraic limitations are preserved, even if signs or scales change!
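The shape-preservation point above can be checked numerically: skewness, a shape property, is invariant under any positive linear rescaling. A minimal sketch (the `skewness` helper and the simulated data are illustrative assumptions):

```python
import numpy as np

def skewness(a):
    # Third standardized moment: depends only on the z-scores,
    # which a positive linear transform leaves unchanged.
    a = np.asarray(a, dtype=float)
    z = (a - a.mean()) / a.std()
    return (z ** 3).mean()

rng = np.random.default_rng(1)
x = rng.exponential(scale=3.0, size=5000)    # skewed data

y = 10.0 * (x - x.mean()) / x.std() + 50.0   # arbitrary linear rescaling

# The rescaled feature is just as skewed as the original.
print(np.isclose(skewness(x), skewness(y)))
```

Because `y = a*x + b` with `a > 0` yields identical z-scores, every standardized moment of `x` carries over to `y` unchanged.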
This short video visualizes linear scaling: