Having specified the factor model, we want to know how much of the variability in \(\mathbf X\), given by the covariance matrix \(\Sigma\), where \(\Sigma=\text{Cov}(\mathbf X) = E\left[(\mathbf X-\mu)(\mathbf X-\mu)^T\right]\), is explained by the factor model.

Suppose there are \(p\) original variables, to be explained by \(m\) factors (\(m<p\)). Factor analysis decomposes the \(p \times p\) **variance-covariance matrix** \(\Sigma\) of the original variables \(\mathbf X\) into a \(p \times m\) **loadings matrix** \(\Lambda\), where \(\Lambda= \text{Cov}(\mathbf X, \mathbf F)\), and a \(p \times p\) diagonal matrix of unexplained variances per original variable, \(\Psi\), where \(\Psi = \text{Cov}(\mathbf e)\), such that

\[\Sigma = \Lambda\Lambda^T+\Psi\]
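The decomposition can be checked numerically. The following sketch uses a hypothetical loadings matrix for \(p = 4\) variables and \(m = 2\) factors (the numbers are illustrative, chosen so each variable has unit variance), and reconstructs \(\Sigma\) from \(\Lambda\) and \(\Psi\):

```python
import numpy as np

# Hypothetical loadings: p = 4 observed variables, m = 2 factors.
# Entry (j, m) is the coefficient relating factor m to variable j.
Lambda = np.array([
    [0.9, 0.1],
    [0.8, 0.3],
    [0.2, 0.7],
    [0.1, 0.8],
])

# Diagonal matrix of unexplained (unique) variances per variable.
Psi = np.diag([0.18, 0.27, 0.47, 0.35])

# Reconstruct the variance-covariance matrix: Sigma = Lambda Lambda^T + Psi.
Sigma = Lambda @ Lambda.T + Psi
print(Sigma)
```

Note that \(\Sigma\) is symmetric by construction, since \(\Lambda\Lambda^T\) is symmetric and \(\Psi\) is diagonal.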

This equation indicates that the variability in \(\mathbf X\), given by \(\Sigma\), is fully determined once we know the **loadings matrix** \(\Lambda\) and the diagonal matrix of unexplained variances \(\Psi\). Put more conceptually, we explain \(\Sigma\) by two terms. The first term, the loadings matrix \(\Lambda\), contains the coefficients \((\lambda_{jm})\) that relate the factors \((F_m)\) to each observed variable \((x_j)\). These coefficients, as we will discuss in the subsequent sections, may be estimated from observational data. Consequently, the term \(\Lambda\Lambda^T\) corresponds to the variability that may be explained by the factors. This proportion of the overall variability, explained by a linear combination of the factors, is denoted as the **communality**. In contrast, the proportion of variability that cannot be explained by a linear combination of the factors, given by the term \(\Psi\), is denoted as the **uniqueness**.

\[\Sigma = \underbrace{\Lambda\Lambda^T}_{\text{communality}} + \underbrace{\Psi}_{\text{uniqueness}}\]
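The communality of variable \(j\) is the \(j\)-th diagonal entry of \(\Lambda\Lambda^T\), i.e. the sum of its squared loadings, \(h_j^2 = \sum_m \lambda_{jm}^2\); together with the uniqueness \(\psi_j\) it accounts for that variable's total variance. A minimal sketch, using the same hypothetical loadings as above:

```python
import numpy as np

# Hypothetical loadings (p = 4 variables, m = 2 factors) and unique variances.
Lambda = np.array([
    [0.9, 0.1],
    [0.8, 0.3],
    [0.2, 0.7],
    [0.1, 0.8],
])
psi = np.array([0.18, 0.27, 0.47, 0.35])

# Communality per variable: sum of squared loadings across factors,
# i.e. the diagonal of Lambda @ Lambda.T.
communality = np.sum(Lambda**2, axis=1)

# Each variable's total variance splits into communality + uniqueness.
total_variance = communality + psi
print(communality)     # variance explained by the factors
print(total_variance)  # should equal the diagonal of Sigma
```

Variables with a communality close to their total variance are well explained by the factors; a large uniqueness signals that the factor model captures little of that variable.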