Quartiles are the three summary measures that divide a ranked data set into four equal parts. They are called the first quartile (denoted by $Q1$), the second quartile ($Q2$) and the third quartile ($Q3$). The second quartile is the same as the median of the data set. The first quartile is the value of the middle term among the observations that are less than the median, and the third quartile is the value of the middle term among the observations that are greater than the median (Mann 2012).
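As a minimal sketch of this definition, consider the following made-up toy data set (the values are purely illustrative and not part of the students data used below): the second quartile is the median of the whole data set, while the first and third quartiles are the medians of the lower and upper halves.
import numpy as np
# hypothetical toy data set, already ranked (values made up for illustration)
data = np.array([5, 7, 8, 10, 11, 12, 15, 18, 21])
q2 = np.median(data)             # second quartile = median of the whole data set
q1 = np.median(data[data < q2])  # first quartile = median of the lower half
q3 = np.median(data[data > q2])  # third quartile = median of the upper half
print(q1, q2, q3)  # 7.5 11.0 16.5
Note that this follows the "median of the halves" definition quoted above; other conventions (see the note further below) may give slightly different values.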
# First, let's import all the needed libraries.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
cmap = plt.get_cmap("YlOrBr", 4)  # discrete colormap with 4 colors (valid indices 0-3)
cmap(3)  # inspect the darkest color, returned as an RGBA tuple
(0.4, 0.1450980392156863, 0.02352941176470588, 1.0)
# Overlaid horizontal bars illustrating the division of a ranked data set into four equal parts
plt.barh([1], 4, height=1.2, align="edge", color=cmap(3), edgecolor="black")
plt.barh([1], 3, height=1.2, align="edge", color=cmap(2), edgecolor="black")
plt.barh([1], 2, height=1.2, align="edge", color=cmap(1), edgecolor="black")
plt.barh([1], 1, height=1.2, align="edge", color=cmap(0), edgecolor="black")
plt.ylim(0, 4)
plt.xlim(0, 4)
plt.text(0.5, 1.5, "25%", fontsize=15)
plt.text(1.5, 1.5, "25%", fontsize=15)
plt.text(2.5, 1.5, "25%", fontsize=15)
plt.text(3.5, 1.5, "25%", fontsize=15)
plt.text(0.9, 0.7, "Q1", fontsize=15)
plt.text(1.9, 0.7, "Q2", fontsize=15)
plt.text(2.9, 0.7, "Q3", fontsize=15)
plt.arrow(
    2, 3, 0, -0.8,
    length_includes_head=True,
    head_width=0.15,
    head_length=0.25,
    color="black",
)
plt.text(1.7, 3.2, "Median", fontsize=15)
plt.axis("off")
plt.show()
Approximately 25 % of the values in a ranked data set are less than $Q1$ and about 75 % are greater than $Q1$. The second quartile, $Q2$, divides a ranked data set into two equal parts; hence, the second quartile and the median are the same. Approximately 75 % of the data values are less than $Q3$ and about 25 % are greater than $Q3$. The difference between the third quartile and the first quartile of a data set is called the interquartile range ($IQR$) (Mann 2012).
$$IQR = Q3 - Q1$$
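Continuing the made-up toy example from above, the interquartile range would simply be:
q3 - q1  # 16.5 - 7.5 = 9.0 for the hypothetical toy data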
Let us now test Python's functionality for computing quantiles and quartiles. We will use the nc.score variable of the students data set to calculate the quartiles and the $IQR$. The nc.score variable corresponds to the Numerus Clausus score of each particular student. First, we subset the data and plot a histogram to further inspect the variable's distribution.
# load the students data set
students = pd.read_csv(
    "https://userpage.fu-berlin.de/soga/200/2010_data_sets/students.csv"
)
# subset the nc.score variable
nc_score = students["nc.score"]
# plot a histogram of the NC score
plt.hist(nc_score, bins="sturges", color="lightgrey", edgecolor="grey")
plt.title("Histogram of NC score")
plt.xlabel("nc")
plt.ylabel("Frequency")
plt.show()
To calculate the quartiles for the nc_score variable, we apply the function np.percentile(). If you call the help() function on np.percentile, you see that the argument q must be given as a value (or sequence of values) between 0 and 100:
help(np.percentile)
Help on function percentile in module numpy:

percentile(a, q, axis=None, out=None, overwrite_input=False, method='linear', keepdims=False, *, interpolation=None)
    Compute the q-th percentile of the data along the specified axis.

    Returns the q-th percentile(s) of the array elements.

    Parameters
    ----------
    a : array_like
        Input array or object that can be converted to an array.
    q : array_like of float
        Percentile or sequence of percentiles to compute, which must be
        between 0 and 100 inclusive.
    method : str, optional
        This parameter specifies the method to use for estimating the
        percentile. There are many different methods, some unique to NumPy,
        e.g. 'linear' (default), 'lower', 'higher', 'midpoint', 'nearest',
        'median_unbiased' and 'normal_unbiased'.

    [output truncated; see the NumPy documentation and Hyndman & Fan (1996),
    "Sample quantiles in statistical packages", The American Statistician,
    50(4), pp. 361-365, for the full list of estimation methods]

Thus, in order to calculate the quartiles for the nc_score variable, we just write:
np.percentile(nc_score, [0, 25, 50, 75, 100])
array([1. , 1.46, 2.04, 2.78, 4. ])
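As a quick cross-check, not part of the original workflow, the same values should be reproduced by np.quantile(), which expects q on a scale from 0 to 1, or by the quantile() method of the pandas Series:
np.quantile(nc_score, [0, 0.25, 0.5, 0.75, 1])  # same computation, q given as fractions
nc_score.quantile([0, 0.25, 0.5, 0.75, 1])      # pandas equivalent, returns a Series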
Note: Not all statisticians define quartiles in exactly the same way.
For a detailed discussion of the different methods for computing quartiles, see e.g. the online article "Quartiles in Elementary Statistics" by E. Langford (2006).
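To see the effect of such differing conventions in practice, we can vary the method argument of np.percentile(). This is only a sketch to illustrate that the estimates may differ slightly; the exact numbers depend on the data at hand.
# compare a few of the available estimation methods for Q1 and Q3
# ('method' replaces the older 'interpolation' keyword, NumPy >= 1.22)
for m in ["linear", "lower", "higher", "midpoint"]:
    print(m, np.percentile(nc_score, [25, 75], method=m))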
In order to calculate the $IQR$ for the nc_score variable, we either write...
nc_score_quart = np.percentile(nc_score, [0, 25, 50, 75, 100])
nc_score_quart[3] - nc_score_quart[1]
1.3199999999999998
...or we apply the function iqr() that is provided by the statistics library scipy.stats.
stats.iqr(nc_score)
1.3199999999999998
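Yet another way, mentioned here only as an optional alternative, is to take the difference of the quantile() results of the pandas Series directly; this should agree with the value above.
nc_score.quantile(0.75) - nc_score.quantile(0.25)  # IQR via pandas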
We can visualize the partitioning of the nc_score variable into quartiles by plotting a histogram and adding a couple of additional lines of code.
ax = nc_score.plot.hist(bins=50, density=1, edgecolor="black", figsize=(10, 5))
for bar in ax.containers[0]:
    # get the x midpoint of the bar
    x = bar.get_x() + 0.5 * bar.get_width()
    # color the bar according to the quarter its midpoint falls into
    if x < nc_score_quart[1]:
        bar.set_color("blue")
    elif x < nc_score_quart[2]:
        bar.set_color("red")
    elif x < nc_score_quart[3]:
        bar.set_color("green")
    else:
        bar.set_color("black")
    bar.set_edgecolor("grey")
plt.title("Quartiles")
plt.ylabel("Density")
plt.xlabel("Numerus Clausus score")
plt.text(4, 0.6, "1st", color="blue")
plt.text(4, 0.55, "2nd", color="red")
plt.text(4, 0.5, "3rd", color="green")
plt.text(4, 0.45, "4th", color="black")
plt.show()
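As an optional alternative sketch (not part of the original material), the quartile boundaries can also be marked on a plain histogram with dashed vertical lines, reusing the nc_score_quart values computed above:
# optional sketch: mark Q1, Q2 and Q3 with dashed vertical lines
plt.hist(nc_score, bins=50, color="lightgrey", edgecolor="grey")
for q, label in zip(nc_score_quart[1:4], ["Q1", "Q2", "Q3"]):
    plt.axvline(q, color="black", linestyle="--")
    plt.text(q, plt.ylim()[1] * 0.9, label)
plt.title("Quartile boundaries of the NC score")
plt.xlabel("Numerus Clausus score")
plt.ylabel("Frequency")
plt.show()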
Citation
The E-Learning project SOGA-Py was developed at the Department of Earth Sciences by Annette Rudolph, Joachim Krois and Kai Hartmann. You can reach us via e-mail at soga[at]zedat.fu-berlin.de.
Please cite as follows: Rudolph, A., Krois, J., Hartmann, K. (2023): Statistics and Geodata Analysis using Python (SOGA-Py). Department of Earth Sciences, Freie Universitaet Berlin.