A hypothesis test for a population mean when the population standard deviation, $\sigma$, is unknown is conducted in the same way as when the population standard deviation is known. The only difference is that the t-distribution is used instead of the standard normal distribution ($z$-distribution).
For a test with the null hypothesis $H_{0}: \mu = \mu_{0}$, the test statistic, $t$, is calculated as:
$$t = \frac {\bar {x} - \mu_{0}} {s\ /\ \sqrt {n}}$$
This hypothesis testing procedure is called the one-mean t-test or simply t-test. Recall that hypothesis tests follow a step-wise procedure, which is summarized as follows:
Step 1: State the null hypothesis, $H_{0}$, and the alternative hypothesis, $H_{A}$.
Step 2: Decide on the significance level, $\alpha$.
Steps 3, 4 and 5: Compute the value of the test statistic, determine the critical value (or the p-value) and evaluate the test statistic.
Step 6: Interpret the result of the hypothesis test.
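As a quick numeric sketch of the formula (the summary values below are made up purely for illustration), the test statistic can be computed directly:

```python
import numpy as np

# hypothetical sample summary: sample mean, hypothesized mean, sample std, sample size
x_bar, mu0, s, n = 72.5, 70.8, 8.0, 25

# t = (x_bar - mu0) / (s / sqrt(n))
t_stat = (x_bar - mu0) / (s / np.sqrt(n))
print(round(t_stat, 4))  # 1.0625
```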
Similar to the preceding section, we showcase the critical value approach first and then, in a second step, repeat the analysis using the p-value approach. However, this time we wrap the critical value approach up in a self-built function. For the p-value approach, we will use the powerful machinery of Python and apply the stats.ttest_1samp() function from the scipy package.
Let us implement the critical value approach by building a UDF called simple_t_test(). The function takes the following input arguments:
- data: a pandas series object containing the sample data
- mu0: the population mean under the null hypothesis
- alpha: the significance level
- method: the type of test to perform, either left, right or two_sided (the default value)
The function's output is a boolean value, which indicates whether the null hypothesis $H_{0}$ shall be rejected based on the test result. If True, $H_{0}$ is rejected; if False, $H_{0}$ is not rejected.
import numpy as np
from scipy.stats import t

def simple_t_test(data, mu0, alpha, method="two_sided"):
    sample_mean = np.mean(data)
    # use the sample standard deviation (ddof = 1), as required by the t-test
    sample_std = np.std(data, ddof=1)
    empirical_t = (sample_mean - mu0) / (sample_std / np.sqrt(data.size))
    df = data.size - 1
    # perform left-tailed test
    if method == "left":
        critical_value = t.ppf(alpha, df=df)
        reject = bool(empirical_t < critical_value)
    # perform right-tailed test
    elif method == "right":
        critical_value = t.ppf(1 - alpha, df=df)
        reject = bool(empirical_t > critical_value)
    # perform two-sided test
    else:
        critical_value = t.ppf(alpha / 2, df=df)
        reject = bool(np.abs(empirical_t) > np.abs(critical_value))
    print("Significance level:", alpha)
    print("Degrees of freedom:", df)
    print("Test statistic:", round(empirical_t, 4))
    print("Critical value:", round(critical_value, 4))
    print("Reject H0:", reject)
    return reject
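As a quick sanity check (on synthetic data, not the students sample), the test statistic computed with the sample standard deviation (ddof = 1) should match what scipy.stats.ttest_1samp() reports:

```python
import numpy as np
from scipy import stats

# synthetic sample, for checking purposes only
rng = np.random.default_rng(42)
sample = rng.normal(loc=72, scale=8, size=9)

# manual t statistic with the sample standard deviation (ddof = 1)
t_manual = (np.mean(sample) - 70.8) / (np.std(sample, ddof=1) / np.sqrt(sample.size))
t_scipy = stats.ttest_1samp(sample, 70.8).statistic

print(np.isclose(t_manual, t_scipy))  # True
```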
A great piece of code :-)
Now it is time to test our simple_t_test() function. To do so, we redo the example from the previous section, using the students data set. You may download the students.csv file here and import it from your local file system, or you may load it directly as a web resource. In either case, you import the data set into Python as a pandas DataFrame object by using the read_csv() function:
import pandas as pd
import numpy as np
students = pd.read_csv("https://userpage.fu-berlin.de/soga/data/raw-data/students.csv")
Note: Make sure the numpy, pandas and scipy packages are part of your mamba environment!
The students data set consists of 8239 rows, each representing a particular student, and 16 columns, each corresponding to a self-explanatory variable/feature related to that particular student.
We examine the average weight of a random sample of students from the students data set and compare it to the average weight of European adults. Walpole et al. (2012) published data on the average body mass (kg) per region, including Europe. They report the average body mass of the European adult population to be 70.8 kg. We set the population mean accordingly: $\mu_{0} = 70.8$. Further, we take a random sample (sample_weights) with a sample size of $n = 9$. The sample consists of the weights in kg of $9$ randomly picked students from the students data set.
mu_0 = 70.8
n = 9
sample_weights = students.sample(n, random_state = 9)["weight"]
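A side note on random_state: fixing the seed makes the draw reproducible, so rerunning the cell yields the same sample. A minimal sketch on a synthetic stand-in frame (not the students data):

```python
import numpy as np
import pandas as pd

# synthetic stand-in data frame, for illustration only
df = pd.DataFrame({"weight": np.arange(60.0, 90.0)})

# the same random_state yields the same sample
s1 = df.sample(9, random_state=9)["weight"]
s2 = df.sample(9, random_state=9)["weight"]
print(s1.equals(s2))  # True
```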
Step 1: State the null hypothesis, $H_{0}$, and the alternative hypothesis, $H_{A}$
The null hypothesis states that the average weight of students ($\mu$) equals the average weight of European adults of 70.8 kg ($\mu_{0}$) as reported by Walpole et al. (2012). In other words, there is no difference in the mean weight of students and the mean weight of European adults.
$$H_{0}:\ \ \ \mu = 70.8$$
Recall that the formulation of the alternative hypothesis dictates whether we apply a two-sided, a left-tailed or a right-tailed hypothesis test. Therefore, we state the following three alternative hypotheses:
$$H_{A_{1}}: \ \ \ \mu \ne 70.8$$ results in a two-sided hypothesis test.
$$H_{A_{2}}: \ \ \ \mu < 70.8$$ results in a left-tailed hypothesis test.
$$H_{A_{3}}: \ \ \ \mu > 70.8$$ results in a right-tailed hypothesis test.
Step 2: Decide on the significance level, $\alpha$.
$$\alpha = 0.05$$
alpha = 0.05
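With $\alpha$ fixed, the critical values for the three test variants can be previewed directly via the percent point function scipy.stats.t.ppf (using df = 8 for our sample of $n = 9$):

```python
from scipy.stats import t

alpha, df = 0.05, 8
print(round(t.ppf(alpha, df), 4))          # -1.8595 (left-tailed critical value)
print(round(t.ppf(1 - alpha, df), 4))      # 1.8595 (right-tailed critical value)
print(round(t.ppf(1 - alpha / 2, df), 4))  # 2.306 (upper two-sided critical value)
```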
Steps 3, 4 and 5: Compute the value of the test statistic, determine the critical value and evaluate the value of the test statistic. If it falls in the rejection region, reject $H_{0}$; otherwise, do not reject $H_{0}$.
Now, our self-built function simple_t_test() comes into play. We feed the function a random sample in the form of a pandas series, a value for $\mu_{0}$, a significance level $\alpha$ and the method (two_sided, left or right). Recall that two_sided is the default value. Thus, if we do not specify any method, the function will apply the two-sided hypothesis test:
simple_t_test(sample_weights, mu_0, alpha)
Significance level: 0.05
Degrees of freedom: 8
Test statistic: 1.162
Critical value: -2.306
Reject H0: False
False
We perform a left-tailed test by setting method = "left":
simple_t_test(sample_weights, mu_0, alpha, method = "left")
Significance level: 0.05
Degrees of freedom: 8
Test statistic: 1.162
Critical value: -1.8595
Reject H0: False
False
We perform a right-tailed test by setting method = "right":
simple_t_test(sample_weights, mu_0, alpha, method = "right")
Significance level: 0.05
Degrees of freedom: 8
Test statistic: 1.162
Critical value: 1.8595
Reject H0: False
False
Step 6: Interpret the result of the hypothesis test.
If the test statistic, and thus the sample mean, falls beyond the critical value, i.e. into the rejection region, we conclude that at the 5 % significance level the data provides sufficient evidence to reject $H_{0}$. In contrast, if the test statistic falls in the non-rejection region, we conclude that the data does not provide sufficient evidence to reject $H_{0}$.
The second approach is based on assigning a probability to the value of the test statistic. If the test statistic is very extreme, given that the null hypothesis is true, a low probability will be assigned to the test statistic. In contrast, if the test statistic is not extreme at all, the probability assigned to it will be much higher. That probability is called p-value.
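To make this concrete, the two-sided p-value is the probability, under $H_{0}$, of observing a test statistic at least as extreme as the one actually observed. It can be recovered from the t-distribution's CDF; here is a sketch using an illustrative test statistic of $t = 1.162$ with $df = 8$ (the rounded values from our weight example):

```python
from scipy.stats import t

t_stat, df = 1.162, 8  # illustrative test statistic and degrees of freedom

# probability of a value at least as extreme, in either tail
p_two_sided = 2 * (1 - t.cdf(abs(t_stat), df))
print(round(p_two_sided, 4))
```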
In order to calculate the exact p-value for a given numeric value, we rely on software. The alternative look-up-in-a-table method is somewhat tedious, as not all numerical values of the test statistic are given in such a table, which causes rounding errors. Luckily, Python provides potent machinery to perform t-tests via the ttest_1samp() function from the stats module of the scipy package. The function enables us to test one population mean when $\sigma$ is unknown in a straightforward way. Let us have a look at the function's usage by redoing the same problem as above, but this time using the ttest_1samp() function:
from scipy import stats
test_result = stats.ttest_1samp(sample_weights,
                                mu_0,
                                alternative="two-sided")
test_result
TtestResult(statistic=1.1620080498483334, pvalue=0.2787240885983498, df=8)
The ttest_1samp() function returns an object that provides the following attributes:
- <object>.statistic holds the value of the test statistic and represents the empirical t-value.
- <object>.pvalue represents the p-value of the performed t-test.
- <object>.df represents the degrees of freedom based on the observations provided within the input data set.
You retrieve the object's attributes by:
test_result.statistic
1.1620080498483334
Alternatively, you can retrieve the value of the test statistic by indexing:
test_result[0]
1.1620080498483334
Accordingly, the p-value and the degrees of freedom are retrieved by:
print("p-value =", test_result.pvalue)
print("p-value =", test_result[1])
print("degrees of freedom =", test_result.df)
p-value = 0.2787240885983498
p-value = 0.2787240885983498
degrees of freedom = 8
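The test decision in the p-value approach follows from comparing the p-value with the significance level: $H_{0}$ is rejected if the p-value is less than or equal to $\alpha$. For our result:

```python
alpha = 0.05
p_value = 0.2787240885983498  # p-value from the output above

# reject H0 if the p-value does not exceed the significance level
print(p_value <= alpha)  # False -> H0 is not rejected
```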
Furthermore, the returned object allows calculating a confidence interval for a given confidence level via the <object>.confidence_interval() method. If no argument is provided, the 95 % confidence interval (corresponding to $\alpha = 5\ \%$) is returned:
test_result.confidence_interval()
ConfidenceInterval(low=68.25124076011824, high=78.52653701765954)
Providing a prettier output:
print("With 95 % confidence the true mean is within the interval of",
round(test_result.confidence_interval()[0], 2), "and",
round(test_result.confidence_interval()[1], 2), "kg.")
With 95 % confidence the true mean is within the interval of 68.25 and 78.53 kg.
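As a cross-check (on synthetic data, not the students sample), the interval returned by confidence_interval() coincides with the textbook formula $\bar{x} \pm t_{1-\alpha/2,\,n-1} \cdot s / \sqrt{n}$:

```python
import numpy as np
from scipy import stats
from scipy.stats import t

# synthetic sample, for checking purposes only
rng = np.random.default_rng(0)
sample = rng.normal(loc=72, scale=8, size=9)

ci = stats.ttest_1samp(sample, 70.8).confidence_interval()  # default: 95 %

# manual 95 % confidence interval
half = t.ppf(0.975, sample.size - 1) * np.std(sample, ddof=1) / np.sqrt(sample.size)
low, high = np.mean(sample) - half, np.mean(sample) + half

print(np.isclose(ci.low, low), np.isclose(ci.high, high))  # True True
```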
Citation
The E-Learning project SOGA-Py was developed at the Department of Earth Sciences by Annette Rudolph, Joachim Krois and Kai Hartmann. You can reach us via mail by soga[at]zedat.fu-berlin.de.
Please cite as follows: Rudolph, A., Krois, J., Hartmann, K. (2023): Statistics and Geodata Analysis using Python (SOGA-Py). Department of Earth Sciences, Freie Universitaet Berlin.