References
Bachl, M., & Scharkow, M. (2024). Computational text analysis. OSF. https://doi.org/10.31219/osf.io/3yhu8
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? 🦜. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/gh677h
Chae, Y., & Davidson, T. (2025). Large language models for text classification: From zero-shot learning to instruction-tuning. Sociological Methods & Research. https://doi.org/g9pqfk
Gilardi, F., Alizadeh, M., & Kubli, M. (2023). ChatGPT outperforms crowd workers for text-annotation tasks. Proceedings of the National Academy of Sciences, 120(30), e2305016120. https://doi.org/gsqx5m
Heseltine, M., & Clemm von Hohenberg, B. (2024). Large language models as a substitute for human experts in annotating political text. Research & Politics, 11(1). https://doi.org/gtkhqr
Kathirgamalingam, A., Lind, F., Bernhard, J., & Boomgaarden, H. G. (2024). Agree to disagree? Human and LLM coder bias for constructs of marginalization. OSF. https://doi.org/10.31235/osf.io/agpyr
Krippendorff, K. (2019). Content analysis: An introduction to its methodology (4th ed.). SAGE Publications, Inc. https://doi.org/mmsp
Kroon, A., Welbers, K., Trilling, D., & Van Atteveldt, W. (2024). Advancing automated content analysis for a new era of media effects research: The key role of transfer learning. Communication Methods and Measures, 18(2), 142–162. https://doi.org/gsv44t
Neuendorf, K. A. (2017). The content analysis guidebook. SAGE Publications, Inc. https://doi.org/dz7p
Rathje, S., Mirea, D.-M., Sucholutsky, I., Marjieh, R., Robertson, C. E., & Van Bavel, J. J. (2024). GPT is an effective tool for multilingual psychological text analysis. Proceedings of the National Academy of Sciences, 121(34), e2308950121. https://doi.org/gt7hrw
Spirling, A. (2023). Why open-source generative AI models are an ethical way forward for science. Nature, 616(7957), 413. https://doi.org/gsqx6v
Stoll, A., Yu, J., Andrich, A., & Domahidi, E. (2025). Classification bias of LLMs in detecting incivility towards female and male politicians in German social media discourse. Communication Methods and Measures. https://doi.org/g94g68
Stolwijk, S. B., Boukes, M., Yeung, W. N., Liao, Y., Münker, S., Kroon, A. C., & Trilling, D. (2025). Can we use automated approaches to measure the quality of online political discussion? How to (not) measure interactivity, diversity, rationality, and incivility in online comments to the news. Communication Methods and Measures. https://doi.org/g93sqk
Stuhler, O., Ton, C. D., & Ollion, E. (2025). From codebooks to promptbooks: Extracting information from text with generative large language models. Sociological Methods & Research. https://doi.org/g9vgnq
Törnberg, P. (2024a). Best practices for text annotation with large language models. Sociologica, 18(2), 67–85. https://doi.org/g9vgm7
Törnberg, P. (2024b). Large language models outperform expert coders and supervised classifiers at annotating political social media messages. Social Science Computer Review. https://doi.org/g8nnfx
Van Atteveldt, W., Trilling, D., & Arcila Calderón, C. (2022). Computational analysis of communication. Wiley Blackwell. https://v2.cssbook.net/
Widder, D. G., Whittaker, M., & West, S. M. (2024). Why “open” AI systems are actually closed, and why this matters. Nature, 635(8040), 827–833. https://doi.org/g8xdb3