Qualitative Research Techniques for Language Models: Conducting Semi-Structured Conversations with ChatGPT and BARD in Computer Science Education

Authors

Mason, T.

Keywords:

Large Language Models, Qualitative Content Analysis, Contextual Factors

Abstract

In the era of artificial intelligence, large language models such as ChatGPT and BARD are applied to diverse tasks, from language translation to text generation. This study explores a novel approach to qualitative research: interviewing these models, which encode vast amounts of data and perspectives. We investigate whether qualitative content analysis can be applied to interviews with ChatGPT (in English and German) and BARD on the relevance of computer science in K-12 education. Our findings show that model responses depend heavily on context: the same model yields varying results for identical questions. From this, we derive guidelines for conducting and analyzing interviews with large language models. While qualitative content analysis methods can be applied, careful attention to contextual factors is essential; the guidelines support researchers and practitioners in conducting such interviews with the necessary nuance. Overall, we advise against relying on interviews with large language models as a research method, given their unpredictable nature. Instead, we propose using these models as exploration tools to gain diverse perspectives on a research topic and to validate interview guidelines before real-world use.


Published

2023-04-21

How to Cite

Mason, T. (2023). Qualitative Research Techniques for Language Models: Conducting Semi-Structured Conversations with ChatGPT and BARD in Computer Science Education. Infotech Journal Scientific and Academic, 4(1), 219–243. Retrieved from https://infotechjournal.org/index.php/infotech/article/view/27

Issue

Section

Articles