<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">
<article article-type="research-article" xml:lang="EN" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">LIBER</journal-id>
<journal-title-group>
<journal-title>LIBER QUARTERLY</journal-title>
</journal-title-group>
<issn pub-type="epub">2213-056X</issn>
<publisher>
<publisher-name>openjournals.nl</publisher-name>
<publisher-loc>The Hague, The Netherlands</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">lq.18137</article-id>
<article-id pub-id-type="doi">10.53377/lq.18137</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Artificial Intelligence in PhD Education: New Perspectives for Research Libraries</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-5737-0008</contrib-id>
<name>
<surname>Grote</surname>
<given-names>Michael</given-names>
</name>
<xref ref-type="aff" rid="aff1"/>
</contrib>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-0196-9177</contrib-id>
<name>
<surname>Faber</surname>
<given-names>Hege Charlotte</given-names>
</name>
<xref ref-type="aff" rid="aff2"/>
</contrib>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-1910-0859</contrib-id>
<name>
<surname>Gasparini</surname>
<given-names>Andrea</given-names>
</name>
<xref ref-type="aff" rid="aff3"/>
</contrib>
<aff id="aff1">University of Bergen Library, Norway</aff>
<aff id="aff2">NTNU University Library, Trondheim, Norway (until Sept. 2023)</aff>
<aff id="aff3">University of Oslo Library, Norway</aff>
</contrib-group>
<pub-date pub-type="epub">
<month>06</month>
<year>2024</year>
</pub-date>
<volume>34</volume>
<fpage>1</fpage>
<lpage>29</lpage>
<permissions>
<copyright-statement>Copyright 2024, The copyright of this article remains with the author</copyright-statement>
<copyright-year>2024</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See <uri xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</uri>.</license-p>
</license>
</permissions>
<self-uri xlink:href="http://www.liberquarterly.eu/article/10.53377/lq.18137"/>
<abstract>
<p>Artificial intelligence (AI) will drastically influence and change the working methods of scholars and researchers. This paper presents findings from a broad, national survey and a workshop focusing on the challenges and opportunities the advancement of AI poses for PhD candidates, seen from the perspective of library staff working with research support in a number of research libraries in Norway. The paper looks into how research libraries could adapt to the development, addresses the roles of various stakeholders and proposes measures regarding the support of PhD candidates in the responsible use of AI-based tools. Based on insights from the survey and the workshop, the paper also shows what is lacking in the libraries&#x2019; research support services concerning the understanding and utilisation of AI-based tools. The study reveals a degree of uncertainty among librarians about their role in the AI academic nexus. For the development of competences of teaching staff in academic libraries, the paper recommends to integrate AI-related topics into existing educational resources and to create arenas for sharing experiences and knowledge with relevant partners both within and outside the university.</p>
</abstract>
<kwd-group>
<kwd>research libraries</kwd>
<kwd>artificial intelligence (AI)</kwd>
<kwd>ethics</kwd>
<kwd>academic integrity</kwd>
<kwd>PhD on Track</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1. Introduction</title>
<p>Academic libraries play a unique role in supporting researchers and students in their research efforts. They serve as protectors of knowledge and ensure the long-term validity of research findings. Lately, with the digitalisation of knowledge resources by alternative providers, new forms of knowledge access have started to emerge. Library resources are now widely available on various platforms in numerous countries and through different service providers. Within this complex landscape, the arrival of artificial intelligence (AI) has enabled a large variety of innovative solutions. AI-based tools are appearing constantly and everywhere, promising to make our research &#x2013; and the academic libraries&#x2019; research support services &#x2013; easier and more effective, while at the same time exposing us to the danger of misusing those very tools in different ways, intentionally or not, and thereby raising ethical concerns.</p>
<p>The range of tools spans from research support to generative solutions (<xref ref-type="bibr" rid="r6">Gasparini &#x0026; Kautonen, 2023</xref>). Following the release of ChatGPT 3.5, academic libraries reacted with a mix of disbelief, uncertainty, and excitement (<xref ref-type="bibr" rid="r11">Lund &#x0026; Wang, 2023</xref>). ChatGPT, a generative tool that creates new text based on user queries, has paved the way for other powerful tools that will probably change various practices. On the one hand, they may emulate all phases of the academic writing workflow: intelligent applications provide support for searching, analysing, and reviewing research literature, for the writing process and for assessment practices. On the other hand, the use, effects, and implications of this are still uncertain. As <xref ref-type="bibr" rid="r13">Miller (2019)</xref> points out, the staff who develop AI-based tools are usually also the ones who define what counts as a good explanation of how AI works. For academic libraries, the shift toward AI-based services is already in progress (<xref ref-type="bibr" rid="r11">Lund &#x0026; Wang, 2023</xref>). Technological innovations in the field of AI will drastically influence and change the working methods of scholars and researchers in the future. This paper looks into how academic libraries are responding and adapting to this development, with a particular focus on their research support.</p>
<p>To gain insight into attitudes and practices regarding AI among research library staff, the authors conducted a workshop and a survey, both with a thematic focus on PhD education in Norway. The study aimed to describe the challenges and opportunities that the advancement of AI poses for PhD candidates, seen from the perspective of library staff working with research support in Norway. Furthermore, we aimed to investigate how library staff describe the role of academic libraries in this process.</p>
</sec>
<sec id="s2">
<title>2. Method</title>
<sec id="s2a">
<title>2.1. Workshop and Survey in the Context of Academic Libraries&#x2019; PhD Support</title>
<p>The authors organised a workshop titled &#x201C;Artificial intelligence as a topic for PhD learning resources like PhD on Track&#x201D;, held during the annual seminar for &#x201C;The Libraries&#x2019; Network for PhD Support&#x201D; in April 2023 at Oslo Metropolitan University (OsloMet). The national network is hosted by the editorial board of PhD on Track, an online guide and learning resource for PhD candidates from all academic fields, offering guidance on good research practices and addressing various topics relevant to the research workflow, like reviewing literature, publishing issues and data management. The non-profit website PhD on Track is a well-integrated component of the existing research support infrastructure in Norway. It serves as a pertinent example of an online learning resource utilised by both PhD candidates and library staff. Two of the authors are former editorial board members of PhD on Track, and one is a current editor of the website. Consequently, the article&#x2019;s perspective is not intended to provide an objective evaluation of the resource, but to document experiences with and provide input for this service.</p>
<p>The workshop was conducted as a blended event with a total of 75 participants from the majority of Norwegian public universities and university colleges. Together those institutions represent a broad range of academic fields, from the humanities and social sciences to STEM (science, technology, engineering and mathematics), and some of the institutions also cover music, architecture and design, and fine art. The participants were divided into physical groups of 5&#x2013;6 persons and one online group of 5 persons. In Norway, a great number of the library staff working with research support hold a PhD themselves. Their PhD dissertations may have been written in any subject area, spanning from art history or philosophy via the social sciences to the engineering sciences. Some of the library employees work part time in the library and part time as associate professors or lecturers at a department.</p>
<p>Prior to the workshop, the authors conducted a survey among the &#x201C;Libraries&#x2019; Network for PhD Support&#x201D; to explore attitudes, experiences, and practices related to artificial intelligence among library staff in Norway (March/April 2023). Based on the survey, the workshop addressed opportunities and challenges related to AI-based services and tools. In addition, the workshop focused on possibilities for incorporating AI as a topic on relevant pages of PhD on Track (especially the pages on literature mapping, literature search, academic writing, co-authorship, and copyright).</p>
<p>The workshop used the design method &#x201C;Six Thinking Hats&#x201D; (<xref ref-type="bibr" rid="r14">Payette &#x0026; Barnes, 2017</xref>) to discuss the following issues and questions:</p>
<list list-type="bullet">
<list-item><p>How does the use of AI-based tools affect the integrity of research?</p></list-item>
<list-item><p>How should PhD education reflect on AI with regard to academic integrity?</p></list-item>
<list-item><p>Where and how can PhD candidates and researchers find reliable information about AI-based services?</p></list-item>
<list-item><p>What kind of information and advice can/should be integrated into PhD on Track?</p></list-item>
<list-item><p>How can PhD on Track &#x2013; and universities and academic libraries in general &#x2013; address ethical issues that arise in connection with the increasing availability and use of AI-based tools?</p></list-item>
</list>
<p>The Six Thinking Hats method (<xref ref-type="bibr" rid="r14">Payette &#x0026; Barnes, 2017</xref>) was employed because it provides six different ways to approach a complex problem, consider possible solutions, and track progress throughout the thinking process. The systematic approach is represented by different coloured hats that are metaphorically &#x201C;worn&#x201D; by the participants: red, green, yellow, black, white and blue. The red hat represents instinct and stands for felt dangers and challenges; green stands for the creative thinker and represents possibilities and benefits; yellow is the optimist; black is the Devil&#x2019;s advocate, addressing everything that might go wrong in any process; white stands for the voice of reason, i.e. objective and logical thinking; and the blue hat is the conductor&#x2019;s hat, trying to make the best of all the ideas that emerge while wearing the other hats. However, given the restricted time slot and the large number of participants, we adapted and simplified the approach by using only three hats: the red, the yellow and the green one. This not only allowed us to use the allotted time effectively, but also gave participants an additional association to help them consider the theme from different perspectives, namely that of traffic lights: red for alert, yellow for wait, and green for &#x201C;go ahead&#x201D;. The first hat, red, addressed challenges; in the workshop, the scope was on risks and concerns related to the use of AI-based tools. The second hat, yellow, represented positive values; here the workshop focused on experiences, expectations, and wishes regarding the development of AI. The third and final hat, green, addressed creative thinking and, in the workshop, new ideas, possibilities, and concepts for the use of AI in research. Adapting the method by narrowing down the number of hats has also been done by others (<xref ref-type="bibr" rid="r4">Eleni &#x0026; Fotini, 2018</xref>; <xref ref-type="bibr" rid="r7">Grierson, 2017</xref>; <xref ref-type="bibr" rid="r12">McDonald, 2020</xref>; <xref ref-type="bibr" rid="r15">Sangaran Kutty &#x0026; Eileen, 2016</xref>).</p>
<p>Groups were easily formed in the auditorium, and each group was provided with a set of coloured cardboard, while the online group used coloured Google documents for their input.</p>
</sec>
<sec id="s2b">
<title>2.2. Methodologies for the Analysis of the Data</title>
<p>Reflexive thematic analysis was chosen for analysing the workshop data, as it treats the context and the given situation as essential, acknowledging &#x201C;researcher subjectivity as not just valid but a resource&#x201D; (Braun &#x0026; Clarke, 2018). Thematic analysis is a method allowing patterns to emerge from qualitative data (<xref ref-type="bibr" rid="r1">Braun &#x0026; Clarke, 2006</xref>, <xref ref-type="bibr" rid="r2">2012</xref>; <xref ref-type="bibr" rid="r3">Braun et al., 2018</xref>). The analysis of the data (approximately 2,000 words) followed the six steps described by <xref ref-type="bibr" rid="r1">Braun and Clarke (2006)</xref>, with some minor adjustments.</p>
<p>As a first and second step, the authors familiarised themselves with the data, and each of them worked with the output of one of the coloured hats, leading to the establishment of preliminary codes. One of the authors used the word frequency search provided by the qualitative data analysis program NVivo to make sure that important codes were not overlooked. In the third step, each author independently grouped codes into potential themes.</p>
<p>The fourth step was adapted to some degree for the given context, including an adapted form of the affinity mapping method. The goal was to review the themes in relation to the coded extracts, as suggested by <xref ref-type="bibr" rid="r1">Braun and Clarke (2006)</xref>. In this case, the authors opted to present to each other the themes and to discuss their relevance to the collected data. The fifth step was also done <italic>in plenum</italic> and reinforced the value of the different themes. The sixth step, informed by the findings from both the questionnaire and the workshop, led to insights and recommendations detailed in the discussion and conclusion sections of the paper.</p>
<p>Because both the survey and the workshop were conducted in Norwegian, the authors translated all the quotations from Norwegian into English for the sake of this paper. The raw material for this paper, the data, including tables, remarks, and quotations in Norwegian, is available as an attachment to the paper (<xref ref-type="bibr" rid="r8">Grote et al., 2024</xref>).</p>
</sec>
</sec>
<sec id="s3">
<title>3. Findings</title>
<sec id="s3a">
<title>3.1. Findings from the Questionnaire</title>
<disp-quote>
<p><bold>&#x201C;Artificial Intelligence in PhD Education at University and College Libraries: Survey for the Libraries Network for PhD Support (March/April 2023)&#x201D;</bold></p>
</disp-quote>
<p>A total of 57 respondents participated in the survey, with 44 (77.2&#x0025;) working at university libraries, 11 (19.3&#x0025;) at college libraries, and 2 (3.5&#x0025;) at other institutions. When participants were asked to assess their own competence with AI-based tools, the majority indicated that they were familiar with available tools &#x201C;to some extent&#x201D; (68.4&#x0025;), while 22.8&#x0025; felt they knew the tools &#x201C;fairly well&#x201D;. 8.8&#x0025; stated that they had no knowledge of these tools, and none considered their knowledge to be &#x201C;very good.&#x201D; (<xref ref-type="fig" rid="fg001">Figure 1</xref>).</p>
<fig id="fg001">
<label>Fig. 1:</label>
<caption><p>How well do you think you know the AI-based tools available? (Very well / Quite well / Somewhat / Not at all).</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="figures/LIBER_2024_34_Grote_fig1.jpg"/></fig>
<p>The survey revealed that artificial intelligence receives significant attention at participants&#x2019; institutions, with 61.4&#x0025; responding &#x201C;to a high degree&#x201D; and 31.6&#x0025; &#x201C;to a moderate degree.&#x201D; Only 5.3&#x0025; replied &#x201C;to a low degree&#x201D; (<xref ref-type="fig" rid="fg002">Figure 2</xref>).</p>
<fig id="fg002">
<label>Fig. 2:</label>
<caption><p>Is artificial intelligence getting attention at your institution? (To a high degree / To a moderate degree / To a low degree).</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="figures/LIBER_2024_34_Grote_fig2.jpg"/></fig>
<p>Many participants had already received questions about the use of AI-based tools (77.2&#x0025;) from colleagues, researchers, and students. Furthermore, 36.8&#x0025; of respondents had already taught or provided guidance on the use of AI-based tools. AI-based tools mentioned in this context included ChatGPT (56.1&#x0025;), Rayyan (35.1&#x0025;), and Keenious (24.6&#x0025;). Tools repeatedly mentioned in user comments included Grammarly, Transkribus, Litmaps, Iris, ASReview, and Elicit. The responses regarding the potential benefits of using AI-based tools likewise diverge. On one hand, there are benefits related to tools that &#x201C;provide suggestions and ideas for further project development&#x201D; (66.7&#x0025;) and &#x201C;assist with writing&#x201D; (54.4&#x0025;). On the other hand, there are benefits associated with tools that offer &#x201C;increased efficiency&#x201D; (63.2&#x0025;) or support &#x201C;better data analysis&#x201D; (22.8&#x0025;). &#x201C;More accurate results&#x201D; and &#x201C;improved decision-making&#x201D; were also considered benefits (8.8&#x0025; each) (<xref ref-type="fig" rid="fg003">Figure 3</xref>).</p>
<fig id="fg003">
<label>Fig. 3:</label>
<caption><p>What advantages do you see in using AI-based tools? (Select all that apply) (Increased efficiency / More accurate results / Better data analysis / Improved decision-making / Assists with writing / Provides input and ideas for further project development / Other (please specify)).</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="figures/LIBER_2024_34_Grote_fig3.jpg"/></fig>
<p>The majority of participants (87.7&#x0025;) identified ethical challenges associated with the use of AI-based tools, with 54.4&#x0025; being &#x201C;somewhat concerned&#x201D; and 33.3&#x0025; being &#x201C;very concerned.&#x201D; (<xref ref-type="fig" rid="fg004">Figures 4</xref> and <xref ref-type="fig" rid="fg005">5</xref>). The most significant ethical challenge reported was bias (84.2&#x0025;), followed by plagiarism (78.9&#x0025;), cheating (68.4&#x0025;), copyright (66.7&#x0025;), and co-authorship (45.6&#x0025;) (<xref ref-type="fig" rid="fg006">Figure 6</xref>). In their comments, participants highlighted ethical concerns related to data security, misinformation, and a loss of critical thinking due to people being &#x201C;blinded&#x201D; by good language. Additionally, participants mentioned socio-political aspects related to AI production (manual machine learning, environmental aspects) and its effects on opinion formation in society, as one participant put it: &#x201C;I fear that AI will lead to increased apathy in society, uncertainty about what is true and not true, and indifference toward research findings and societal issues, i.e., reduced engagement.&#x201D;</p>
<fig id="fg004">
<label>Fig. 4:</label>
<caption><p>What challenges do you see in using AI-based tools? (Select all that apply) (Difficulties in understanding the technology / Limited availability of tools / Lack of resources for training and implementation / Integration with existing systems / Ethical challenges / Other (please specify)).</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="figures/LIBER_2024_34_Grote_fig4.jpg"/></fig>
<fig id="fg005">
<label>Fig. 5:</label>
<caption><p>How concerned are you about the ethical implications of using AI-based tools in research? (Very concerned / Somewhat concerned / Not particularly concerned).</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="figures/LIBER_2024_34_Grote_fig5.jpg"/></fig>
<fig id="fg006">
<label>Fig. 6:</label>
<caption><p>What ethical challenges do you see in the use of AI-based tools? (Select all that apply) (Copyright / Co-authorship / Plagiarism / Bias / Cheating / Other (please specify)).</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="figures/LIBER_2024_34_Grote_fig6.jpg"/></fig>
<p>In addition to ethical aspects, the difficulty in understanding the technology (64.9&#x0025;) and a lack of resources for training and implementation (57.9&#x0025;) were considered significant challenges when using AI-based tools. Integrating AI-based tools with existing systems (31.6&#x0025;) and limited tool availability (28.1&#x0025;) were also seen as obstacles (<xref ref-type="fig" rid="fg004">Figure 4</xref>). In their comments, participants mentioned challenges such as &#x201C;uncertainty about the limitations of different AI-based tools,&#x201D; &#x201C;risk of misunderstanding AI technology, uncritical use, use for the wrong purposes, bias, and poor empiricism, GDPR issues, and data security,&#x201D; and &#x201C;lack of risk assessment.&#x201D; The lack of transparency and uncertainty about verifiability and truthfulness were also mentioned: &#x201C;How do we know if AI provides reliable results?&#x201D; Furthermore, several respondents stated that building expertise, gaining an overview and a deeper understanding were difficult due to the diversity of tools and the commercial hype around individual applications. Additionally, several participants highlighted challenges in using new and untested tools: AI poses challenges in terms of source criticism and &#x201C;can be detrimental to creativity and deep thinking in the writing process.&#x201D;</p>
<p>The participants in the survey were also asked what types of training or resources they would prefer in order to improve their knowledge of AI-based tools. Collaboration with other library staff or researchers experienced with AI-based tools (66.7&#x0025;), training programs (63.2&#x0025;), workshops on artificial intelligence in research (63.2&#x0025;), and training on specific AI-based tools (63.2&#x0025;) were seen as valuable offerings. In their comments, participants expressed a desire for brief overviews of existing tools and examples for their use, the acquisition of specific tools for testing with staff and students, and basic education on how AI works. Several participants also focused on the risk aspect and &#x201C;security clearance of available tools for various academic purposes&#x201D;: &#x201C;It is important that we preserve our critical thinking and avoid a &#x2018;hallelujah&#x2019; approach!&#x201D;</p>
<p>Finally, participants had the opportunity to provide their own comments on the use or thematic exploration of intelligent tools in PhD education. The speed of change and uncertain access to continually new tools were highlighted as significant challenges. There is a need for skills development, guidance, and a desire for a critical approach:</p>
<list list-type="bullet">
<list-item><p>&#x201C;Important to be able to distinguish the wheat from the chaff and choose a secure product for the right/suitable task.&#x201D;</p></list-item>
<list-item><p>&#x201C;Important to encourage sobriety: Some answers AI provides may seem impressive without actually being so.&#x201D;</p></list-item>
<list-item><p>&#x201C;Everything from AI must be treated with source criticism. Efficiency should not be confused with sheer laziness. Academic work will still be time-consuming!&#x201D;</p></list-item>
<list-item><p>&#x201C;The most important thing is probably that we keep up and see how we can use the tool in a good and sensible way as a resource in research.&#x201D;</p></list-item>
<list-item><p>&#x201C;There is a need for skills development among us working in the library. We need help and resources &#x2013; it&#x2019;s great if something comes on PhD on Track.&#x201D;</p></list-item>
</list>
<p>Additional questions were asked about the national learning resource PhD on Track (<xref ref-type="fig" rid="fg007">Figures 7</xref> and <xref ref-type="fig" rid="fg008">8</xref>). Over 90&#x0025; of participants believe it is important for the resource to include information about AI-based tools (quite important 33.3&#x0025;, very important 57.9&#x0025;). Ethical aspects of AI-based tool use are central for most participants (91.2&#x0025;), followed by examples of AI-based tool usage in research (78.9&#x0025;) and an overview of different AI-based tools (61.4&#x0025;). Approximately half of the participants want guidance on using AI-based tools (49.1&#x0025;), while 29.8&#x0025; want comparisons of different AI-based tools.</p>
<fig id="fg007">
<label>Fig. 7:</label>
<caption><p>How important do you think it is that PhD on Track includes information about AI-based tools? (Very important / Quite important / Not particularly important).</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="figures/LIBER_2024_34_Grote_fig7.jpg"/></fig>
<fig id="fg008">
<label>Fig. 8:</label>
<caption><p>What kind of information do you think should be included in PhD on Track? (Select all that apply) (Overview of different AI tools / Guidelines for using AI tools / Examples of using AI tools in research / Comparisons of different AI tools / Ethical aspects of using AI tools / Other (please specify)).</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="figures/LIBER_2024_34_Grote_fig8.jpg"/></fig>
<p>In their comments, participants recommend that PhD on Track should focus on a selection of the &#x201C;most general&#x201D; tools. Several propose a focus on &#x201C;limitations of the various tools&#x201D; and &#x201C;examples of when AI doesn&#x2019;t work.&#x201D; Similarly, topics such as source criticism, information about the algorithms and sources used by AI-based tools, and data security when using AI are highlighted. 40.4&#x0025; of participants suggest creating a separate module on artificial intelligence in PhD on Track, while others propose integrating information about AI-based tools into sections on Searching (66.7&#x0025;), Writing (64.9&#x0025;), Reviewing (57.9&#x0025;), Copyright (45.6&#x0025;), Data management (29.8&#x0025;), and Co-authorship (28.1&#x0025;). In their comments, several participants mention the possibility of &#x201C;creating a separate module on AI in an introductory phase when this is new and of great interest, and gradually integrating it into the other modules.&#x201D;</p>
</sec>
<sec id="s3b">
<title>3.2. Findings from the Workshop</title>
<disp-quote>
<p><bold>&#x201C;Artificial intelligence as a topic for PhD learning resources like PhD on Track&#x201D; (OsloMet, Norway, late April 2023)</bold></p>
</disp-quote>
<p>Following steps 3&#x2013;5 of the thematic analysis method, the authors grouped the statements into six thematic categories, each containing aspects and perspectives from the participants wearing the red, the yellow and the green hat:</p>
<list list-type="simple">
<list-item><p>Theme 1: Technology</p></list-item>
<list-item><p>Theme 2: Knowledge about AI</p></list-item>
<list-item><p>Theme 3: Ethical issues</p></list-item>
<list-item><p>Theme 4: A need for more competence</p></list-item>
<list-item><p>Theme 5: The role of the library</p></list-item>
<list-item><p>Theme 6: Learning resources</p></list-item>
</list>
<p>We started with a look at the challenges, expectations and opportunities connected to rapidly changing technology, knowledge about AI, and ethics. Closely related to these themes are the need for more competence, the (future) role of the libraries, and the purpose and design of learning resources. Ethical issues were addressed throughout the workshop: when we examined the material and notes from the workshop, we found that different aspects of ethics were addressed and that mutually related themes were discussed, spanning from technology and knowledge via intellectual property rights, plagiarism and source criticism to sustainability.</p>
<sec id="s3b1">
<title>3.2.1. Theme 1: Technology</title>
<p>The answers linked to the keyword <italic>technology</italic> summarise challenges arising from the rapid development of artificial intelligence. Key concerns in the workshop data focused on the fact that an AI-based service could quickly become outdated.</p>
<p>Furthermore, this raises issues of technical maintenance and changing legal rules. The first issue involves infrastructure that the library rarely controls and which is often expensive. The second points to the fact that existing laws may not keep pace with technological advancements and can thus be exploited by companies seeking to position themselves in the market.</p>
<p>The technical aspect highlighted by the workshop participants concerns where the information comes from, where the data fed into the system ends up, and which companies are behind the services. In addition, it was pointed out that the English language and Western culture often set the tone in innovative technology, reinforcing possible bias. This was considered a major challenge. Several of the participants were concerned about this, which they also related to ethical issues regarding ethnicity, gender, and different languages and cultures. As the participants put it: &#x201C;Language and the choice of words are never objective or neutral.&#x201D; &#x201C;The English language and American culture characterise the tools, and this might create problems in other languages and cultures.&#x201D;</p>
<p>Another perspective that emerged from the design task concerned the varying expectations participants had regarding technology and development. The strategic aspect was central, as several participants expressed the view that technological development within AI cannot be stopped. In particular, there was an expectation that more specific tools, not just general ones, would emerge to support researchers and PhD candidates. There was also general agreement that such tools will be able to change the way research is conducted, for example by processing large amounts of data. Finally, participants expected that research institutions must be proactive, as artificial intelligence is developing rapidly and is always ahead of us with regard to ethics, pedagogy, and assessment.</p>
<p>When participants were asked what opportunities they saw in the new wave of AI services, the focus was primarily on research and teaching. Examples mentioned include using AI to check the quality of data or to give an overview of large quantities of material. The feedback on opportunities also predicted a change in the way research will be done in the future, with proofreading, translation, outlining, and summarisation of knowledge becoming more automated. AI was also seen as a potential sparring partner, offering unforeseen opportunities in writing support.</p>
</sec>
<sec id="s3b2">
<title>3.2.2. Theme 2: Knowledge about AI</title>
<p>Most of the participants highlighted the challenge that general information on artificial intelligence quickly becomes outdated, as well as the difficulty of conveying the complex nature of the topic. As one participant stated: &#x201C;Difficult to convey the complexity. People turn to websites for answers!&#x201D;</p>
<p>Additionally, the participants expressed a need for more certainty about how to use artificial intelligence effectively. The participants agreed on the importance of defining and categorising the types of AI relevant to specific subjects and establishing a professional environment for their presentation.</p>
<p>The participants emphasised the difficulty of maintaining an overview and deciding which tools are relevant for presentation in library resources. This frustration is captured in the succinct statement: &#x201C;How can we have the overview?&#x201D; Some tools can be helpful, while others must be approached cautiously. Participants voiced their concerns: &#x201C;Clarify pitfalls, show weaknesses.&#x201D; Given the diversity of policies regarding access to and use of artificial intelligence, the participants expressed uncertainty about how this could be conveyed.</p>
</sec>
<sec id="s3b3">
<title>3.2.3. Theme 3: Ethical Issues</title>
<p>While wearing the red hat, some of the participants were concerned with the complexity of legal aspects and copyright law: One group highlighted the &#x201C;lack of laws and regulation&#x201D; and emphasised the necessity &#x201C;to take care of the copyright&#x201D;. Another group mentioned that &#x201C;tools are based on our work, and on our data, without us having insight into or control over the work&#x201D;. While one group asked, &#x201C;legal implications &#x2013; do we understand everything?&#x201D;, another was concerned with &#x201C;intellectual property, personal data, plagiarism issues&#x201D;.</p>
<p>One group mentioned that artificial intelligence-based tools raise questions concerning &#x201C;access, institutional policy, data processor agreement etc&#x201D;. The implications of the GDPR and the management of personal data were noted, with worries about &#x201C;big tech, storage of data, all the profiles that are created&#x201D;.</p>
<p>A red flag raised by some participants is that the use of AI-based tools may lead to plagiarism among PhD candidates. They expressed a concern: &#x201C;Can an assignment be rejected?&#x201D; Transparency is a keyword here. People are &#x201C;insecure about guidelines, ethics, and legal issues&#x201D;. It is also &#x201C;an ethical challenge that lack of competence on artificial intelligence might implicate that we misinterpret the results and the starting point for them&#x201D;.</p>
<p>AI-based tools are &#x201C;developed by private companies for profit reasons&#x201D;, and &#x201C;commercially based systems might change the intended content without doing this in an academic way&#x201D;. The problem is that we do not know &#x201C;who is in charge of the content&#x201D;.</p>
<p>Regarding the evaluation of sources, there is a clear need for transparency about the quality of text and data. Guidance is equally needed, as libraries often encounter patrons who, after using AI-based tools, request literature and references that do not exist.</p>
<p>Looking at AI-based tools from another point of view, wearing the yellow hat to indicate expectations and possibilities, several groups advocated good training and good websites as their main expectation regarding how to handle ethical issues: &#x201C;Try to collect in one place what concerns ethical problems and limitations in AI-based tools and AI in general. Refer to that place when teaching different tools practically&#x201D;. The groups suggested that it should be &#x201C;clearly explained what source criticism implies for credibility with AI software&#x201D; and that &#x201C;good guidance in how to use/how not to use&#x201D; the tools should be offered. There was a strong call to &#x201C;link the guidance to ethical norms for good research practice&#x201D; and to &#x201C;promote the actual ideals of knowledge. For example, knowledge over publication, maturation over production. Give good examples of responsible use&#x201D;.</p>
<p>With the green hat on, the groups were asked to identify (positive) possibilities concerning AI-based tools. Here, they pointed out that an ethical awareness might emerge: &#x201C;With good guidance in artificial intelligence, there is a good chance that ethics will become an important component for those who use tools&#x201D;. &#x201C;We can make ChatGPT better with our input (do we really have a responsibility here?)&#x201D;, and &#x201C;we can show the academic ideals in the face of AI: Originality, thoroughness, verifiability and so on&#x201D;.</p>
<p>The discussion on ethical challenges and possibilities brings us to the next theme: the need for competence enhancement among library staff.</p>
</sec>
<sec id="s3b4">
<title>3.2.4. Theme 4: A Need for more Competence</title>
<p>Among the challenges pointed out by the groups is the need to &#x201C;understand possibilities and limitations (i.e., to know what the tools are for)&#x201D;. Further, there is an &#x201C;unpredictable risk of not being updated&#x201D;, and of course, it is &#x201C;difficult to maintain a competence which is sufficient&#x201D;. Another issue is that there are &#x201C;so many different tools, and it is difficult to have an overview of what you canNOT achieve with them&#x201D;. There is also a &#x201C;problem with provenance, and what you should trust when there is an increasing amount of fake news&#x201D;.</p>
<p>Participants expressed uncertainty given the large quantity of tools: &#x201C;There are a lot of tools. Which ones do we choose?&#x201D; &#x201C;Are they safe to use?&#x201D; An important aspect of this is of course the rapid pace of development in AI technology: &#x201C;Those who do not keep up with artificial intelligence will fall behind. This will cause big differences&#x201D;.</p>
<p>To address these challenges, library employees clearly need enhanced competence regarding AI-based tools. Concerning expectations, some participants suggested the creation of &#x201C;a discussion forum to raise awareness about tools and possibilities and limitations&#x201D; and to share examples of best practice. &#x201C;The technical aspects must be problematised,&#x201D; they noted.</p>
<p>Looking at the opportunities, some groups indicated that staff can help each other and &#x201C;use the skills of others to put together a good course&#x201D;. They suggested to &#x201C;organise workshops for PhD candidates, and for their supervisors as well&#x201D;. Of course, one has to train the trainers: There is a need for &#x201C;workshops and training for librarians. We must be able to do it ourselves&#x201D;. This raises questions about time allocation and responsibility: &#x201C;Who follows up?&#x201D;</p>
</sec>
<sec id="s3b5">
<title>3.2.5. Theme 5: The Role of the Library</title>
<p>For academic libraries, the rise of AI brings certain challenges, risks and concerns. Several participants highlighted the challenge that providing information and guidance on artificial intelligence entails responsibility in terms of selecting and evaluating specific (commercial) tools. This raises questions about the &#x201C;boundaries between recognition and recommendations&#x201D; and between information and &#x201C;marketing&#x201D;. &#x201C;Presentation of such a complex topic: It is difficult to stay neutral&#x201D;.</p>
<p>The role of the library is crucial in this context: Will the library be perceived as an &#x201C;authority&#x201D; with &#x201C;credibility&#x201D; and &#x201C;reliability,&#x201D; and what should be the &#x201C;normative&#x201D; nature of library learning resources? &#x201C;Could the learning resource (i.e. PhD on Track) become an authority about something that the institutions do not agree on internally?&#x201D; The question of competence comes with it: &#x201C;Does one have enough expertise in the libraries to create good modules in a learning resource?&#x201D; &#x201C;Are librarians the right group to create such tools?&#x201D;</p>
<p>&#x201C;We expect people with technical expertise and those with a reflective and questioning approach to come together&#x201D;: The library also plays a role in the development of artificial intelligence; it can &#x201C;contribute to connecting academic principles to developments in artificial intelligence&#x201D; and be involved in the development of policies from institutional to national levels.</p>
<p>Workshop participants also identified new opportunities in addressing artificial intelligence in research library learning resources: It can provide &#x201C;greater confidence for the library if we can introduce artificial intelligence tools into education in a good way.&#x201D; The focus on AI can be &#x201C;positive for collaboration with other actors,&#x201D; and &#x201C;the library becomes involved in the disciplines.&#x201D;</p>
</sec>
<sec id="s3b6">
<title>3.2.6. Theme 6: Learning Resources</title>
<p>As the workshop was organised by several of the editors of PhD on Track and explicitly aimed to collect contributions to the development of the resource, some of the feedback focuses on the development of the website <ext-link ext-link-type="uri" xlink:href="http://phdontrack.net">phdontrack.net</ext-link>. When appropriate, we have generalised &#x201C;PhD on Track&#x201D; in the translation to &#x201C;learning resources&#x201D;.</p>
<p>The workshop participants identified several potential challenges when developing new content on artificial intelligence for learning resources aimed at PhD candidates and young researchers. The heterogeneity of the target audience makes it difficult to tailor the learning content to the users. &#x201C;Challenges related to different academic traditions&#x201D; and varying levels of prior knowledge and competency within the target group were emphasised. At the same time, the issue of creating false expectations was raised, such as assumptions of improved efficiency: &#x201C;Artificial intelligence will create expectations, for example, about saving time, etc. But is that realistic?&#x201D; &#x201C;Can artificial intelligence be a time thief rather than a helper?&#x201D; Furthermore, several participants highlighted the didactic consequences of extensive use of artificial intelligence. &#x201C;Strong focus on the technical aspects, potentially overshadowing the educational aspect,&#x201D; can lead to a de-prioritisation of learning and the development of fundamental skills, especially in the writing process: &#x201C;One learns by writing; we must write ourselves,&#x201D; &#x201C;it is important to find one&#x2019;s own voice, one&#x2019;s personality (writing style).&#x201D; Some participants expressed concerns that artificial intelligence may contribute to &#x201C;intellectual laziness,&#x201D; resulting in &#x201C;uniform&#x201D; and &#x201C;frictionless&#x201D; texts: &#x201C;Everything ends up looking the same in the end.&#x201D;</p>
<p>Similarly, participants&#x2019; expectations had a strong didactic focus. Research library learning resources &#x201C;should have a critical, pedagogical approach that promotes responsible use and prevents misuse&#x201D; and &#x201C;enable people to engage with it in the best way possible (both find utility, motivate, and be critical),&#x201D; &#x201C;help people make informed choices.&#x201D; A prerequisite for this is that the content is continuously updated: Users expect learning resources &#x201C;to provide useful guidance that doesn&#x2019;t become outdated, even if the tools do.&#x201D; Existing skills should not be phased out either: New information about AI &#x201C;must be balanced with information about &#x2018;craftsmanship,&#x2019; including searching, writing, source criticism, etc.&#x201D;</p>
<p>An online resource should also &#x201C;acknowledge that different academic communities may have different usage/support needs&#x201D; and provide &#x201C;support for both PhD candidates and teachers and administrators.&#x201D; To reach the target audiences, participants suggested that libraries &#x201C;invite PhD students to take part in the development&#x201D; of learning resources, &#x201C;hold webinars on artificial intelligence for PhD students,&#x201D; and &#x201C;follow up on the use of AI-based tools among PhD candidates through user surveys.&#x201D; The participants recommended setting up &#x201C;a reference group, a working group that develops resources on artificial intelligence.&#x201D;</p>
<p>Developing AI-related content for library learning resources helps &#x201C;highlight the positive aspects, the opportunities that exist, for example with cases, interviews, films, etc.&#x201D; and &#x201C;provide support on how to use AI-based tools.&#x201D; Participants emphasised that it is important &#x201C;to have contact with research communities to gain experiences,&#x201D; &#x201C;to find the users&#x2019; level&#x201D; and to consider &#x201C;differences between disciplines.&#x201D; They also expressed an &#x201C;expectation of democratisation/equalising some differences, establishing a common denominator (by helping many reach an OK level).&#x201D;</p>
<p>Learning resources about AI should &#x201C;explain how the technology itself works,&#x201D; but also show &#x201C;opportunities for developing ideas using tools.&#x201D; The focus cannot be purely technical but must also consider perspectives like &#x201C;learning theory and scholarly practice.&#x201D; The presentation of AI should be &#x201C;balanced: Not only writing and AI but also data analysis and other parts of the PhD life,&#x201D; and &#x201C;describe possibilities and limitations of what an artificial intelligence tool can assist with at all stages of the research process.&#x201D; Different opinions were expressed about how information about AI should be integrated into learning resources like PhD on Track. While some &#x201C;prefer it to be integrated rather than a separate track,&#x201D; &#x201C;addressing&#x2026; opportunities and challenges in each of the main categories on the website,&#x201D; others recommended a &#x201C;more overarching module on artificial intelligence.&#x201D;</p>
<p>When describing expectations and opportunities related to AI in learning resources, several participants mentioned concepts like a &#x201C;tool compass,&#x201D; a &#x201C;selection tool,&#x201D; a &#x201C;guide&#x201D; that could be &#x201C;supportive in understanding which tools are available and what possibilities they offer&#x201D;: &#x201C;matrices of different tools,&#x201D; an &#x201C;up-to-date overview, clear, user-friendly,&#x201D; preferably experience-based with concrete &#x201C;examples of use with positive/negative outcomes&#x201D; (e.g. &#x201C;transcription, systematic review, text mining&#x201D;) and &#x201C;best practice tips.&#x201D; Different opinions were expressed about how detailed such a presentation should be. Some suggested &#x201C;showing the whole landscape with both pitfalls and opportunities,&#x201D; while others recommended &#x201C;not going into depth on specific tools or aiming to provide overviews of tools.&#x201D; Additionally, it was suggested to provide lists of links to &#x201C;relevant guidelines, resources, projects, research.&#x201D;</p>
<p>Several participants described opportunities for more interactivity in learning resources. Firstly, AI-related tasks can enhance active learning on webpages: &#x201C;Select an example,&#x201D; &#x201C;discuss general usage,&#x201D; &#x201C;examples of where it went wrong,&#x201D; &#x201C;what is YOUR understanding of YOUR results,&#x201D; &#x201C;discuss something you are familiar with [&#x2026;], but ask to find limitations, be critical, ethical concerns&#x201D;. Secondly, participants expressed a desire for more discussion of personal practices, both with PhD candidates and colleagues: Since artificial intelligence is new to everyone, it is important to have &#x201C;an opportunity to share experiences with teaching/guidance related to artificial intelligence.&#x201D; Discussions can be encouraged by &#x201C;including cases that show different experiences [&#x2026;], discussing categories of artificial intelligence tools and their use in different phases: the search phase, the writing phase, and so on.&#x201D; Formats suggested for this include a &#x201C;forum wall for sharing experiences,&#x201D; a &#x201C;chat&#x201D; or a &#x201C;hotline,&#x201D; or a link to &#x201C;other actors that have forums&#x201D; like BISON Dataverse. Intelligent technical support services were also proposed: &#x201C;Chat based on artificial intelligence that responds to inquiries we receive at the library,&#x201D; &#x201C;using artificial intelligence to create a newsfeed linked to keywords in discussions about what&#x2019;s happening in the artificial intelligence world. The feed with new academic articles.&#x201D;</p>
</sec>
</sec>
</sec>
<sec id="s4">
<title>4. Discussion</title>
<p>Considering the traditional strengths of academic research libraries, one is tempted to look at AI-based tools from the point of view of finding, managing and evaluating relevant and trustworthy sources: A subject librarian, or a research librarian, should be the one to help PhD candidates, as well as students and researchers, find their way in the jungle of information, even if this jungle is nowadays overgrown with the emergence of AI-based tools. In an ideal world, library staff could be the ones to provide guidance, help researchers decide which tools are helpful for their project, give advice on how to use them, and be able to inform and discuss the pros and cons of the tool in question.</p>
<p>The findings of both the survey and the workshop conducted for this study show an uncertainty among librarians about their own role regarding the use of AI in academia. Do they have the competency and the overview to offer qualified advice? How should they navigate the rapid development of technology, the constant change of existing tools and the emergence of new ones? How should they deal with the variable, non-reproducible output of AI-based tools, which is generated without any form of quality control? What is their role regarding assessment and ethical judgement of the use of AI-based tools in research? While the library services aim to provide transparent workflows, quality-checked metadata and systematic research methods, the new AI-based tools appear as &#x201C;black boxes&#x201D; with uncertain or opaque work results. As Guidotti et al. state, AI-based tools &#x201C;hide their internal logic to the user. This lack of explanation constitutes both a practical and an ethical issue&#x201D; (<xref ref-type="bibr" rid="r9">Guidotti et al., 2018</xref>). To avoid this, the Norwegian National Committee for Research Ethics in Science and Technology has recommended that AI research should aim to create &#x201C;glass boxes&#x201D;, i.e. systems that can be inspected (<xref ref-type="bibr" rid="r16">The National Committee for Research Ethics in Science and Technology, 2019</xref>). At present, however, academic librarians encounter a constantly expanding variety of different technological offerings and the associated technical, methodological, and ethical issues. Instead of transparency, complexity, opacity, and continual change prevail. This leads to questions about the legitimacy of the research support offered by the libraries and to a need to define the role of the library in the research process.</p>
<p>The survey reveals that artificial intelligence is receiving significant attention at Norwegian research libraries, with a particular emphasis on ethical challenges. Some of the key concerns revolve around bias, plagiarism, cheating, and copyright violations. Participants also highlighted problems concerning data security, misinformation, and a potential loss of critical thinking due to the extensive use of AI-based tools. These challenges can affect public opinion formation and the recognition of research results in society. An ethical awareness concerning AI and information is therefore of high importance, all the more so because, given the time it takes to draft and approve legislation, laws will always lag behind fast-developing and emerging technologies. In conclusion, research institutions must be proactive in addressing the swift advancement of artificial intelligence and be involved in the discussion of ethical issues, pedagogical consequences and policymaking related to the use of AI-based tools in academia.</p>
<p>The rapid development in the field brings with it a challenge for library staff to keep pace. The majority of participants indicated some familiarity with available AI-based tools, and most of them had received questions about these tools. More than a third had already provided guidance or teaching on the use of AI-based tools. This reveals a strong need and desire for knowledge updates among library staff, both regarding the functionality of AI-based tools and methods for critical evaluation of the results they generate. The difficulty of maintaining an overview and assessing tools with regard to presentation on library resources was a main concern for many participants. The uncertainty with respect to potential pitfalls and weaknesses of AI-based tools is exacerbated by the fact that public representation is dominated by the developers&#x2019; perspective and their commercial interests.</p>
<p>Considering the variety of policies governing access to and utilisation of artificial intelligence in academia, the findings of the workshop confirm uncertainty among library staff about how a critical assessment of available tools could be achieved. In conclusion, continuing education aimed at enhancing the skills, knowledge and faculty of critical judgement among library staff is a key factor for research libraries confronting the development of AI. A need for more competence development among library staff is one of the most important findings from the survey and the workshop, corresponding with the findings of recent studies about the academic libraries&#x2019; role in facing the advancement of AI technology (<xref ref-type="bibr" rid="r5">Gasparini &#x0026; Kautonen, 2022</xref>). New forms of information literacy are clearly on their way, and the roles of library staff and libraries are changing accordingly.</p>
<p>Education in &#x201C;transferable&#x201D; or &#x201C;generic&#x201D; skills for PhD candidates has been a part of library services for many years, but with the proliferation of artificial intelligence, these skills have become more in demand. The results from the survey and the workshop indicate a requirement to integrate questions about artificial intelligence in many of the topics these courses address. On one hand, there is a need to learn how to use new intelligent tools to support practical research work, such as in searching, literature review, and writing. On the other hand, there is a necessity to critically evaluate the consequences of such use. In this context, particular significance is attributed to source criticism, which traditionally constitutes a core competence of academic libraries. However, with the proliferation of artificial intelligence, it gains new relevance while simultaneously facing new methodological challenges.</p>
<p>As the survey participants emphasise, the training provided by libraries should not have a purely practical or technical focus, but must also consider methodological, epistemological, and ethical implications. This will require a combination of instructive teaching and more discussion-based, student-active learning methods. Given the rapid development of AI-based tools and the challenge of keeping one&#x2019;s own knowledge up to date, many workshop participants underline the importance of sharing experiences regarding scholarly practice and concrete examples of working with these tools.</p>
<p>However, many participants reported a lack of resources for training and education on AI-based tools. They particularly valued cooperative learning formats, including collaboration with other library staff or researchers experienced with AI-based tools and workshops on artificial intelligence in research. Participants expressed a desire for concise overviews of existing AI-based tools, hands-on testing of specific tools, and basic education on how AI works, all with a critical perspective. The findings from the survey and the workshop indicate a plea for a sober approach to AI in academic libraries, a strong appeal for a realistic attitude that considers both the challenges, risks and opportunities of technological development.</p>
<p>Providing information and guidance on artificial intelligence entails responsibilities in terms of selecting and evaluating specific (commercial) tools. The proprietary ownership of AI-based tools impedes informed and unrestricted access. For economic and copyright reasons, &#x201C;the biggest models deployed today, such as GPT-4 and PaLM 2, are closed source, proprietary models&#x201D; (<xref ref-type="bibr" rid="r10">Harris, 2023</xref>). In this context, the workshop revealed questions about the boundaries between recognition and recommendations and the difficulty of staying neutral regarding the use of commercial tools. Academic libraries play a key role in this matter: Will they be perceived as authorities with credibility and reliability, and how normative should advice given by the libraries be? The question of competence comes with it: What degree of expertise is needed, do library staff have the necessary expertise to create good learning resources, and how could the requisite development of new competences be achieved? At the same time, academic libraries also have a role in the evolution of artificial intelligence; they can contribute to connecting academic principles to the development of new technology and be involved in the making of policies from institutional to national levels. Academic libraries should not pursue their AI-related activities and initiatives on their own; cooperation to share experiences should be mandatory. This &#x2013; and addressing artificial intelligence in the education conducted by research libraries &#x2013; can contribute to their reputation as reliable and expert partners in the research process and have a positive effect on their collaboration with other actors and their involvement in the disciplines.</p>
<p>When considering the possible effects AI-based tools may have on research and scholarly publication, a prompt reaction from the leadership in academic libraries is necessary. Given the complex landscape and the expectation to help PhD researchers, discussion in the libraries should centre on skills. For instance, library staff can step into the role of PhD researchers and use AI-based tools actively themselves. By gathering hands-on experience with the tools, library staff could become sparring partners for students and researchers in this field. Furthermore, library staff should tie academic integrity to the use of different AI-based technologies and tools. On this basis, library staff could develop a roadmap for the productive and ethical use of technology.</p>
<p>The call for training and further education of library staff means that library leaders must allow librarians to set aside enough time to keep themselves informed and updated. To achieve this, to maintain the knowledge, and to have a place for questions and answers, online learning resources play a key role in individual, part-time, and professional development learning. Collecting information and educational resources in one central and easily accessible location seems to be the most practicable solution. Library staff expect the resource PhD on Track and the Libraries&#x2019; Network for PhD Support to play a significant role in this, with a strong focus on promoting good research practices and fostering discussions on the ethical aspects of AI-based tool usage.</p>
<p>The workshop asked the participants for input regarding the development of learning resources and harvested multiple suggestions concerning content, design, and possible extensions of existing services. Library staff expect not only technical information but also a critical approach to the use of AI-based tools in academia. A strong focus solely on technical aspects has the potential to create false expectations and to invite questionable practices in the use of AI. Therefore, a focus on academic integrity and an ethical perspective on AI-based tools is recommended. The pedagogical approach should promote responsible use, prevent misuse, and help researchers make informed choices by describing both opportunities and limitations in the use of AI-based tools at all stages of the research process. In order to find the users&#x2019; level of competence and to take into account the different needs of different research disciplines, it is recommended to involve PhD candidates, researchers, supervisors and administrators in the development of educational resources. A focus on workflow, advice on best practice in the use of AI-based tools (with examples of both positive and negative outcomes), and a strongly interactive approach that allows the discussion of personal experiences are preferable ways to meet the educational challenges that AI brings for researchers.</p>
</sec>
<sec id="s5">
<title>5. Recommendations</title>
<p>Based on our findings, we recommend the following:</p>
<list list-type="order">
<list-item><p>Academic libraries should take an active role in the introduction of AI in research. They should aim to develop a holistic approach, including technical support, guidance on best practice, and critical assessment of the use of AI-based tools. The library&#x2019;s teaching should be perspective-giving and dialogue-based; technical instruction should be followed by discussion-based and student-active learning methods.</p></list-item>
<list-item><p>Library leadership ought to provide avenues for staff to acquire and sustain the required expertise for utilising and critically evaluating AI-based tools. To achieve this, it is essential for library management to establish collaborative arenas that allow staff to engage in projects related to the use and evaluation of AI-based tools, working with partners both within and beyond the campus.</p></list-item>
<list-item><p>Academic libraries should involve themselves in policy making by creating guidelines for good practice when AI-based tools are used in research. As an exemplary reference for best research practices, the national educational resource PhD on Track should undergo further development regarding the inclusion of AI-related topics.</p></list-item>
</list>
</sec>
<sec id="s6">
<title>6. Conclusion</title>
<p>The traditional role of academic research libraries, characterised by expertise in finding, managing, and evaluating trustworthy sources, faces new complexities with the emergence of new AI-based technologies. PhD candidates and researchers increasingly require education in AI-based tools and their responsible use. The results of a survey and a workshop with library staff working with research support in Norway reveal a degree of uncertainty among librarians regarding their role at the intersection of AI and academia. The rapid technological development, the opaqueness of AI outputs, ethical implications and the absence of guidelines create challenges in providing research support that aligns with academic integrity. Library staff must navigate the evolving AI landscape by continually updating their knowledge and competencies.</p>
<p>Learning resources targeted at PhD candidates play a key role in the skill development of teaching staff in academic libraries. The findings of this study demonstrate a need to integrate AI-related content into existing educational resources and activities. The development of learning resources in academic libraries should encompass both practical aspects and critical evaluation of AI-based tools. The ethical reflection of the use of AI-based tools should take centre stage in these endeavours, with consequences for teaching methods: Library instruction should combine technical support and guidance in best practice with discussion-based and student-engaged learning techniques. To accomplish this, it is necessary to establish spaces where staff can share their experiences, discuss examples of best practice and cooperate both with colleagues and researchers.</p>
<p>Library leadership should ensure that employees can gain &#x2013; and maintain &#x2013; necessary competence in the use and critical assessment of AI-based tools. Special focus should here be on the development of learning resources, being used as exemplary references for best research practices, including recommendations for the appropriate and ethically responsible application of AI-based tools. Library leadership should create arenas where employees can participate in projects regarding the use and assessment of AI-based tools together with relevant partners inside and outside the campus.</p>
<p>Academic libraries are not policy makers for the academic sector, but they can actively involve themselves in policy discussions by creating guidelines for good research practices in AI-based tool usage. Expertise in AI literacy can enhance the role of academic libraries as providers of key competencies in information literacy. Traditional library competences like source criticism, search documentation and reference management gain new relevance in the AI era.</p>
<p>Although retrieved in a qualitative study with a limited number of participants from a Norwegian context, the findings from the material reach far beyond the situation in Norway, and can have great relevance for academic libraries in general. In conclusion, academic libraries face the challenge of adapting to the rapid development of AI while preserving their core values of academic integrity and critical thinking. By addressing the complexities of AI with a sober, ethical, and collaborative approach, libraries can strengthen their role as reliable partners in the research process and enhance their collaboration with other stakeholders in academia.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="r1"><mixed-citation>Braun, V., &#x0026; Clarke, V. (2006). Using thematic analysis in psychology. <italic>Qualitative Research in Psychology</italic>, <italic>3</italic>(2), 77&#x2013;101. <ext-link ext-link-type="doi" xlink:href="10.1191/1478088706qp063oa">https://doi.org/10.1191/1478088706qp063oa</ext-link></mixed-citation></ref>
<ref id="r2"><mixed-citation>Braun, V., &#x0026; Clarke, V. (2012). Thematic analysis. In <italic>APA handbook of research methods in psychology</italic> (Vol 2). <italic>Research designs: Quantitative, qualitative, neuropsychological, and biological</italic> (pp. 57&#x2013;71). American Psychological Association. <ext-link ext-link-type="doi" xlink:href="10.1037/13620-004">https://doi.org/10.1037/13620-004</ext-link></mixed-citation></ref>
<ref id="r3"><mixed-citation>Braun, V., Clarke, V., Hayfield, N., &#x0026; Terry, G. (2018). Thematic analysis. In P. Liamputtong (Ed.), <italic>Handbook of research methods in health social sciences</italic> (pp. 1&#x2013;18). Springer. <ext-link ext-link-type="doi" xlink:href="10.1007/978-981-10-2779-6">https://doi.org/10.1007/978-981-10-2779-6</ext-link></mixed-citation></ref>
<ref id="r4"><mixed-citation>Eleni, N., &#x0026; Fotini, P. (2018). Developing interdisciplinary instructional design through creative problem-solving by the pillars of STEAM methodology. In M. Auer &#x0026; T. Tsiatsos (Eds.), <italic>Interactive mobile communication technologies and learning. IMCL 2017: Vol.725</italic>. <italic>Advances in intelligent systems and computing</italic> (pp. 89&#x2013;97). Springer. <ext-link ext-link-type="doi" xlink:href="10.1007/978-3-319-75175-7_10">https://doi.org/10.1007/978-3-319-75175-7_10</ext-link></mixed-citation></ref>
<ref id="r5"><mixed-citation>Gasparini, A., &#x0026; Kautonen, H. (2022). Understanding artificial intelligence in research libraries &#x2013; Extensive literature review. <italic>LIBER Quarterly: The Journal of the Association of European Research Libraries</italic>, <italic>32</italic>(1), 1&#x2013;36. <ext-link ext-link-type="doi" xlink:href="10.53377/lq.10934">https://doi.org/10.53377/lq.10934</ext-link></mixed-citation></ref>
<ref id="r6"><mixed-citation>Gasparini, A., &#x0026; Kautonen, H. (2023). <italic>AI-based tools</italic>. <ext-link ext-link-type="uri" xlink:href="https://www.doria.fi/handle/10024/186899">https://www.doria.fi/handle/10024/186899</ext-link></mixed-citation></ref>
<ref id="r7"><mixed-citation>Grierson, B. (2017, April 3). The concept garden &#x2013; 3 hats. <italic>Perth Innovation Blog</italic>. <ext-link ext-link-type="uri" xlink:href="https://perthinnovation.com/news/blog/the-concept-garden-3-hats/">https://perthinnovation.com/news/blog/the-concept-garden-3-hats/</ext-link></mixed-citation></ref>
<ref id="r8"><mixed-citation>Grote, M., Faber, H. C., &#x0026; Gasparini, A. A. (2024). Artificial intelligence in PhD education: New perspectives for research libraries [Data set]. <italic>Zenodo</italic>. <ext-link ext-link-type="doi" xlink:href="10.5281/zenodo.10730751">https://doi.org/10.5281/zenodo.10730751</ext-link></mixed-citation></ref>
<ref id="r9"><mixed-citation>Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., &#x0026; Pedreschi, D. (2018). A survey of methods for explaining black box models. <italic>ACM Computing Surveys</italic>, <italic>51</italic>(5), Article 93. <ext-link ext-link-type="doi" xlink:href="10.1145/3236009">https://doi.org/10.1145/3236009</ext-link></mixed-citation></ref>
<ref id="r10"><mixed-citation>Harris, L. A. (2023). <italic>Generative artificial intelligence: Overview, issues, and questions for congress</italic>. Congressional Research Service. <ext-link ext-link-type="uri" xlink:href="https://crsreports.congress.gov/product/pdf/IF/IF12426">https://crsreports.congress.gov/product/pdf/IF/IF12426</ext-link></mixed-citation></ref>
<ref id="r11"><mixed-citation>Lund, B. D., &#x0026; Wang, T. (2023). Chatting about ChatGPT: How may AI and GPT impact academia and libraries? <italic>Library Hi Tech News</italic>, <italic>40</italic>(3), 26&#x2013;29. <ext-link ext-link-type="doi" xlink:href="10.1108/LHTN-01-2023-0009">https://doi.org/10.1108/LHTN-01-2023-0009</ext-link></mixed-citation></ref>
<ref id="r12"><mixed-citation>McDonald, A. (2020, February 20). Product management&#x2019;s three thinking hats. Three concerns to bear in mind, three roles to play. <italic>Medium</italic>. <ext-link ext-link-type="uri" xlink:href="https://falkayn.medium.com/product-managements-three-thinking-hats-e1a3df00d9d1">https://falkayn.medium.com/product-managements-three-thinking-hats-e1a3df00d9d1</ext-link></mixed-citation></ref>
<ref id="r13"><mixed-citation>Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. <italic>Artificial Intelligence</italic>, <italic>267</italic>, 1&#x2013;38. <ext-link ext-link-type="doi" xlink:href="10.1016/j.artint.2018.07.007">https://doi.org/10.1016/j.artint.2018.07.007</ext-link></mixed-citation></ref>
<ref id="r14"><mixed-citation>Payette, P., &#x0026; Barnes, B. (2017). Teaching for critical thinking: Edward de Bono&#x2019;s six thinking hats. <italic>The National Teaching &#x0026; Learning Forum</italic>, <italic>26</italic>(3), 8&#x2013;10. <ext-link ext-link-type="doi" xlink:href="10.1002/ntlf.30110">https://doi.org/10.1002/ntlf.30110</ext-link></mixed-citation></ref>
<ref id="r15"><mixed-citation>Sangaran Kutty, V. S., &#x0026; Eileen, L. (2016) Hats off to de Bono: Innovatively enhancing presentation skills in the ESL classroom. <italic>The Journal of Macro Trends in Social Science</italic>, <italic>2</italic>(1), 22&#x2013;40. <ext-link ext-link-type="uri" xlink:href="https://macrojournals.com/assets/docs/3SS21Ma.14540029.pdf">https://macrojournals.com/assets/docs/3SS21Ma.14540029.pdf</ext-link></mixed-citation></ref>
<ref id="r16"><mixed-citation>The National Committee for Research Ethics in Science and Technology. (2019). <italic>Statement on research ethics in artificial intelligence</italic> (1st ed.). The Norwegian National Research Ethics Committees. <ext-link ext-link-type="uri" xlink:href="https://www.forskningsetikk.no/globalassets/dokumenter/4-publikasjoner-som-pdf/statement-on-research-ethics-in-artificial-intelligence.pdf">https://www.forskningsetikk.no/globalassets/dokumenter/4-publikasjoner-som-pdf/statement-on-research-ethics-in-artificial-intelligence.pdf</ext-link></mixed-citation></ref>
</ref-list>
</back>
</article>
