Generative AI and Graduate Education, Supervision, Mentoring

While much of the immediate conversation and instructional development work since the November 2022 onset of Generative AI across post-secondary education has understandably focused on teaching and learning writ large, undergraduate-level, coursework-based instruction has been the predominant area of inquiry. Graduate-level education - and specifically graduate supervision and mentoring - is a crucial research and pedagogical domain affected as well, particularly across universities and academia.

On June 22, 2023, the Faculty of Graduate Studies and Research (FGSR), together with the Office of the Provost, met with U of A graduate supervisors for a robust conversation about the impacts of Generative AI on graduate education. Four key topics arose, summaries of which are captured below: AI Literacy Development - For Faculty and Graduate Students; AI Educational Development - For Faculty and Graduate Students; Ethics, Equity, and Bias; Re-Imagining Graduate-Level Assessments. Please also see FGSR’s Resources for Faculty and Staff for more information and support.

AI Literacy Development - For Faculty and Graduate Students

As a demographic, faculty members who supervise and mentor graduate students are new to AI - many have little to no experience with the technology at all. And even though our students are comparatively much more familiar with it, they are asking questions too. So what does “AI literacy” look like as a foundational concept in graduate education, for faculty and for graduate students alike? All of us engaged in graduate-level research, teaching, learning, supervising, and mentoring need help, support, and partnerships with every dimension of literacy tied to Generative AI: how to use it, how to assess the credibility of its content, where the creative possibilities lie, and to what extent AI-generated answers actually apply to the questions we are asking and the frameworks and methodologies we are using.

New and ongoing education for faculty is very important here: there is a need and a desire for continuous programming, such as workshops and bootcamps, in tandem with one-time offerings. Faculty and students alike need to keep learning as we go: we really are in this together, and we need ongoing conversations both separately and jointly. At the most basic level, information sessions for faculty and for graduate students are an integral starting place for developing capacity and stamina with our AI literacies, and key foundational moments (such as Graduate Professionalization Seminars and New Instructor Orientations) need intentional developmental pieces, together with an inventory of resources to visit and revisit for the newest and best perspectives on versions, uses, approaches, and challenges of Generative AI.

This will help everyone - supervising and mentoring faculty together with their graduate students - to hone proper and responsible uses of Generative AI (rather than policing it), so that we can all move from information literacy to data literacy and use these tools critically and creatively as well as intelligently and ethically.

AI Educational Development - For Faculty and Graduate Students

As we build our capacities, staminas, and literacies with Generative AI at that foundational level, in and for graduate education, supervision, and mentoring, we will also need to link that foundation to, and advance, AI educational development for faculty and graduate students. It really is increasingly - and importantly - becoming a question of “How do we use this?” rather than the binary “Should we/shouldn’t we use this?”

Learning Communities across campus focused on Generative AI in graduate education, supervision, and mentoring would be valuable networks and resources where faculty could seek advice, collaborate on research, share challenges, stumbles, and celebrations, and simply talk about Generative AI and their graduate-level teaching, supervising, and mentoring. These could and should be both discipline- and department-specific as well as discipline-agnostic and campus-wide. Self-paced, self-directed learning about Generative AI will also be key: more formal access to educating - and continuing to educate and upskill - ourselves about Generative AI in general, scholarly, and pedagogical terms.

As faculty, graduate supervisors, and mentors figure out how to proceed and how to implement Generative AI in graduate-level research, teaching, supervision, and mentoring, we absolutely need to engage our students in setting those rules, expectations, and guidelines with us. Graduate students’ voices, perspectives, and lived experiences must inform these discussions, and graduate student representatives must sit on the committees - from the micro to the macro level - doing and establishing this work. This implies a paradigm shift: graduate supervisors and mentors learning and training together with the students they supervise and mentor, so that the complexities of AI can be understood jointly, from both perspectives at once.

Ethics, Equity, and Bias

Generative AI tools are not concerned with truth, and they have been shown to fabricate (or, in tech jargon, “hallucinate”) scholarly quotations, scholarly citations, and even whole scholars. As scholars ourselves, we need to engage in this conversation with the emerging scholars we supervise and mentor so that they understand this. It is the human dimension that they can and must bring to what they are researching, writing, creating, and using; and we all need to stand back and appreciate that while these tools have no concern for truth, as academics, we do.

And with truth comes bias - biases of all kinds. From a technological standpoint, the bias in Generative AI runs both ways: from us as users, in the prompts we write and input, and from the AI-generated prose that comes back to us. The dominance of English-language fluency, together with the heteropatriarchal system of privilege in academia, yields data that, without careful critique and metacognition, heavily biases results toward white, Western, male, highly-cited scholars, without identifying or offering new trends, voices, and directions.

Issues of equity also arise from the differences in capability between free versions of Generative AI tools and their paid, for-profit counterparts. Will the University provide a paid version for faculty and students to use? What are the affordances, constraints, and drawbacks of either choice? And how does a world with AI reframe our ideas of plagiarism - and of what now constitutes not just plagiarism but academic integrity, cheating, and (in)equity and (in)justice in academic citizenship - for all of us? There are many, many more questions here than answers and solutions.

Re-Imagining Graduate-Level Assessments

For graduate-level research, teaching, supervising, and mentoring, how we assess - and what the genres of assessment are - for demonstrating what our graduate students have learned and know is critical. A key attribute of graduate education is the ability to critique and criticize content and methods, and this applies to Generative AI as well. Not just cognition but metacognition is more crucial to emerging scholars and scholarship today than ever.

We need more comprehensive discussions about how original ideas get expressed in new ways across graduate education. Generative AI can give us ideas, but how our own novel ideas get expressed - in scholarly prose, creative writing, music, images, data charts, and more - may feel changed, or threatened with change. Yet the complex learning in graduate-level research and teaching happens during the process of writing on the way to the product of writing: “writing” as verb and action together with (not versus) “writing” as noun, result, and output. Generative AI has the potential to foreclose on some, but not all, of this, and we can avoid short-circuiting it by re-thinking assignments and re-imagining assessments so that they are not solely text- or prose-based.

Alternative assessments include oral assessments (as at defenses) together with multimodal assessments - “going back” to pen and paper, perhaps, but not entirely: for which key moments, projects, and pieces, at which times, and for which purposes? They also include alternative and multimodal instruction and modeling. As faculty engaged in graduate education, supervision, and mentoring, how can we model alternative and multimodal approaches in our own practices and pedagogies? And all of this authentically: authentic assessment, authentic instruction, authentic supervision, mentoring, training, and coaching.

And a key part of all of this is considering - and critically, creatively, and affectively engaging with - the lifetime academic development of our students, from undergraduate to graduate, within that larger, longer curriculum of transitions from Bachelor’s to Master’s to Doctorate. The degree-level attributes, skills, and knowledge our graduate students bring from their undergraduate degrees (and that our PhD students bring from their Master’s degrees) are not the same in this moment, and will not be the same with each year and cohort ahead.