Vice-Chancellor’s Welcome Email Returns 100% AI Prediction

Message “was not written with AI, but rather checked using AI software”

Critic Te Ārohi recently received a news tip from an anonymous student who had run Vice-Chancellor Grant Robertson’s welcome email through GPTZero (Model 4.2b). The message was sent to all students at the start of the year under the subject line “Student update: 25 February 2026”. The model returned a 100% AI-generated prediction on the email. “The Vice-Chancellor maintains personal, human oversight in all his official communications,” a University spokesperson told Critic. “The AI Governance Policy expects the same of all University staff.”

The student explained to Critic that they ran the message through an AI detector after noticing it “contained several linguistic structures highly associated with AI use.” According to the student, the welcome message contained the classic “it's not X, it's Y” structure commonly used by generative AI language models, as well as em dashes. For example, drawn straight from G Rob’s welcome: "The people who live around you aren’t just locals — they’re your neighbours.”

The prediction model, GPTZero, works by analysing text for burstiness (variation in sentence structure) and perplexity (how predictable the text is to a language model). Real-world, independent testing of GPTZero shows it has about 90% overall accuracy, with lower accuracy on mixed AI/human content, humanised text and short texts. Therefore, an AI-use prediction is not necessarily AI-use confirmation.
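GPTZero's actual model is proprietary, but the two signals it describes can be illustrated with a minimal sketch: sentence-length variation as a crude proxy for burstiness, and a toy unigram-model perplexity (real detectors use large neural language models, not unigram counts). All function names here are hypothetical, for illustration only.

```python
import math
import re
from collections import Counter

def burstiness(text: str) -> float:
    """Crude burstiness proxy: standard deviation of sentence
    lengths (in words). Human prose tends to vary more."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    return math.sqrt(sum((n - mean) ** 2 for n in lengths) / len(lengths))

def unigram_perplexity(text: str, corpus: str) -> float:
    """Toy perplexity: how surprising each word of `text` is under a
    unigram model estimated from `corpus`, with add-one smoothing.
    Lower values mean the text is more predictable to the model."""
    counts = Counter(corpus.lower().split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 for unseen words
    words = text.lower().split()
    log_prob = 0.0
    for w in words:
        p = (counts.get(w, 0) + 1) / (total + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / len(words))

# Uniform, repetitive sentences score zero on this burstiness proxy;
# sentences of very different lengths score higher.
print(burstiness("One two. One two. One two."))
print(burstiness("Hi. This sentence is rather long indeed."))
```

A real detector would replace the unigram model with a large language model's token probabilities and calibrate both signals on labelled human and AI text; this sketch only shows what each quantity measures.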

After getting the 100% AI-generated prediction, the student filed an Official Information Act (OIA) request with the University. As the University of Otago is a government entity, it is legally obliged to provide (most) requested information within 20 working days. The ākonga asked whether AI was used in drafting the message, and requested the transcripts of the AI conversation and the University’s position on this type of AI use. The OIA response was subsequently shown to Critic.

“The Vice-Chancellor’s message was drafted by a member of the University’s Communications team, and an approved internal AI system was used to suggest improvements, which were then accepted or rejected by the Communications team member,” the OIA states. “The message was subsequently amended and approved by the Vice-Chancellor.”

According to a University spokesperson, the approved internal AI system in this case was Microsoft Copilot. “It is approved because it operates within the University’s existing information security and data governance settings.” 

In terms of policies which govern AI use within the University, the OIA noted the AI Governance Policy, which states that “AI systems must be used to augment human capability, with human judgment and oversight remaining central.” Additionally, the OIA noted the University’s Staff Use of AI Systems Policy and Procedures, which will be referenced in the AI Governance Policy and are in the process of being finalised.

This Staff Use Policy is expected to state that “AI System use must be disclosed where it significantly contributes to a work product, or where required by research ethics approvals, funding bodies or publishers, or where stakeholders would have a reasonable expectation of disclosure [...].” A University spokesperson told Critic that the Staff Use policy would be finalised “[w]ithin weeks”.

Students tend to be considered stakeholders of tertiary institutions. At the University of Otago, the University Council is the governing body, which has a membership consisting of elected, appointed and co-opted members representing key stakeholders, including “alumni, students and staff.” 

When asked if the welcome message’s use of AI, addressed to all stakeholder students, would be a future candidate for that “reasonable expectation” of AI use disclosure once the AI Staff Use policy was finalised, the short answer was ‘no’. A spokesperson for the University told Critic that “[u]sing AI to refine and improve a personal communication with full personal, human oversight would not give rise to a reasonable expectation of disclosure.” 

In this case with the welcome email, the email was originally drafted by a Communications staff member with an internal AI tool used “solely to suggest refinements with editorial judgement retained throughout.” The email was then reviewed and amended by the Vice-Chancellor. 

Despite this, the anonymous student who sent in the tip was not pleased. “It seems like the [Vice-Chancellor] who makes $700k a year to engage with the university community, outsourcing his once-per-semester message to AI, fits the brief.” When approached with this comment, a spokesperson told Critic that this argument “misrepresents the facts”.

“The Vice-Chancellor is constantly involved in engagement with students and staff. The message is only one example of this.” The spokesperson reiterated that the welcome message “was not written with AI, but rather checked using AI software”, with the Vice-Chancellor making subsequent amendments. “For clarity, the Vice-Chancellor’s remuneration is set by the Public Service Commission and is not $700,000.”

This article first appeared in Issue 7, 2026.
Posted 4:34pm Saturday 11th April 2026 by Hanna Varrs.