ChatGPT in Spotlight - Wittenborg Team Comments

04.06.2023

AI literacy must be part of higher education, argues Wittenborg team

In schools and universities of applied sciences around the world, including in the Netherlands, students and educators are being confronted with emerging AI technologies. While AI text-generation tools such as ChatGPT can arguably restrict intellectual development by serving as ersatz thought in place of genuine knowledge of the curriculum, AI can also be integrated into higher education in ways that benefit students. This is the argument of four members of the Wittenborg team: Zijian Wang, Senior Lecturer in AI in Business at Wittenborg and CEO of Honey Badger Technologies; Hanna Abdelwahab, Wittenborg's Education Support Administrator; Dr Dadi Chen, Wittenborg Associate Professor of Applied Sciences & Programme Coordinator; and Dr Rauf Abdul, Head of Business School.

Together, Abdelwahab, Chen and Abdul have researched the role of AI in education, producing the paper “Business students’ perceptions of Dutch higher educational institutions in preparing them for artificial intelligence work environments”. As the title suggests, the paper examines how relevant business students consider their curriculum to be to the modern working world in the context of AI literacy. Notably, the paper found that students did not believe their education sufficiently prepared them for the working world, as they felt they did not learn enough about AI tools, their ethics or their applications.

According to Dr Dadi Chen, educators must incorporate AI into their curricula to prepare students for a changing work landscape. “With the increasing attention on ChatGPT, our new lecturers, Dr Hind Albasry and Dr Cha-Hsuan Liu, and I want to continue the exploration of AI in higher education, especially the impact of AI on pedagogy and learning, and the cultivation of critical thinking against the background of AI and ChatGPT,” Chen says. He notes that AI tools have been a hot topic among the Wittenborg team. “We had a lot of discussion about it in the Education Board and Graduation & Examination Board (GEB) meetings. In my opinion, we as educators need to consider how we can prepare students for future jobs that integrate more and more AI. Beyond the necessary AI literacy, what is even more important is their critical thinking ability,” he argues. “In all my research methods classes, I have (re)introduced AI text analysis tools to students. They can use them to search for research ideas, or to analyse text materials in qualitative research. However,” Dr Chen warns, “I remind them to cross-check each of these actions against facts obtained from empirical studies and from their own experiences and observations. They need to understand the roles of AI and of themselves at work, and never surrender their own judgement to AI.”

Hanna Abdelwahab believes that AI holds opportunities for education. She also flags that by ignoring AI and its already noticeable impact, educators underserve their students, especially as ethical concerns surrounding AI technology and its applications continue to emerge. “The importance of AI in a higher education institution's curriculum – or at any level of education – cannot be overstated, because AI has already caused systemic changes to education systems globally,” she points out. “To thrive in an AI-enabled world and ecosystem, students must be taught the necessary skills and competencies, whether technical skills or soft skills. An even more pertinent issue is not to neglect the significance of teaching students the morality and ethics behind AI technologies. As AI takes on a bigger decision-making role around us, ethical concerns will start to mount,” Abdelwahab warns. “Undoubtedly, there is great promise in AI, but there is also potential for peril in our lives.”

Rauf Abdul also affirms that educators at Wittenborg are happy to embrace the “AI revolution,” but warns students and staff alike of potential ethical or legal issues in using software like ChatGPT. He is proud to say that, at present, there are no serious problems with students using AI-generated text, which he believes is a testament to Wittenborg's robust academic policy. “But in order to be ahead of the game, we would like to bring in additional measures and considerations whereby we would encourage the use of AI-based systems in a legal, ethical way which can help both faculty and students in improving the quality of their work,” he says.

“When it comes to higher education, there is the potential for misuse of AI, where a student or a faculty member could use these AI-based systems to generate reports or other material to submit as their own, because at this moment plagiarism detection software may not be able to detect all AI-generated text,” he laments. “At the same time, we think that this is a revolution. This is a kind of innovation. We would like to see faculty and students benefiting from these advanced systems, but in an ethical, fair, transparent and legal way. So, we would not tolerate any staff or student presenting output provided by AI-based software as their own report or own arguments,” he advises. “For that reason, we would like to discourage any unethical use in which they copy materials provided or produced by AI-based algorithms.”

However, he believes there is a “good possibility” that students and faculty can implement AI technology in certain instances, such as using it to support their research and learning in a manner that does not involve copying AI-generated text verbatim. He also warns that citing ChatGPT in research papers, as certain authors have controversially done, will not be tolerated. “We would say the standard requirements when it comes to referencing and citations should be applied. If we look at those requirements, this will not be possible. You cannot copy a paragraph or half a page or a page and say, ‘Oh, this text was generated from ChatGPT’.” Still, in certain cases, Abdul notes, AI-generated text may be attached as an appendix, but not included in the main assignment. Students who are considering doing this must, of course, discuss the matter with their professors directly.

AI detection and Wittenborg's Academic Misconduct Policy

At present, professors and lecturers at Wittenborg are beginning to use Turnitin's AI detection feature on students' assignments. Notably, Wittenborg does not discourage every use of AI by students and staff; however, AI must be used in an effective and ethical manner that is conducive to their academic and professional development. To be clear, copying and pasting AI-generated text into essays or assignments remains a violation of Wittenborg's Academic Misconduct Policy, as stated clearly in its Education and Examination Guide (EEG). Academic Misconduct is defined in the EEG as:

“Cheating – Using or attempting to use crib sheets, electronic sources, stolen exams, unauthorised study aids in an academic assignment, or copying or colluding with a fellow student in an effort to improve one's grade.” - EEG, Part 5b – Plagiarism Policy, Page 4.

When a student uses AI-generated text verbatim, they are committing an act of academic dishonesty: they are submitting work they did not produce themselves, and thereby fail to demonstrate any expertise of their own as a student. Instead, the student demonstrates a lack of integrity as an academic and a professional by submitting only a pretence of original thought and conceptual understanding. If a student's submission is thought to be AI-generated, they will receive a fail grade and be reported to the GEB for disciplinary action. According to the EEG, the penalty at Wittenborg for academic malpractice, including submitting AI-generated texts, can range from an official warning, a lowered or fail grade, or a required redo of the assignment, to being barred from taking an exam for up to a year or being expelled from the programme entirely.

Zijian Wang cautions teachers that, at present, AI detection tools are not infallible, and professors must keep an open mind when a student's submission is flagged as AI-generated. “If a student uses grammar-checking tools, such as Grammarly, and then copies the revised text, it will be recognised as AI-generated text,” Wang highlights. “If a student uses translation tools, such as Google Translate, to translate from his or her native language into English, it will be recognised as AI-generated text. If a student uses an auto-generated structure in MS Word, simple texts like 'Table of Contents' will be recognised as AI-generated text.” Wang continues, “These three scenarios have nothing to do with cheating, in my opinion, and I suspect there are more scenarios which could cause a misjudgement by Turnitin.” He recalls that he has recently dealt with multiple students whose work was unduly flagged as AI-generated for precisely such reasons.

WUP 04/06/2023
by Olivia Nelson
©WUAS Press