When Artificial Intelligence Enters the Campus
There is a major change underway on our campuses, but it arrives without much fanfare. It does not always manifest in the form of new buildings, new laboratories, or new curricula.
The change emerges more quietly: on students’ laptop screens, at lecturers’ desks, in administrative offices, and gradually within decision-making systems. That change is Artificial Intelligence (AI).
Today, AI is no longer merely a seminar topic or a futuristic discussion theme. It has become part of everyday academic life. Students use it to summarise readings, explain concepts, generate ideas, or draft initial outlines.
Lecturers use it to prepare materials, create questions, or design feedback. At the institutional level, AI is beginning to be used to detect learning patterns, speed up services, and help interpret institutional data.
The scale is considerable. The HEPI 2025 survey of 1,041 full-time undergraduate students found that 92 per cent of respondents had used at least one AI tool, and that 88 per cent had used generative AI such as ChatGPT to assist with assessments. At almost the same time, the UNESCO 2025 survey, which gathered 400 responses from academic networks in 90 countries, indicated that nine out of ten respondents had used AI in their professional work.
However, only 19 per cent stated that their institutions already had formal AI policies, while 42 per cent were still in the process of drafting guidelines. In other words, AI adoption is moving very quickly, while its governance often lags behind. This, in my view, is where the most important questions begin.
In higher education, the issue does not stop at what AI can do. The more fundamental question is: what must be preserved when AI begins to be used widely?
That question is important because a campus is not merely a place where technology is used. A campus is a space where reasoning is formed, integrity is tested, thinking habits are trained, and intellectual responsibility is cultivated. Higher education is not only a place where knowledge is produced, but also where values are nurtured.
Therefore, AI on campus should be positioned as a tool that expands human capabilities, not one that replaces human responsibility. It can help lecturers identify patterns of student learning difficulties.
It can help researchers accelerate initial explorations. It can also help campus leaders understand data more quickly. However, final decisions, moral judgements, and academic responsibility must remain in human hands.
Technology can calculate, but it has no conscience. It can provide answers, but it does not bear the ethical consequences of those answers. At that point, humans must not step back.
The problem is that AI is never entirely neutral. It learns from data, and data can be biased. It works with models, and models can be flawed. It also often produces outputs that sound convincing, even when the underlying reasoning is fragile. Therefore, the risks of AI on campus are never purely technical. They can extend into areas of ethics, justice, privacy, accountability, learning quality, and even trust in the institution.
Imagine some seemingly simple scenarios. A system helps read students’ essays but is subtly more favourable to certain writing styles. A predictive model labels a group of students as “at risk,” and that label gradually influences how the institution views them.
A lecturer uses AI to assist with grading but cannot explain the logic behind the results. Students may grow accustomed to instant answers without experiencing enough of the intellectual struggle that is the essence of education.
These concerns are not empty speculation. In the same UNESCO survey, one in four respondents stated that their universities had faced ethical issues related to AI, from student dependence on AI to authorship disputes and bias in research.
This is where the problem lies: the main issue is not the existence of AI itself, but the possibility that the speed of technology adoption outpaces governance maturity.
Campuses, therefore, cannot settle for merely having access to technology; they must also have direction in how they use it. There must be clarity about the purposes for which AI is used, the contexts in which it is appropriate, which data may be processed, who is responsible when the system errs, and how its use is monitored over time.
But governance is not complete once guidelines are written. Documents are important, but culture is far more decisive. Lecturers, students, and support staff need adequate AI literacy: understanding the technology’s limitations, checking its outputs critically as a matter of habit, staying sensitive to ethics, and handling data with discipline. Without that, a campus may have good rules on paper yet remain fragile in daily practice.
At this point, data discipline becomes extremely important. In the AI era, data is not merely raw material for technology, but also an ethical issue.
Campuses need to distinguish clearly which data is safe to process, which is sensitive, which channels may be used, who is entitled to access it, and how usage traces are recorded. Once student data, research documents, or institutional information enters an insecure system, what is at stake is not only privacy, but also trust.
Another often overlooked aspect is justice. Justice in AI is not only about unbiased algorithms, but also about who has access, who is left behind, and whether this technology expands learning opportunities or deepens inequalities. Higher education should be a space for social mobility. Therefore, it must not allow new technology to widen old divides.
In the end