
If Social Media is Restricted, Why Not Generative AI?

Source: MEDIA_INDONESIA | Translated from Indonesian | Regulation

From 28 March 2026, Indonesia enters a new chapter in protecting children in digital spaces. Social media accounts belonging to children under 16 on high-risk platforms will be progressively deactivated under Ministerial Regulation No. 9/2026, which implements Government Regulation No. 17/2025.

This step stems from the recognition that the burden of exposure to harmful content, cyberbullying, online fraud, and digital addiction cannot be placed entirely on families. After nearly three decades of social media permeating society, the government has finally acknowledged the dangers it poses to children. Yet another, more advanced technology is rapidly infiltrating classrooms and study desks: Generative AI (GenAI). The question is now urgent: should the government also regulate this technology, which equally threatens Indonesia's future generation?

SWIFT STATE ACTION

The world demonstrates how quickly governments can move to prevent technologies with demonstrable negative impacts. Australia, for instance, imposed social media restrictions for users under 16 on 10 December 2025, requiring platforms to ensure children cannot create or maintain accounts. Indonesia now follows similar policy directions with its own context and mechanisms. If social media is discussed as technology threatening society, particularly young people in formal education from primary to secondary school, should not GenAI likewise be discussed as a new, disruptive, dangerous technology?

PERSONAL TUTOR AND POISON FOR THE MIND

GenAI undeniably helps students develop ideas, understand difficult concepts, and receive rapid feedback—even transforming into a personal tutor. GenAI's strength mirrors human cognitive work: constructing arguments, gathering evidence, checking logic, and drawing conclusions. This very strength is what causes concern.

Education is not merely seeking answers and transferring them to worksheets; it is a process of developing critical reasoning habits. GenAI dependence creates not only lazy thinkers but epistemic dependence. Students no longer brainstorm independently, no longer independently verify data, and lose the habit of drawing conclusions through reasoning, instead accepting GenAI output as final answers.

A systematic review of ChatGPT by Melisa et al. (2025) focusing on higher education contexts showed that excessive ChatGPT use can hinder motivation for self-reflection and critical evaluation of information—both core components of critical thinking. This aligns with another systematic review highlighting risks of over-reliance on AI dialogue systems: when users tend to accept AI output without testing it, analytical reasoning and decision-making abilities erode (Zhai et al., 2024).

Meanwhile, Hassen (2025) warns of cognitive offloading dangers, where thinking burden shifts to machines. The machine thinks whilst the human remains passive. This should concern education because GenAI use in classrooms contradicts educational principles valuing critical reasoning and intellectual independence.

We certainly do not want an education system whose output is ever more sophisticated GenAI, refined through massive use, whilst students grow ever more passive from thinking less often. We want students to think, with GenAI helping them think—not the reverse. How, then, should the government address GenAI?

TOTAL BANS ARE NOT THE ANSWER

Considering that GenAI also offers benefits, total prohibition is an extreme response. Banning AI in education risks destroying pedagogical opportunities that offer real learning efficiencies. A ban would also widen the digital divide: whilst others harness AI to advance their lives, students would risk technological lag and alienation from an AI-saturated society. Government deliberation should therefore focus on the permissible degree of AI involvement.

FOUR SCHOOL POLICIES

First, national regulation on GenAI use in schools is needed, establishing AI as a learning partner, not a replacement for thinking. Regulation is necessary precisely because, alongside its benefits, GenAI threatens critical reasoning. For instance, GenAI might serve as a brainstorming partner, with students then required to develop reflective reasoning on the topic or analyse the data independently.

Second, schools need transparency disclaimers when learning involves AI—for example, students disclosing prompts or GenAI-sourced arguments in assignments. Transparency enables teachers to monitor student learning.

Third, regulations are needed that delay GenAI access at certain education levels. For primary school, the government might consider strict restrictions—not because children should not use technology, but because this phase builds fundamental habits of reading, writing, and reasoning. These should be formed traditionally, through teacher guidance and repetitive practice.

Fourth, government should promote assessment designs more resistant to GenAI substitution—shifting evaluation from written assignments to oral presentations, class debates, or contextual tasks requiring local experience. These formats make learning participatory and resist machine replacement.

PROACTIVE, NOT REACTIVE STANCE

Governments should adopt proactive rather than reactive positions towards technology affecting education quality and national intellectual capacity.
