Aql vs. AI: The screen-time debate we are still getting wrong
The headlines are bad. Not bad in a sensational, click-worthy way; bad in the way that reveals how much harm has already been done, quietly, over decades. Tech companies leveraged internal social governance programs to "uplift" communities while simultaneously marketing their products to young children. Edtech companies sold student data. Educators were handed devices but never taught about the dangers of early online exposure or how to help a child delete a digital footprint that will follow them forever.
Digital trash is never truly deleted. Yet we never taught online executive functioning skills the way we taught children to manage a paper planner.
"The problem isn't just how we use technology. We genuinely need to use it less, especially with children."
The prevailing response to all of this has been a familiar one: nuance. People insist it is not about less technology, just better technology use. I disagree. I think we do need to use technology less, particularly in the early years. The question is not only how we hand children a screen, but whether we should be handing it to them at all, at that moment in their development.
How we have been responding and why it is not working
Our collective response has been scattershot at best. Consider what we are doing simultaneously right now:
Banning phones in schools
Mandating 1:1 devices as the standard across every grade, K through 12
Moving standardized tests exclusively to digital platforms
Deploying AI detectors to flag student work as cheating
These policies do not form a coherent philosophy. They are reactions — each one produced in isolation, often in response to the last crisis, without a governing vision of what we actually want children's relationship with technology to look like.
Some educators currently in classrooms earned their degrees when everything was done with paper and pen. We cannot expect them to intuitively understand the harms of early digital presence or to navigate deletion of a student's online profile when those concepts simply were not part of their training. The gap in preparation is real, and it is not their fault.
The Gen X and Millennial failures should not become Gen Z's burden
There is something deeply unfair about letting the poor planning of one generation become the liability of the next. The fact that educators, policymakers, and tech companies were underprepared does not mean we have to keep making the same mistakes, nor does it mean we should overcorrect by throwing out everything digital entirely.
We can be strategic. We do not have to throw out the baby with the bathwater.
"The first time a child learns to write should be on paper. They should not sit in front of a computer until they can form full sentences."
Think about what previous generations took for granted: entering middle school meant being explicitly taught how to use an assignment notebook, how to build good homework habits, and how to protect your sleep. These were considered foundational skills. When digital natives arrived in middle school, those same skills were never transferred into the digital environment. Nobody taught them the equivalent of a planner — they were just handed a device.
What a better path actually looks like
Sequence matters
Introduce technology in developmentally appropriate stages. Writing begins on paper. Reading begins with physical books. The transition to screens should happen only once a child has built the cognitive scaffolding that screens will later support — not replace.
Teach real problems, earlier
Get students into internships and real-world problem solving earlier than we do now. Let them use the digital world for what it is actually good at: research, co-design, collaboration. Grade them on the journey, not just the final product. Learning how to learn, digitally, is a skill in itself.
Set structural guardrails
Establish clear limits on how many days per week each subject can use screens, so that total daily screen time stays within a healthy threshold. This requires schools to actually track that data, which means collaborating with technology teams to build the infrastructure to do it. It is doable. We just have not treated it as a priority.
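To make that concrete, here is a minimal sketch of what such tracking could look like in software. Everything in it is an assumption for illustration: the subject names, the days-per-week caps, the 60-minute daily threshold, and the ScreenTimeLog class are hypothetical, not drawn from any standard or existing product.

```python
from collections import defaultdict
from datetime import date

# Hypothetical policy values: every number and subject name here is an
# assumption for illustration, not a recommendation from any standard.
MAX_SCREEN_DAYS_PER_WEEK = {"math": 2, "reading": 1, "science": 2}
DAILY_SCREEN_MINUTES_CAP = 60


class ScreenTimeLog:
    """Per-student, per-subject screen-session log a school could query."""

    def __init__(self) -> None:
        # (student, subject) -> set of dates on which screens were used
        self.days = defaultdict(set)
        # (student, date) -> total minutes across all subjects that day
        self.minutes = defaultdict(int)

    def record(self, student: str, subject: str, day: date, minutes: int) -> None:
        self.days[(student, subject)].add(day)
        self.minutes[(student, day)] += minutes

    def over_weekly_cap(self, student: str, subject: str, week: set) -> bool:
        # How many days this week did this subject put a screen in front
        # of this student, versus the days-per-week limit for the subject?
        used = len(self.days[(student, subject)] & week)
        return used > MAX_SCREEN_DAYS_PER_WEEK.get(subject, 0)

    def over_daily_cap(self, student: str, day: date) -> bool:
        return self.minutes[(student, day)] > DAILY_SCREEN_MINUTES_CAP
```

In practice the records would come from device-management logs rather than manual entry. The point of the sketch is the shift it represents: the caps become a question a principal can actually query, not an aspiration.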
Humans vet everything
No content should be placed in front of students that has not been screened by a person. AI-generated content should be treated like a sandwich: a human inputs the request, AI produces something, and a human reviews it before it ever reaches a child. The human is present on both ends. That is not optional.
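If the sandwich were enforced in software, it could be as blunt as a gate that refuses to release unreviewed content. The sketch below is illustrative only, under assumed names: Draft, approve, and release_to_students are hypothetical, and generate is a placeholder for whatever AI service a school actually uses, not a real API.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Draft:
    prompt_author: str                  # the human who framed the request
    text: str                           # the AI-produced draft
    reviewed_by: Optional[str] = None   # set only by a human reviewer


def generate(prompt_author: str, prompt: str) -> Draft:
    ai_text = "..."  # stand-in for a call to the school's AI service
    return Draft(prompt_author=prompt_author, text=ai_text)


def approve(draft: Draft, reviewer: str) -> Draft:
    # A named person takes responsibility for what the child will see.
    draft.reviewed_by = reviewer
    return draft


def release_to_students(draft: Draft) -> str:
    # The gate: unreviewed AI output never reaches a child.
    if draft.reviewed_by is None:
        raise PermissionError("blocked: AI content has not been human-reviewed")
    return draft.text
```

The design choice worth noticing is that the human review is not a flag on a dashboard but a precondition for release: the human is present on both ends of the workflow, exactly as the sandwich demands.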
Start with the why
Before any technology enters a classroom, there should be a simple litmus test. Ask: Is this device being used as a babysitter? Will it genuinely ease the cognitive load for this student, right now? Or is it there purely to increase engagement metrics?
If the honest answer is babysitting or engagement metrics, do not use the technology. Engagement for its own sake is not education. Ease of management is not a pedagogical goal.
Aql, the Arabic concept of natural intelligence, reason, and wisdom, reminds us that the mind is not a machine to be optimized. Children are not users to be onboarded. When we design their education, we should be designing toward human flourishing, reaching for a screen only when it genuinely serves that goal.
The tools we give children shape the minds they develop. Let us be deliberate about that — one decision, one classroom, one policy at a time.
Author: Farheen Beg Mohammad, Chief Education Consultant, ILMA Consulting