
The age of Artificial Intelligence (AI) is very much here. The term "generative AI" is now commonplace, with the public fascinated by how AI can actively produce content such as written and audio creations. Indeed, the world is said to be moving towards Artificial General Intelligence (AGI), whereby machines would match and even outdo human intelligence. Fittingly, AI's relationship with children (persons under 18 years) invites both reflection and precaution.
On the one hand, AI can bring great benefits, building on the strengths of existing digitalisation. It can be a useful educational tool, helping, among others, children who face learning difficulties or disabilities. It is a technology of connectivity, facilitating communication and the dissemination of information. It can be an instrument of leisure, used to invent games. It can promote human efficiency, for instance by handling repetitive tasks in the medical field.
On the other hand, AI also brings risks. It can be a tool of sexual abuse and exploitation. It can be a technology of alienation, used for bullying, hate speech, discrimination and violence. It lends itself to the distortion and manipulation of information, such as fakes and scams, propaganda and surveillance. It can be an instrument of stress, fuelling addiction and superficial self-validation. And it is emerging as an instrument of human subjection and dejection, especially when and where it controls human lives, perhaps absolutely.
How, then, will the world community handle that ambivalence? The international guiding framework is the Convention on the Rights of the Child and its General Comment No. 25 (2021), which covers children's rights in relation to the digital environment, with an emphasis on child protection.
In reality, implementation is open to a variety of orientations, bearing in mind that both AI and the responses to it are in a state of flux. On one front, a general approach contrasts with a more specific approach to handling the relationship between AI and children. The former is exemplified by various guidelines of a general nature, such as those that protect children's privacy and safety and promote AI transparency, especially to help explain the pros and cons of AI to children.
The more specific approach is to target particular sectors for action. Twenty-five years ago, the Children's Online Privacy Protection Act of the US offered a preview: it set a minimum-age condition whereby children under 13 cannot themselves consent to the collection of their personal data, requiring parental consent instead. In 2025, California opted for a further intervention. Its recent law on patient communications states that healthcare facilities using AI must attach clear disclaimers to AI-generated content, and the option of contacting a human healthcare provider must remain available.

On another front, there is a contrast between ethical guidelines of a persuasive nature on AI utilisation and the prescriptive approach of binding rules with consequential accountability. The ethical approach has emerged from some international agencies, highlighting basic principles such as "Do No Harm", safety and security, privacy and data protection, responsibility and accountability, and transparency and explainability of AI functions.
The prime example of the prescriptive approach is the European Union (EU)'s AI Act, whose prohibitions took effect in 2025. There is a list of prohibited practices. Social scoring, whereby data might be used to discriminate against people, is forbidden. Subliminal targeting of children's emotions, as a form of manipulation, is proscribed. Real-time remote biometric identification for surveillance purposes is not allowed, although there is some leeway in regard to law enforcement and national security. For systems posing lesser risks, the business sector is called upon to adopt codes of conduct as a form of self-regulation, linked to the EU supervisory system as a whole. Violations can attract massive fines.
Globally, certain baselines already apply. Where content is illegal, for instance child sexual abuse material, national laws already prohibit such practices, and those laws apply equally to AI-related actions. However, there may be differences over whether children appearing in AI-generated content are real children or merely digitally generated ones. The issue is not settled internationally, although child protection groups prefer to prohibit all such images of children without having to prove that real children were involved.
From another dimension, there is the issue of how to deal with harmful content that is not illegal. For example, the mere fact that X hates Y is not necessarily illegal under international or national law. Other responses may thus be required. At present, the digital industry, especially developers and deployers, has already adopted some self-regulatory tools to moderate content, at times through filtering. For instance, many platforms have codes against homophobic messages and delete such content even where the law does not prohibit it. The same approach might also cover various forms of bullying and grooming of children, which might otherwise lead to discrimination or violence.
The key lies with digital and AI literacy, so that the public, especially children, parents and teachers, is able to enjoy the benefits of technology safely, securely, "smartly" and sustainably. The AI industry can help by ensuring that its members are AI-literate, assessing risks as part of due diligence and mitigating them, with guardrails in place that balance freedom of expression and the protection of children's rights. In essence, there is no substitute for an educated and literate public with a discerning, critically analytical mind, equipped with the cognitive and affective means to protect itself from transgressions.
Families must have options for a "digital detox". This would enable parents to work with children to safeguard some spaces at home that are free from technology. There need to be periods of human interaction without technology, including during leisure time. Humane activities, such as pro bono help for disadvantaged groups, need to be nurtured to generate the warmth of empathy, which no technology can replace.
Hence, the community needs "Top-Tips for Digital Detox" or "TT-4-DD" now!