Shadow Infrastructures: ChatGPT and Child Protection Practice in Australia

Details

Generative Artificial Intelligence (GenAI) tools such as ChatGPT are rapidly entering workplaces, yet their use in frontline human services remains largely undocumented and weakly regulated. Drawing on an investigation into a child protection worker’s informal use of ChatGPT, this Forum article provides insight into how GenAI is already shaping frontline statutory practice in Australia. The case illustrates the emergence of shadow infrastructures: unofficial technological practices that develop beyond organisational oversight. Findings show how a worker used ChatGPT to streamline administrative tasks and assist with professional writing in the context of heavy workloads and system strain. However, this undisclosed use introduced significant risks. The case exemplifies broader policy challenges: while GenAI may offer efficiency gains and cognitive support in high-pressure domains such as child protection, undisclosed use risks distorting statutory risk assessments, weakening professional discretion, and displacing clear lines of human responsibility in decisions with consequences for children and families. The article argues that Australian social policy must move beyond reactive bans towards proactive, co-designed governance frameworks developed with workers, service users, and professional bodies. Such an approach is essential to ensure that GenAI strengthens, rather than undermines, the ethical integrity of child protection and human services practice.

keywords: generative artificial intelligence; ChatGPT; child protection; social policy; data governance; professional discretion; human services; Australia

  • with: Joel McGregor (lead) and Caleb Lloyd
  • year: October 2025 — ongoing