05 — Safeguarding and AI Ethics Appendix (v2)
Status: Stub. Boring and beautiful. Has to be substantive given the participant population (neurodivergent children).
Purpose
The original draft has one bullet about AI safety under Risks. That’s not enough. v2 needs an explicit appendix that:
- Maps every operational decision to the relevant clauses of the NDIS Code of Conduct
- Specifies the safeguarding posture for working with neurodivergent minors (the primary participant group)
- Defines the AI ethics framework for any AI-powered tool deployed to a participant
- Documents the data handling, consent, escalation, and exit story in detail sufficient for a plan manager to read and trust
Sections to write
1. NDIS Code of Conduct compliance
Map each of the seven elements of the Code (see synthesis/ndis-context.md) to operational practice in this service. Concrete: how does the practice respect privacy? How does it provide supports safely? What's the integrity story? How does it prevent and respond to harm?
Don't restate the Code; show how the practice implements it.
2. Worker screening and clearances
- NDIS Worker Screening Check status for James and Adrian (both should have it before launch)
- State/territory Working with Children Checks (NT for James, TAS for Adrian)
- Renewal and verification cadence
- Disclosure to participants and families
3. Consent and assent for minors
- Guardian consent: what is asked, when, how
- Participant assent: age-appropriate explanation of what the tool does, what data it captures, what happens if they don’t want it
- Right to withdraw without prejudice or financial penalty
- How consent is documented
- How assent is checked over time, not just at intake
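The consent-and-assent trail above could be captured as a structured record rather than loose notes. A minimal sketch, assuming a hypothetical internal register — the field names are illustrative, not drawn from any NDIS-mandated schema:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical consent/assent record. Field names are illustrative,
# not prescribed by the NDIS or any regulator.
@dataclass
class ConsentRecord:
    participant_id: str
    guardian_consent_date: date                 # when guardian consent was obtained
    assent_explained_how: str                   # the age-appropriate explanation used
    assent_given: bool                          # participant's own assent at intake
    assent_review_dates: list = field(default_factory=list)  # periodic re-checks
    withdrawn: bool = False                     # withdrawal carries no prejudice or penalty

    def record_assent_check(self, on: date, still_assents: bool) -> None:
        """Log a periodic assent re-check; withdrawal takes effect immediately."""
        self.assent_review_dates.append(on)
        if not still_assents:
            self.withdrawn = True
```

The point of the sketch is the last method: assent is a series of dated checks, not a one-off intake box, and a single "no" flips the record to withdrawn without any further conditions.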
4. Data handling
For every data type the service touches:
- Participant assessment notes — where stored, who accesses, retention
- Configuration files for deployed tools — same
- AI tool interaction logs — this is the highest-risk category, especially if conversations with neurodivergent minors are stored
- Video/audio recordings — the default policy is don't record; if an exception is made, document it
- Guardian access — how families view/correct/delete their child’s data
Reference: Australian Privacy Principles. NDIS providers handling personal information are bound by them.
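The per-data-type register the bullets describe could be expressed as a single lookup table. A sketch only — the storage locations, access lists, and retention periods below are placeholders, and the real values must come from the Australian Privacy Principles and NDIS record-keeping requirements, not this file:

```python
# Hypothetical data-handling register. Every value below is a placeholder
# pending advice — not a statement of the actual policy.
DATA_REGISTER = {
    "assessment_notes": {
        "storage": "encrypted practice records system",
        "access": ["James", "Adrian"],
        "retention_years": 7,      # placeholder — confirm against NDIS rules
    },
    "tool_config_files": {
        "storage": "version-controlled private repository",
        "access": ["Adrian"],
        "retention_years": 7,      # placeholder
    },
    "ai_interaction_logs": {       # highest-risk category
        "storage": "encrypted, access-logged store",
        "access": ["Adrian"],
        "retention_years": 1,      # placeholder — minimise by default
    },
    "video_audio": {
        "storage": None,           # default policy: not collected
        "access": [],
        "retention_years": 0,
    },
}

def who_can_access(data_type: str) -> list:
    """Return the access list for a data type; empty if unregistered."""
    return DATA_REGISTER.get(data_type, {}).get("access", [])
```

One design note: making `video_audio` an explicit entry with `storage: None` keeps the "we don't collect this" posture auditable instead of implicit.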
5. AI ethics framework for deployed tools
This is the section that has to be substantive, not a wave at “Failure First.”
For any AI-powered tool deployed to a participant:
- What the tool does and does not do (capability boundaries)
- What data the tool sees (and doesn’t see)
- What happens when the tool encounters something outside its design scope (escalation, fail-safe, who gets notified)
- Logging and review cadence — how often does Adrian review tool behaviour? What triggers an out-of-cycle review?
- Failure modes catalogued before deployment (the actual Failure First application)
- Specific commitments around AI companion patterns: no fostering of parasocial dependency, no replacement of human connection, no collection of conversational data beyond what’s necessary for the participant’s stated goal
This section should reference SPARK’s existing safety architecture where applicable (see https://spark.wedd.au — privacy filter, QA gate, quiet mode, school-hours suppression, salience thresholding) but distinguish between SPARK as an existing personal project and any deployed configuration as a separate, reviewed instance.
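The escalation and fail-safe behaviour described above can be sketched as a guard around the tool's response path. Everything here is an assumption for illustration: the `SCOPE_KEYWORDS` trigger set, the notifier, and the fallback message are invented stand-ins, not SPARK's privacy filter or QA gate:

```python
# Illustrative fail-safe wrapper — NOT SPARK's actual implementation.
# SCOPE_KEYWORDS and the notifier are placeholder assumptions.
SCOPE_KEYWORDS = {"hurt", "scared", "unsafe"}  # example out-of-scope triggers

def is_out_of_scope(message: str) -> bool:
    """Return True if the message falls outside the tool's design scope."""
    return bool(set(message.lower().split()) & SCOPE_KEYWORDS)

def respond(message: str, notify_reviewer) -> str:
    """Fail safe: out-of-scope input gets a fixed handover response,
    is flagged for human review, and never reaches the normal path."""
    if is_out_of_scope(message):
        notify_reviewer(message)   # e.g. queue for Adrian's out-of-cycle review
        return "Let's talk to a grown-up about that together."
    return generate_normal_response(message)

def generate_normal_response(message: str) -> str:
    return "OK!"  # stand-in for the tool's normal behaviour
```

The structural point the appendix needs to make is the ordering: the scope check runs before any generation, the human notification is unconditional on a trigger, and the fallback is a fixed handover to a person — the tool never tries to handle out-of-scope content itself.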
6. Escalation and exit
- What happens if a tool isn’t working for a participant — exit procedure
- What happens if safety concerns arise — escalation path
- What happens if a complaint is made — NDIS Code of Conduct complaints process (Adrian and James both need to know this cold)
- What happens at end of engagement — data retention, decommissioning of deployed tools, family handover
7. Out-of-scope declarations
Explicit list of what this service is NOT:
- Not a clinical mental health service
- Not crisis support
- Not a substitute for allied health assessment (OT, SLP)
- Not a 24/7 monitoring service
- Not behavioural intervention
- AI tools deployed are not therapeutic devices and make no clinical claims
Out-of-scope declarations protect the practice and the participant.
Open dependencies
- 04-roles-and-scope.md (whose Worker Screening covers what)
- 06-spark-deployment-model.md (the AI ethics framework partly depends on what SPARK-as-deployed actually is)
- External advice — at least one paid consultation with someone who understands NDIS safeguarding for neurodivergent minors
- Verification of Failure First framework as a documented methodology with artefacts that can be referenced