# 06 — SPARK Deployment Model (v2)
Status: Stub. This is the single biggest unresolved technical question in the proposal; until it is answered, several other v2 sections can’t be finalised.
## The question
The original proposal cites SPARK as a “direct precedent for non-coercive AI companion design for neurodivergent children.” SPARK is real and public (https://spark.wedd.au). What’s not yet defined is what a SPARK-derived deployment for a paying NDIS participant actually is.
## What SPARK is, today
A working personal project:
- Hardware: SunFounder PiCar-X + Raspberry Pi 4 + Hailo accelerator (separate Pi 5) + HifiBerry DAC + Frigate cameras + Home Assistant
- Ollama on M1 over LAN
- Software: 10 systemd services sharing a session.json whiteboard, three-tier LLM fallback (Claude Haiku → Ollama LAN → Ollama local), three personas (SPARK / GREMLIN / VIXEN), 450 automated tests
- Built with Obi (named co-designer, AuDHD, age-appropriate framing)
- Persona prompt is calibrated to AuDHD profile: declarative language, transition warnings, quiet mode during meltdowns, RSD-aware, monotropism-aware
- Privacy posture: camera streams stay on LAN; only mood and last thought are publicly visible; PIN auth for tool execution; rate limiting; atomic writes
- Operational cadence: 60-second awareness loop, 5-minute reflection, 2-minute spontaneous-speech cooldown, school-hours and bedtime suppression
- Self-evolution: SPARK introspects and proposes code changes as PRs
This is a one-of-one. It exists in one house, with one child, who is the co-designer.
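Three of the mechanisms above are concrete enough to sketch: the three-tier LLM fallback, the spontaneous-speech suppression windows, and the atomic whiteboard write. The sketch below is illustrative only — every URL, payload shape, and default value is an assumption for discussion, not SPARK’s actual code (the real Claude API in particular has a different request format):

```python
import json
import os
import tempfile
import urllib.request

# Hypothetical tier list; endpoints and payload shape are placeholders.
FALLBACK_CHAIN = [
    ("claude-haiku", "https://cloud.example/v1/generate"),
    ("ollama-lan", "http://192.168.1.50:11434/api/generate"),
    ("ollama-local", "http://127.0.0.1:11434/api/generate"),
]

def generate(prompt: str, timeout: float = 10.0) -> str:
    """Try each tier in order; fall through to the next on any I/O error."""
    last_error = None
    for name, url in FALLBACK_CHAIN:
        try:
            body = json.dumps({"prompt": prompt}).encode()
            req = urllib.request.Request(
                url, data=body, headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return json.load(resp).get("response", "")
        except OSError as exc:  # URLError and socket.timeout are OSErrors
            last_error = exc    # tier unreachable or slow; try the next one
    raise RuntimeError(f"all LLM tiers failed: {last_error!r}")

def speech_allowed(hour: int, school=(9, 15), bedtime=(20, 7)) -> bool:
    """Suppress spontaneous speech during school hours and overnight."""
    if school[0] <= hour < school[1]:
        return False
    start, end = bedtime
    return not (hour >= start or hour < end)  # bedtime wraps past midnight

def write_whiteboard(state: dict, path: str = "session.json") -> None:
    """Atomic write: dump to a temp file in the same directory, fsync,
    then rename over the target, so readers never see a half-written file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic on POSIX filesystems
    except BaseException:
        os.remove(tmp)
        raise
```

The rename-over-target pattern is what makes concurrent services safe sharing one whiteboard file: a reader either sees the old complete state or the new complete state, never a partial write.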
## What “deployment” could mean
Three plausible models, with very different implications:
### Model A: Bespoke per participant
Adrian designs and builds a custom SPARK-derived configuration for each participant. Hardware sourced and assembled per engagement. Persona prompt adapted to the specific child’s profile. Months of iterative development.
- Pros: Most faithful to the SPARK philosophy of designing the tool around the person
- Cons: Catastrophically incompatible with Adrian’s 5–10 hr/week ceiling. Each deployment takes weeks of full-time work.
- Verdict: Not viable as the default offering. Could be a premium tier offered rarely.
### Model B: Configured variants of a base architecture
Adrian maintains a SPARK base architecture. For each participant, Adrian configures a variant — adjusts persona, sensors, data scope, review cadence — but doesn’t rebuild. Hardware kit is standardised. Configuration time: maybe 5–10 hours per participant.
- Pros: Fits Adrian’s hours. Honest about what’s bespoke (the configuration) versus what’s productised (the architecture).
- Cons: Requires SPARK to be productised first — a real software project that Adrian would need to lead. The proposal can’t credibly claim this exists today; it’s a roadmap item.
- Verdict: Plausible 12–18 month direction. Not the v1 offering.
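To make Model B concrete: a “configured variant” could be nothing more than a small per-participant config over an unchanged base. A sketch only — every field name below is an assumption for discussion, not an existing schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ParticipantVariant:
    """Hypothetical per-participant configuration; the base architecture
    (services, whiteboard, fallback chain, tests) stays identical."""
    persona_prompt: str                        # adapted to the child's profile
    sensors: tuple = ("camera_lan_only",)      # which inputs are enabled
    data_scope: str = "mood_and_last_thought"  # the only data visible off-LAN
    review_cadence_days: int = 30              # scheduled safety review
    quiet_hours: tuple = (20, 7)               # bedtime suppression window

def config_summary(v: ParticipantVariant) -> str:
    """What is bespoke (the variant) vs productised (the base)."""
    return (f"bespoke: persona, sensors={list(v.sensors)}, "
            f"scope={v.data_scope}, review every {v.review_cadence_days}d; "
            f"productised: services, whiteboard, LLM fallback, test suite")

# Example: a new participant starts on a tighter review cycle.
v = ParticipantVariant(
    persona_prompt="declarative language, transition warnings, RSD-aware",
    review_cadence_days=14,
)
```

The frozen dataclass makes the bespoke/productised boundary auditable: anything not expressible as a field of the variant is, by definition, a change to the shared base and belongs in the productisation roadmap.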
### Model C: SPARK as inspiration only
The proposal references SPARK as evidence that Adrian has thought deeply about non-coercive AI for ND children. The deployed tools in the practice are NOT SPARK-derived. They’re whatever’s appropriate for each participant — could be off-the-shelf communication apps, AAC tools, scheduling apps, sensory accommodations, with Adrian’s safety review applied.
- Pros: Honest about the gap between SPARK-as-personal-project and deployable service. Doesn’t overcommit Adrian.
- Cons: Reduces the proposal’s differentiator. The “AI safety background” credential remains, but the SPARK-as-evidence claim becomes weaker.
- Verdict: Probably the right v1 framing. SPARK demonstrates Adrian’s thinking; the practice deploys whatever’s right for each participant; AI tools are reviewed via Failure First before deployment.
## Adrian’s call
This is Adrian’s decision, not Claude’s and not James’s. Three factors to weigh:
- What can actually be delivered in v1. The honest answer is probably Model C.
- What credit SPARK should get in the proposal. Adrian built it. It’s good. It just isn’t yet a product.
- What involving Obi at all means. SPARK is publicly Obi’s robot too. Adrian needs to be comfortable with Obi being indirectly part of the proposal context, even if Obi isn’t named in the proposal itself.
## If Model C is chosen
The proposal language should be careful:
- “Adrian’s AI safety background includes the SPARK project, an independent research deployment…”
- “…not a deliverable of this service”
- “…all AI tools deployed to participants are subject to Failure First review regardless of origin”
The proposal should NOT say:
- “We deploy SPARK to your child”
- “Our AI companion offering is based on SPARK”
- “SPARK is included in our service”
## If Model B is chosen (later)
A separate productisation roadmap document is needed before this section can be confidently drafted. That’s a 6–12 month project of its own and probably has different commercial structure (licensing, support contracts, hardware cost recovery) than the hourly NDIS billing model.
## Open dependencies
- Adrian’s call on the model
- Obi’s awareness/comfort if SPARK is referenced in any external business document at all
- 05-safeguarding-appendix.md (the AI ethics framework needs to cover whatever model is chosen)
- 04-roles-and-scope.md (SPARK-derived work changes Adrian’s hours picture significantly)