01 — Service Model (v2)
Status: Stub. The bones of the original five-stage model are good; it needs role re-attribution and tightening.
Purpose
Describe what the practice delivers, in what sequence, with what principles, and who does what at each stage.
Carry-forward from the original
The five-stage engagement (Assess, Design, Implement, Train, Monitor) is sound. Keep the structure. Keep the non-coercive principles. Keep the framing that “not working” is feedback, not failure.
What changes
Role attribution per stage
(Subject to confirmation in conversation with James — see synthesis/strategic-pivot.md.)
| Stage | Lead | Specialist input |
|---|---|---|
| 1. Assessment | James | Adrian on async technical questions only |
| 2. Tool Design | Adrian | James provides participant context |
| 3. Implementation | James | Adrian remote-supports during setup |
| 4. Training | James | Adrian provides materials and documentation |
| 5. Ongoing Monitoring | James | Adrian on call for technical escalations |
Non-coercive principles — strengthen
The original list is good. v2 should:
- Cite a methodology or framework rather than asserting the principles without attribution
- Add explicit consent boundaries for minors (assent + guardian consent; age-appropriate explanation; right to withdraw without penalty; feedback channels that don’t require a parent to relay them)
- Add explicit protocols for what “ending a session” looks like in practice, including for non-verbal participants
Sensory and communication preferences
The draft mentions identifying these “before any device is presented” but doesn’t say how. v2 should specify the assessment instruments or approach (sensory profile screen, communication preferences inventory, accommodations checklist) — not in detail, but enough that a referrer knows what they’re getting.
Open questions
- Is the 4-week settling period before Stage 5 begins evidence-based or invented? If invented, either ground the duration in evidence or explain the rationale for keeping it.
- Does Stage 5 ever transition to a lower-touch maintenance mode, or does it run indefinitely at a weekly cadence? This has implications for both participant outcomes and practice capacity.
- For participants where AI tools are deployed, what’s the review cadence on the AI tool itself (separate from the participant check-in)?
Open dependencies
- 04-roles-and-scope.md
- 05-safeguarding-appendix.md
- 06-spark-deployment-model.md