
AI that assists.
Data that stays your own.
We use AI as a high-precision assistance tool for psychosocial risk triage. Our architecture ensures your sensitive data never leaves its protected environment to train any third-party model.
Zero Training
Your inputs are used strictly for real-time processing and are never used to train our models or those of our third-party providers (OpenAI/Anthropic Enterprise tier).
Ephemeral Processing
Any data passed for analysis is processed in-memory and discarded immediately after the session. We maintain the record, but the processing layer is stateless.
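In practice, a stateless processing layer can be pictured as a function whose raw input exists only in local memory for the duration of the call, while only the durable triage record survives. A minimal sketch, with hypothetical names and a placeholder classification step standing in for the model call:

```python
from dataclasses import dataclass

@dataclass
class TriageResult:
    """Durable record kept by the platform; contains no raw input text."""
    hazard_category: str
    risk_level: str

def process_ephemeral(raw_text: str) -> TriageResult:
    """Analyse input in-memory and return only the durable result.

    The raw text lives only in this function's local scope; nothing
    writes it to disk or a database, so it is discarded as soon as
    the call returns (the "stateless processing layer").
    """
    # Illustrative keyword check standing in for the real model call.
    category = "workload" if "deadline" in raw_text.lower() else "general"
    level = "elevated" if "overwhelmed" in raw_text.lower() else "routine"
    return TriageResult(hazard_category=category, risk_level=level)

result = process_ephemeral("Team reports feeling overwhelmed by deadline pressure")
print(result.hazard_category, result.risk_level)  # workload elevated
```

The design point is that persistence and processing are separated: the record store never sees the raw text, and the processing function never sees the store.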
Metadata Only
Historical pattern matching uses sanitized metadata categories (e.g., 'High Workload Trend') rather than raw employee text, protecting individual privacy at scale.
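The sanitisation step can be sketched as a one-way reduction from raw text to coarse category labels. The keyword taxonomy below is purely illustrative (the real taxonomy is not documented here); the point is that only the labels are retained for pattern matching:

```python
# Hypothetical category keywords; the production taxonomy is assumed, not shown.
CATEGORY_KEYWORDS = {
    "High Workload Trend": ["overtime", "deadline", "workload"],
    "Low Support Trend": ["unsupported", "isolated", "ignored"],
}

def sanitize_to_metadata(raw_entry: str) -> list[str]:
    """Reduce a raw text entry to coarse metadata categories.

    Only the category labels are kept for historical pattern matching;
    the raw employee text is never stored or compared against history.
    """
    text = raw_entry.lower()
    return [cat for cat, words in CATEGORY_KEYWORDS.items()
            if any(w in text for w in words)]

print(sanitize_to_metadata("Constant overtime and missed deadlines"))
# ['High Workload Trend']
```

Because the mapping is lossy by design, a historical trend query can never reconstruct what an individual actually wrote.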
AI Assistance vs Human Judgement
Psychosocial risk management is a human-led activity. The AI Assistant functions as Decision Support — it does not replace the manager or auditor's role.
What the AI does
Suggests hazard categories, detects risk escalation patterns, prompts for mandatory review cycles, and drafts control options based on your taxonomy.
What the AI does NOT do
Diagnose mental health conditions, assess an employee's fitness for work, make final risk decisions, or relieve the PCBU of its legal duty to exercise judgement.
Standardisation without burden
"The primary benefit of AI in PsychProof is the reduction of cognitive load on managers. By suggesting categories and identifying patterns, it ensures records are consistent and audit-ready — without requiring the manager to be a WHS expert."
Privacy by design
Our platform architecture is designed to prevent data spill between different organisational workspaces. AI suggestions are constrained to your individual tenant — no cross-organisational pattern matching occurs unless explicitly authorised for benchmarking.
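Tenant scoping of this kind is typically enforced at the query layer: every read is filtered by the caller's tenant, and cross-tenant material is reachable only through an explicit opt-in, and only as aggregates. A simplified sketch (the in-memory store and flag name are illustrative):

```python
def fetch_patterns(store: dict, tenant_id: str,
                   benchmarking_opt_in: bool = False) -> list:
    """Return pattern records visible to a single tenant.

    Queries are scoped to the caller's tenant. With the explicit
    benchmarking flag, only anonymised aggregate rows are added —
    never another tenant's raw records.
    """
    visible = list(store.get(tenant_id, []))
    if benchmarking_opt_in:
        visible += store.get("_aggregate_benchmarks", [])
    return visible

store = {
    "acme":   [{"pattern": "High Workload Trend"}],
    "globex": [{"pattern": "Low Support Trend"}],
    "_aggregate_benchmarks": [{"pattern": "sector-average"}],
}
print(fetch_patterns(store, "acme"))
# [{'pattern': 'High Workload Trend'}]
```

Putting the filter in one chokepoint, rather than in each caller, is what makes "no cross-organisational matching" verifiable in code review.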
Model Integrity Report
Inference Provider
AWS Bedrock / Azure AI (Private Endpoints)
Data Retention
Stateless / Ephemeral
PsychProof does not use the direct public API of any model provider. We use enterprise private endpoints with contractual guarantees that your data stays within the protected environment and is never used for training.
Audit-grade infrastructure
When we audit your system, we audit the AI prompts too. All prompt logic is version-controlled and can be exported for forensic review by technical counsel.
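One way to make a version-controlled prompt exportable and tamper-evident is to pair each prompt version with a content hash, so reviewers can confirm the exported text matches what was deployed. A minimal sketch (function and field names are illustrative, not our production schema):

```python
import hashlib
import json

def prompt_audit_record(prompt_text: str, version: str) -> dict:
    """Build an exportable, tamper-evident record of one prompt version.

    The SHA-256 digest lets technical counsel verify that the exported
    prompt text matches the version that was actually deployed.
    """
    digest = hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()
    return {"version": version, "sha256": digest, "prompt": prompt_text}

record = prompt_audit_record(
    "Classify the hazard category for: {input}", "v1.4.2")
export = json.dumps(record, indent=2)  # ready for forensic review
```

Any edit to the prompt text changes the digest, so a forensic reviewer can detect after-the-fact modification without trusting the exporting system.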
"Trust is built through verification, not just transparency. We provide both."
