
Ethical AI Policy for On-Premises Systems

Last updated: 2 November 2025
Company: Inteliate Ltd (Company No. 15514345)
Registered office: London, United Kingdom
Contact: contact@inteliate.com (subject: Ethics)

1) Policy statement & purpose

Inteliate builds AI that runs on your infrastructure. We design for security, legality, and human oversight by default. This Policy sets the rules for how Inteliate and our customers design, deploy, and use Inteliate AI systems on‑premises.

Applies to: all Inteliate staff, contractors, partners, and customer users of Inteliate software and the customer portal.

2) Core principles

  • Accountability: Clear ownership for models, data, and decisions; auditable actions end‑to‑end.

  • Fairness & non‑discrimination: Mitigate bias in data and models; test and monitor for disparate impact.

  • Transparency: Document scope, limits, data provenance, and known failure modes; provide explainability features where feasible.

  • Human oversight: AI augments; people decide. Guardrails prevent automation from bypassing human review.

  • Security by design: Offline by default; least‑privilege, role‑based access; tamper‑evident logs.

  • Proportionality: Collect and process only what’s needed for the task; minimise retention.

3) Development & testing standards

  • Data selection & curation: Use lawful, relevant, representative datasets; document sources and licenses.

  • Bias mitigation: Apply sampling, weighting, and evaluation to reduce unfair outcomes; record metrics and trade‑offs.

  • Model documentation: For every model, maintain a concise Model Card: purpose, inputs/outputs, limits, risks, evaluation metrics, and change history (an illustrative sketch follows this list).

  • Reproducibility: Record training/config parameters and datasets used; version all artifacts.

  • Explainability: Prefer interpretable methods or attach explanations (saliency/rationales, trace‑backs to sources) where feasible.
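
A Model Card can be kept as a small structured record, versioned alongside the model artifact. The sketch below is a minimal Python illustration; the ModelCard structure, field names, and example values are assumptions for illustration, not a prescribed Inteliate schema.

  # Illustrative Model Card record; the structure, field names, and values
  # are assumptions, not a prescribed Inteliate schema.
  from dataclasses import dataclass, field
  from typing import Dict, List

  @dataclass
  class ModelCard:
      name: str                  # model identifier
      version: str               # matches the versioned artifact
      purpose: str               # intended use, in plain language
      inputs: List[str]          # expected inputs
      outputs: List[str]         # what the model returns
      limits: List[str]          # known failure modes / out-of-scope uses
      risks: List[str]           # identified ethical or operational risks
      metrics: Dict[str, float]  # evaluation results on the benchmark suite
      change_history: List[str] = field(default_factory=list)

  card = ModelCard(
      name="entity-linker",
      version="1.4.0",
      purpose="Suggest candidate links between records for analyst review",
      inputs=["record pairs"],
      outputs=["link score 0-1", "supporting evidence"],
      limits=["not validated on non-Latin scripts"],
      risks=["over-linking of common names"],
      metrics={"precision": 0.93, "recall": 0.88},
  )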

4) Data ethics & use

  • Lawful origin & purpose limitation: Data must have a legitimate source and a defined, lawful purpose; no secondary use without approval.

  • On‑prem only: All data stays in the customer’s environment. No backhaul to Inteliate.

  • Zero‑knowledge posture: Inteliate personnel do not access customer data unless explicitly authorised in a time‑boxed, logged support window under contract.

  • Minimisation & retention: Use the least data necessary; retention is customer‑controlled; provide secure purge tools.

  • Traceability: Every result should be traceable to source inputs where practicable.
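
Traceability can be achieved by attaching source identifiers and the model version to every generated result. The record below is purely illustrative, with hypothetical field names and values.

  # Purely illustrative provenance record; field names and values are hypothetical.
  from dataclasses import dataclass
  from typing import List

  @dataclass
  class TracedResult:
      answer: str
      source_ids: List[str]   # identifiers of the inputs the result draws on
      model_version: str      # ties the result back to a specific Model Card

  result = TracedResult(
      answer="Records A and B reference the same address.",
      source_ids=["doc-0042", "doc-0107"],
      model_version="entity-linker 1.4.0",
  )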

5) Deployment & operational integrity

  • Offline by default: No internet connectivity required or assumed; any outbound link must be explicitly enabled, justified, and audited.

  • Baseline hardening: Encrypted storage, MFA for admin roles, signed releases, integrity checks, secure update process.

  • Role‑based access control: Users only see what they need; admin changes require approval workflows.

  • Immutable audit logs: Log uploads, queries, configurations, model changes, and exports; store locally, encrypted, tamper‑evident (see the sketch after this list).

  • Support access: Dual‑control ephemeral credentials; customer can observe and revoke at any time; all access logged.
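
One common way to make local logs tamper‑evident is to hash‑chain entries so that any later modification breaks the chain when it is verified. The sketch below illustrates that pattern only; it is not Inteliate’s actual logging implementation.

  # Minimal hash-chained audit log sketch; illustrates tamper evidence,
  # not Inteliate's actual logging implementation.
  import hashlib, json, time

  def append_entry(log, actor, action, detail):
      prev_hash = log[-1]["hash"] if log else "0" * 64
      entry = {
          "ts": time.time(),
          "actor": actor,
          "action": action,   # e.g. upload, query, config, model_change, export
          "detail": detail,
          "prev": prev_hash,
      }
      payload = json.dumps(entry, sort_keys=True).encode()
      entry["hash"] = hashlib.sha256(payload).hexdigest()
      log.append(entry)
      return entry

  def verify_chain(log):
      prev = "0" * 64
      for e in log:
          body = {k: v for k, v in e.items() if k != "hash"}
          digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
          if e["prev"] != prev or digest != e["hash"]:
              return False
          prev = e["hash"]
      return True

  log = []
  append_entry(log, "analyst1", "query", "search: case-42")
  append_entry(log, "admin2", "config", "retention set to 90 days")
  assert verify_chain(log)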

6) Human oversight & safe use

  • Decision rights: Investigators/analysts/operators retain final say; AI outputs are advisory.

  • Uncertainty & limitations: Surfaces confidence/ambiguity flags and known error modes; force review when thresholds are met.

  • No dark patterns: Interfaces avoid coercive UI; users can see what a feature does and configure/disable where allowed.
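
Forcing review when thresholds are met can be as simple as a gate that routes low‑confidence or flagged outputs to a human queue. The sketch below is hypothetical; the threshold value and field names are assumptions, not product behaviour.

  # Hypothetical review gate; the threshold and field names are assumptions,
  # not Inteliate product behaviour.
  REVIEW_CONFIDENCE_FLOOR = 0.80   # below this, a human must review

  def route_output(result):
      """Return 'auto-advisory' or 'human-review' for an AI result."""
      if result.get("confidence", 0.0) < REVIEW_CONFIDENCE_FLOOR:
          return "human-review"
      if result.get("ambiguity_flag") or result.get("known_error_mode"):
          return "human-review"
      return "auto-advisory"   # still advisory: the operator retains final say

  print(route_output({"confidence": 0.65}))                          # human-review
  print(route_output({"confidence": 0.95, "ambiguity_flag": True}))  # human-review
  print(route_output({"confidence": 0.95}))                          # auto-advisory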

7) Prohibited uses & high‑risk gating

Inteliate systems must not be used for:

  • Unlawful discrimination or targeting based on protected characteristics.

  • Social scoring of individuals or groups.

  • Undisclosed mass surveillance or real‑time biometric identification, unless explicitly lawful, contractually permitted, and approved through the ethics review (see §8).

  • Automated decisions with legal or similarly significant effects, without the required human review and a lawful basis.

High‑risk use cases (e.g., public‑sector enforcement, safety‑critical workflows) require an Ethics & Risk Assessment and explicit senior approval before go‑live.

8) Governance & oversight

  • Roles:

    • Executive owner: accountable for policy effectiveness and resourcing.

    • AI Ethics Lead (or delegate): maintains standards, chairs reviews, signs off high‑risk deployments.

    • Engineering leads: enforce SDLC controls, documentation, and testing.

    • Customers: control local data, users, and lawful use; run local access approvals.

  • Ethics Review: A standing group (engineering/PM/legal/security) reviews new features/integrations, high‑risk deployments, and red flags; it can pause or block releases.

  • Records: Keep model cards, risk assessments, approvals, and audit logs for the duration required by contract/law.

9) Monitoring, drift & continuous improvement

  • Operational monitoring: Check outputs for bias and drift against baselines; alert on anomalous behaviour (see the drift sketch after this list).

  • Updates & retraining: Validate against benchmark suites before release; document changes; provide rollback.

  • Feedback loop: In‑product or portal feedback routes; classify, track, and respond; incorporate fixes into releases.

  • Periodic audits: At least annually, review models, data pipelines, and access logs; remediate findings.
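
Drift against a baseline can be checked with a distribution‑comparison statistic such as the population stability index (PSI). The sketch below shows one such check; the 0.2 alert threshold and the simulated score distributions are assumptions for illustration.

  # Illustrative drift check using the population stability index (PSI);
  # the 0.2 alert threshold is an assumption, not a policy requirement.
  import numpy as np

  def psi(baseline, current, bins=10):
      """Population stability index between two score distributions."""
      edges = np.histogram_bin_edges(baseline, bins=bins)
      b_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
      c_frac = np.histogram(current, bins=edges)[0] / len(current)
      b_frac = np.clip(b_frac, 1e-6, None)   # avoid log(0)
      c_frac = np.clip(c_frac, 1e-6, None)
      return float(np.sum((c_frac - b_frac) * np.log(c_frac / b_frac)))

  rng = np.random.default_rng(0)
  baseline_scores = rng.beta(2, 5, 10_000)   # scores at validation time
  current_scores = rng.beta(2, 4, 10_000)    # scores observed in production
  if psi(baseline_scores, current_scores) > 0.2:
      print("ALERT: score distribution has drifted from baseline")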

10) Third‑party data & integrations

  • Due diligence: Vet vendors/APIs/feeds for legality, licence terms, privacy posture, and reliability.

  • Allow‑list only: External calls (if enabled) restricted to approved endpoints; all traffic logged (see the sketch after this list).

  • OSINT & sensitive sources: Record provenance and usage constraints; tag context to avoid scope creep.

  • Custom integrations: Subject to technical and ethical compatibility review; document shared responsibilities.
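
At the application layer, an allow‑list can be enforced by refusing any outbound call whose host is not approved and logging every attempt; a real deployment would also enforce the same restriction at the network layer. The endpoint names below are hypothetical.

  # Minimal application-layer allow-list sketch; endpoint names are
  # hypothetical, and network-layer enforcement is still required.
  import logging
  from urllib.parse import urlparse

  logging.basicConfig(level=logging.INFO)
  ALLOWED_HOSTS = {"feeds.example-vendor.com"}   # approved endpoints only

  def outbound_permitted(url: str) -> bool:
      host = urlparse(url).hostname or ""
      allowed = host in ALLOWED_HOSTS
      logging.info("egress %s host=%s allowed=%s", url, host, allowed)
      return allowed

  assert outbound_permitted("https://feeds.example-vendor.com/v1/licences")
  assert not outbound_permitted("https://unknown-host.example.net/data")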

11) Legal & regulatory compliance

  • Data protection: Comply with applicable data‑protection laws (e.g., UK GDPR/EU GDPR). Inteliate typically does not act as a data processor; if support requires access to customer data, a data‑processing agreement (DPA) must be executed first.

  • Sectoral rules: Customers remain responsible for lawful use in their jurisdiction (e.g., public‑sector, security, or financial regulations).

  • Export/sanctions & localisation: Respect export controls, sanctions, and data‑residency mandates.

  • Records & disclosure: Maintain documentation required by law; cooperate with lawful audits and requests.

12) Incident reporting & response

  • Report channels: contact@inteliate.com (subject: Ethics/Incident) or customer’s internal channel.

  • Timelines: Prompt triage; high‑severity issues prioritised; root‑cause analysis with corrective actions.

  • Customer control: Customers can isolate systems, revoke support access, and export logs for forensics.

13) Training & awareness

  • Inteliate staff: mandatory onboarding and annual refresh on responsible AI, data protection, bias, and secure on‑prem operations.

  • Customer training: role‑based training at deployment; covers limits, audit features, safe operation, and escalation paths.

  • Non‑retaliation: Good‑faith reporting is protected.

14) Policy maintenance

  • Review cadence: Annual review or on major legal/product change.

  • Change control: Version‑controlled; material changes notified to customers at least 30 days before enforcement.

  • Exceptions: Only by written approval of the CEO and CTO, with recorded rationale and expiry.

Appendix A — On‑prem baseline checklist

Before go‑live, confirm:

  1. Offline mode (no egress), or documented/approved allow‑lists.

  2. Storage encrypted, backups tested, keys managed locally.

  3. RBAC mapped to job roles; admin changes require approvals.

  4. Immutable local audit logging enabled and retained per contract.

  5. Model Cards completed; risk assessment filed; bias tests passed.

  6. Support access disabled by default; dual‑control temporary access only.

  7. Kill‑switch/rollback documented and tested.

  8. User training delivered; escalation routes tested.
