Neural Network Group Ltd

AI Usage Policy

This policy sets expectations for use of Saint AGI, including governance, human review, and prohibited uses.

Last updated 17 March 2026

Purpose of this policy

Saint AGI is designed to help organizations run governed AI agents across real workflows. This policy sets expectations for customers and end users who deploy the service across internal tools, knowledge sources, and business processes.

Use of Saint AGI must remain lawful, responsible, and consistent with your organization's internal approvals and risk policies.

Human responsibility remains with you

AI outputs and agent actions should be reviewed at a level appropriate to the task. Sensitive changes, external communications, regulated workflows, financial activity, employment decisions, and security-impacting actions should receive meaningful human oversight.

You are responsible for the prompts, data sources, instructions, and actions configured in your use of the service, including downstream effects of automated decisions or executions.

Prohibited uses

You may not use Saint AGI to generate or facilitate unlawful content, fraud, harassment, malware, unauthorized access, deceptive impersonation, or activity that infringes privacy, intellectual property, or other legal rights.

You may not use the service in a way that violates employment law, anti-discrimination law, consumer law, sanctions rules, export controls, or any other law that applies to your organization or users.

  • No deployment for illegal surveillance or unauthorized monitoring.
  • No autonomous execution in high-risk contexts without suitable human review and legal basis.
  • No attempts to disable safety, approval, logging, or governance features without authorization.

Data handling and model use

You should only submit data to Saint AGI when you have the right and legal basis to do so. Before using personal, confidential, or regulated data, ensure that your organization has approved the workflow and that the selected model path is appropriate for the sensitivity of the task.

Where the product offers different routing, approval, or visibility controls, you are responsible for choosing settings that fit the relevant level of risk.

Monitoring and enforcement

We may monitor service use for security, abuse prevention, product integrity, and compliance with our terms and policies. This can include reviewing operational metadata, audit events, approval trails, and support-related information.

We may suspend or restrict use that poses a legal, security, or reputational risk to us, our customers, third parties, or the public.

Questions and escalation

If you need clarification on appropriate use of Saint AGI, contact hello@neuralnetworkgroup.com. If you believe the service has been used in a way that breaches this policy, notify us promptly using the same address.

Organizations using Saint AGI should maintain their own internal approval, escalation, and review processes for higher-risk AI activity.