AI Governance
Lala is FoxtINN's AI virtual manager. This document describes the governance, data handling, and safety practices we apply to it.
Grounded responses
Lala does not generate answers from training data alone. Every response is grounded in your live operational data and cited to its source (a shift, store, or ticket).
Refusal behavior
When Lala cannot answer with citation-backed data, it refuses rather than guesses. Refusals are logged for product improvement.
Training data
Your operational data is never used to train any public model. Improvements are limited to your tenant or, with explicit consent, aggregated and de-identified.
Bilingual fairness
Lala is evaluated in both English and Spanish across hospitality, retail, restaurant, and trades scenarios. Evaluation results are available on request.
Human-in-the-loop
Actions with operational impact (shift offers, vendor work orders) require human approval by default. Auto-approval is opt-in per workflow.
Auditability
Every Lala action is captured in the audit log with the prompt, retrieved context, response, and the user who approved it.
Questions? hello@foxtcon.com · Back to Trust Center