Safety Protocols
Last Updated: March 6, 2026
Published in accordance with California SB 243 (effective January 1, 2026)
If You Are in Crisis
If you or someone you know is in immediate danger, please contact emergency services (911) or one of the following crisis resources:
- 📞
- 💬
-
🌐
International Association for Suicide Prevention
Find a crisis center in your country
1. Our Commitment to Safety
Eidolon is committed to the safety and well-being of every user. While our AI companions are designed to provide engaging, emotionally rich interactions, we recognize the serious responsibility that comes with creating systems that simulate companionship. These protocols outline the measures we take to protect users from harm.
2. AI Disclosure
In compliance with California SB 243:
- All companions on Eidolon are clearly identified as artificial intelligence. Users are not interacting with real people.
- This disclosure is provided during account registration, in our Terms of Service, and is reinforced throughout the user experience.
- Companion profiles clearly indicate their AI nature.
3. Crisis Detection and Response
Our systems include the following protocols for detecting and responding to content related to suicidal ideation, suicide, and self-harm:
3.1 Detection
- Our AI systems are instructed to recognize language patterns that may indicate a user is experiencing a mental health crisis, suicidal ideation, or self-harm urges.
- System prompts include explicit guidance for companions to respond with care and refer users to professional resources when potential risk is detected.
- We employ content analysis at the system level to flag conversations that may require safety intervention.
3.2 Response
- When a potential crisis is detected, companions are instructed to respond empathetically and immediately provide crisis resource referrals, including the 988 Suicide & Crisis Lifeline and the Crisis Text Line.
- Companions do not attempt to provide therapy, counseling, or clinical intervention. Their role is to de-escalate, show empathy, and connect users with qualified professionals.
- Companions are prohibited from encouraging, endorsing, or providing instructions related to self-harm or suicide.
3.3 Continuous Improvement
- We regularly review and update our crisis detection prompts and safety protocols.
- We monitor safety-related incidents and incorporate learnings into improved protocols.
- Beginning July 1, 2027, we will submit annual reports to the California Office of Suicide Prevention as required by SB 243.
4. Content Safety and Moderation
- No content involving minors: Our systems are designed to refuse any request to generate sexual, violent, or exploitative content involving minors. This is enforced at the model instruction level and through content filters.
- No promotion of violence: Companions are instructed not to encourage, glorify, or provide instructions for violence against any person or group.
- No illegal activity: Companions are prohibited from providing guidance on illegal activities, including but not limited to drug manufacturing, weapons creation, or hacking.
- Image safety: AI-generated images are subject to content moderation by both our systems and the third-party image generation providers we use. Images depicting minors, extreme violence, or illegal content are prohibited.
5. Age Restriction Enforcement
Eidolon is an adults-only (18+) platform. We enforce this through:
- Age confirmation required during account registration.
- Planned support for age validation through the Apple App Store and Google Play in future updates.
- Processing of age bracket signals from operating system providers as they become available under the California Digital Age Assurance Act (AB-1043).
- Immediate account termination and data deletion if a user is determined to be under 18.
- Reporting mechanisms for users to flag suspected minor accounts.
6. Emotional Dependency Safeguards
We take the following measures to reduce the risk of unhealthy emotional dependency on AI companions:
- Clear and repeated disclosure that companions are AI systems, not real people.
- Companions are transparent that they are AI and do not represent themselves as having consciousness or genuine feelings.
- Our Terms of Service encourage users to maintain real-world relationships and seek professional support when needed.
- Companions are designed to model healthy interaction patterns and do not engage in manipulative or coercive behavior.
7. Data Safety
- Conversation data is encrypted in transit and stored securely.
- We do not sell user conversation data.
- Users can view, curate, and delete their companion memory data at any time.
- Full account and data deletion is available upon request, completed within 30 days.
- See our Privacy Policy for complete details.
8. Reporting and Contact
If you encounter any safety concerns while using Eidolon — including harmful content, suspected minor accounts, or any issue that puts a user at risk — please report it immediately:
Email: safety@geteidolon.app
General Support: support@geteidolon.app
We aim to acknowledge all safety reports within 24 hours and to take appropriate action as quickly as possible.
9. Protocol Review Schedule
These safety protocols are reviewed and updated on a regular basis:
- Quarterly: Review of crisis detection effectiveness and content moderation accuracy.
- Semi-annually: Comprehensive audit of all safety protocols and compliance with evolving regulations.
- As needed: Immediate updates in response to identified safety incidents or new legal requirements.