About Enero 1
Who We Are
Our Data Story
Enero 1 addresses a critical gap in AI training by capturing real human interactions that show how conflict escalates, stabilizes, and resolves. These interactions, gathered in consent-based mediation sessions, are securely transcribed, anonymized, and structured to reflect emotion, shifts in intent, and resolution dynamics. The resulting data supports the training of emotionally aware, de-escalatory AI systems built for real-world use.

Our Data Vision
Enero 1 aims to become a trusted source of real human interactions that help AI systems understand disagreement, emotion, and resolution in real-world settings. Our long-term vision is to support the development of AI that can navigate conflict responsibly, reduce escalation, and operate with greater human awareness at scale.
Our Founder
Victoria Donates, Founder

My Story
During my mediation training, I noticed a recurring pattern: most conflicts are driven less by factual disagreement than by missing context, emotional overload, and breakdowns in interpretation. These dynamics play a central role in de-escalation and resolution outcomes, yet they are rarely captured or preserved in a form that others can study or reuse. As AI systems increasingly take part in human-facing, high-stakes conversations, this absence becomes a structural limitation. Enero 1 was created to capture real mediation interactions ethically and translate them into structured, analyzable training material, strengthening how human-facing AI models reason about context and make decisions under emotional and social uncertainty.
Contact Us
Get in Touch with Our Team Today