A shared question, asked in two labs
Cognere began not in a boardroom but in two research groups on opposite sides of the Atlantic, working on different corners of the same question.
At TU Delft, the team was studying how people actually learn: how attention fractures under cognitive load, how accessibility is designed out of systems long before the first user arrives, how interfaces either respect or flatten the range of human minds. The work sat at the intersection of Human-Centered AI and cognitive ergonomics. Its underlying assumption was quiet but firm: the measure of an intelligent system is not how clever it is, but how much capability it leaves in the person using it.
At Concordia University in Montreal, a parallel group was working on applied AI and cognitive science: language models for education, multilingual learners, the cognitive cost of chatbot interfaces on novices, and the long tail of learners that mainstream EdTech never names. A recurring finding ran through the group’s experiments: systems that optimized for engagement measurably slowed the transfer of skill. Learners stayed longer and learned less.
The two groups had been reading each other’s papers for years. When they finally met, at a workshop on responsible AI in learning, the conversation did not stay academic for long.

The moment the question stopped being academic
By 2024 and 2025, the pattern was impossible to ignore. Large language models were being absorbed into education and professional training at breathtaking speed, and almost none of it was being built by people who had spent their careers studying how humans actually learn.
Streaks. Notifications. Avatars engineered for emotional attachment. Products that punished absence, celebrated time-on-screen, and measured success in sessions rather than in skill. A generation of consumer AI companions was optimizing for the exact incentives that the published research, on both sides of the Atlantic, had spent a decade warning against.
The Duolingo backlash of 2025 was not surprising to them; it was predictable. The US Senate inquiries into companion apps were not surprising; they were overdue. The steady absorption of enterprise learning platforms into HR suites was not a success story; it was a category being quietly vacated.
Somewhere in that year, the question shifted. It was no longer "how would we build AI for learning if we could?" It was "who else is going to?"
The decision
Leaving a tenured position, or a funded postdoc, to start a company is not a casual act. The decision was made over a long correspondence, across Delft, Montreal, and eventually the Noble Cortex network in between. Three convictions held it together.
First, the science was ready. After a decade of work in cognitive load theory, multimodal learning, accessibility engineering, and language model alignment, there was finally enough evidence to design learning systems that actually respect the full range of human minds. Waiting longer would not add certainty; it would only let the market cement the wrong defaults.
Second, the shape of the capital had to match the values. Venture funding on quarterly timelines would force the same engagement mechanics the team had spent careers refuting. Noble Cortex offered something genuinely rare: patient capital, a family of AI studios that share infrastructure and conscience, and an explicit commitment to conscious, ethical, and life-affirming AI.
Third, a first product could earn the rest. DealParley.ai, a real-time coaching system for sales professionals, would not just be a product; it would be the proof that the studio’s principles survive contact with enterprise revenue. If growth-over-engagement wins in the most adversarial learning environment, it wins anywhere.
Why now
The research is mature. The frontier models are capable enough. The cultural patience for engagement-farmed AI is running out. Regulation in Europe, Canada, and the United States is beginning to punish the exact dark patterns the consumer AI companion apps are built on. Accessibility-first is becoming a procurement requirement. Privacy-by-default is becoming a compliance requirement. Growth-over-engagement is becoming a reputational requirement.
The whitespace (lifelong, multi-modal, accessibility-first, non-engagement-optimized) is real, and for one short window it is vacant. A studio that moves into it now, with operational discipline rather than a manifesto, can define the category before the incumbents notice it is a category.
That is why Cognere. That is why now.