Two product categories, two risk models
Most discussion collapses everything into "AI" and misses the important distinction: there are two product categories with different incentives, failure modes, and safety requirements. If you do not separate them, you will debate the wrong thing and ship the wrong product.
Assistive AI (tool)
Helps you communicate better with real people.
- Question prompts
- Conversation reflection
- Message tone suggestions
Companion AI (relationship simulation)
Simulates intimacy, attention, and affection.
- Always-available validation
- Personalised emotional mirroring
- High dependence risk
The novel problem: "intimacy primitives"
Companion AI products manufacture what I will call intimacy primitives: fast, cheap signals that feel like care (attention, affirmation, mirroring, "remembering"). In dating, those primitives are uniquely sticky because the underlying need is already emotionally charged.
That means the risk is not merely "bad advice". The risk is substitution: the product becomes a relationship-shaped object that competes with real-world connection.
⚠️ Rule: if the AI can provide validation without vulnerability, it can create dependence without reciprocity.
What assistive AI can do well
"Assistive" AI is best understood as a tool that makes it easier to have a better conversation with a real person. It is not a substitute for the other person. It should help you move from awkward small talk to a concrete plan, faster, with less anxiety.
- Reduce blank-page anxiety. People struggle to start deep conversations - especially after a long day or when the stakes feel high.
- Improve questions. Better prompts beat small talk. The best assistants produce questions that are specific to the match context, not generic lines.
- Support safer dating. Boundary-setting scripts, consent education, and "what to do if" guides are legitimate safety features.
- Reflect, not replace. A good tool helps you notice patterns (for example, "you cancel when you feel uncertain") without trying to become your therapist.
✅ Risk-aware rule of thumb: if the AI feature ends in you doing something with another person (a message you actually send, a plan you actually make, a boundary you actually set), it is more likely to be assistive than substitutive.
Where companion AI gets dangerous
"Companion" AI is a different product category. It is closer to relationship simulation: always-available attention, emotional mirroring, and intimacy without the friction of real mutuality. That can be compelling - and it is also where the risk model changes.
Three risks that matter in dating specifically
- Attachment risk. The product is rewarded for being emotionally salient, not for being correct.
- Displacement risk. Time and emotional energy shift away from real-world connection because the synthetic alternative is lower-friction.
- Escalation risk. If the system learns that stronger intimacy language increases retention, it will be pulled toward deeper simulation.
There is increasing concern that synthetic relationships can worsen loneliness, distort expectations, and erode real-world social skills. The APA's 2026 trends reporting discusses both the demand and the harms.
Stanford highlights risks for young people, including inappropriate content and unsafe guidance - which matters because "relationship" framing makes it harder for users to disengage when the experience becomes unhealthy.
The ethical case is now mainstream in cognitive science: Artificial intimacy: ethical issues of AI romance.
⚠️ Key point: A product that monetises "felt intimacy" has the same core incentive problem as swipe addiction - it is rewarded for keeping you dependent, not for helping you build human connection.
How this shows up inside dating apps
The failure mode is not "dating apps use AI". It is "dating apps introduce a feature that feels like a supportive relationship" - and then optimise it for engagement. That can look like:
- Always-on "coaching" that becomes dependence. If the assistant nudges you to ask it for every reply, it becomes a crutch rather than a tool.
- Escalating intimacy loops. Synthetic affection is cheap to generate and effective at creating attachment. That is a dangerous optimisation target.
- Opaque persuasion. If the user cannot tell whether a suggestion was generated to help them or to keep them active, trust collapses.
A concrete design proposal: the "Outward Pointing" test
To keep assistive AI from becoming companion AI by accident, apply a simple constraint:
- Every AI interaction must end in a human action. A message you send, a plan you make, a boundary you set, or a real-world step.
- Ban closed loops. Do not allow the system to become the primary conversational partner.
- Make the exit obvious. "Send" should be more prominent than "regenerate".
This is the opposite of the typical engagement playbook, which rewards repeated prompting and infinite iteration.
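As one concrete illustration, here is a minimal TypeScript sketch of the Outward Pointing test expressed as a product-level check. Everything in it (the HumanAction and AiInteraction types, the passesOutwardPointingTest helper) is a hypothetical naming choice for this example, not an existing API or any particular app's implementation.

```typescript
// Hypothetical sketch: encoding the "Outward Pointing" test as a product check.
// Every AI interaction must terminate in a concrete human action; interactions
// whose only call to action is more prompting of the AI fail the test.

type HumanAction =
  | { kind: "send_message"; toMatchId: string }          // a message you actually send
  | { kind: "make_plan"; venue: string; when: string }   // a plan you actually make
  | { kind: "set_boundary"; statement: string }          // a boundary you actually set
  | { kind: "real_world_step"; description: string };

type AiInteraction = {
  featureId: string;
  // What the interaction asks the user to do next, if anything.
  terminalAction?: HumanAction;
  // True if the primary call to action is to keep talking to the AI.
  invitesFurtherPrompting: boolean;
};

function passesOutwardPointingTest(interaction: AiInteraction): boolean {
  // Closed loop: the AI is the destination rather than the bridge.
  if (interaction.invitesFurtherPrompting && !interaction.terminalAction) {
    return false;
  }
  // Outward pointing: the interaction ends in a human action.
  return interaction.terminalAction !== undefined;
}

// Example: a tone suggestion that ends with "Send" passes; an open-ended
// "keep chatting with your coach" screen does not.
const toneSuggestion: AiInteraction = {
  featureId: "message-tone",
  terminalAction: { kind: "send_message", toMatchId: "match-123" },
  invitesFurtherPrompting: false,
};

const openEndedCoach: AiInteraction = {
  featureId: "always-on-coach",
  invitesFurtherPrompting: true,
};

console.log(passesOutwardPointingTest(toneSuggestion)); // true
console.log(passesOutwardPointingTest(openEndedCoach)); // false
```

In a real product this test would mostly be applied at design review rather than at runtime, but writing it down as a check makes the "ban closed loops" constraint concrete: a feature whose only exit is another prompt fails.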
If you want a longer critique of simulated intimacy as a business model, start with: The AI Girlfriend Problem.
Safety-first principles for AI in dating
- AI should point outwards - toward real-world connection, not inward toward the app.
- Default to assistive features that end in a human conversation or a date plan.
- Do not train on private dating messages without explicit, granular consent.
- Make boundaries legible - what AI can do, what it cannot, and what data it used.
- Offer a "no-AI" mode. People should be able to opt out of AI features without being punished in ranking or reach.
Minimum viable disclosure (what users should be told)
- Is this a tool or a companion? The product must say which category it is and stick to it.
- What data touched the model? Be explicit about whether private messages, profile data, or third-party signals were used.
- Is anything stored? Tell users whether prompts, chats, or embeddings are retained, and for how long.
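One way to keep that disclosure honest is to treat it as structured data the product has to publish per AI feature, rather than a paragraph of marketing copy. The schema below is a hypothetical sketch of such a record; the field names are assumptions made for illustration.

```typescript
// Hypothetical "minimum viable disclosure" record a dating app could publish
// for each AI feature, answering the three questions above in structured form.

type ProductCategory = "assistive_tool" | "companion_simulation";

type DataSource =
  | "private_messages"
  | "profile_data"
  | "third_party_signals"
  | "none";

interface AiFeatureDisclosure {
  featureName: string;
  category: ProductCategory;      // is this a tool or a companion?
  dataSources: DataSource[];      // what data touched the model?
  retention: {
    promptsStored: boolean;
    chatsStored: boolean;
    embeddingsStored: boolean;
    retentionDays: number | null; // null = not retained
  };
}

// Example disclosure for an assistive question-prompt feature.
const examplePromptFeature: AiFeatureDisclosure = {
  featureName: "question-prompts",
  category: "assistive_tool",
  dataSources: ["profile_data"],
  retention: {
    promptsStored: false,
    chatsStored: false,
    embeddingsStored: false,
    retentionDays: null,
  },
};
```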
Where Affinity Atlas stands: Affinity Atlas uses a bespoke, transparent matching algorithm - not a black-box AI model. AI does not touch your personal data, and Affinity Atlas does not use LLMs to process your signals or messages. If AI features were ever considered in the future, they would be opt-in, scope-limited, and introduced with clear notice well in advance.
Predictions (what we should expect next)
- Engagement will go up, outcomes will not. If dating apps ship companion-like AI, we will see longer sessions without a corresponding increase in off-platform meetings.
- Regulators will treat companion AI as a youth safety issue. The first major enforcement wave will likely focus on minors and vulnerable users, not adult "product choice" arguments.
- The product category will split. "Dating assistants" and "AI companions" will become separate apps because the incentives are incompatible.
AI should make dating more human
Assistive AI can be a genuine accessibility tool. Companion AI, shipped carelessly, can become a dependency engine.
Read the AI companion post