What the algorithm is deciding
- Exposure: who you are shown and how often.
- Ranking: whether you are "high value" or "low value" in the feed.
- Timing: when you get matches and notifications.
- Incentives: which frustrations steer you toward subscriptions and boosts.
Why explanations matter
In dating, people internalise outcomes. If you are shown fewer matches than expected, it is easy to conclude something is wrong with you - not the system. This is exactly why opacity is harmful: it turns algorithmic decisions into self-judgement.
When you cannot distinguish "the system did not show you" from "people rejected you", you cannot interpret the experience without anxiety.
There is also a straightforward accountability argument. A 2025 paper, Dating Apps and the Right to an Explanation, frames explanation as an ethical requirement in systems that materially shape users’ opportunities and self-perception.
The law is moving (slowly) in this direction
GDPR already includes protections around solely automated decision-making that has legal or similarly significant effects (Article 22). The EU is also steadily raising transparency expectations around AI and recommender systems in general. Two relevant anchors:
- GDPR (Regulation (EU) 2016/679) - the baseline rights framework.
- Digital Services Act (Regulation (EU) 2022/2065) - transparency obligations for recommender systems, plus systemic-risk duties for the largest platforms.
Even if dating apps stay outside the strictest categories, the direction of travel is clear: if an algorithm is shaping your life, you should be able to interrogate it.
"Explanation" should mean more than UX copy
Most platforms already have a vague story about matching: "we learn your preferences" or "we show you people you may like". That is not an explanation. It is marketing language. A meaningful explanation has to help you answer practical questions, like:
- Why was I shown this person? What signals drove it?
- Why am I seeing fewer matches? Is it reach, ranking, or my filters?
- What would change the outcome? More data, different settings, or different behaviour?
If you cannot answer those questions, you are not being explained to - you are being managed.
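As a loose illustration of those three questions, here is a minimal TypeScript sketch of a funnel diagnostic that separates "the system did not show you" from "people passed on you". The interface, field names, and thresholds are assumptions invented for this example, not Affinity Atlas's actual API or scoring logic.

```ts
// Hypothetical sketch: decomposing "why fewer matches" into filters, reach,
// and ranking. All names and thresholds are illustrative assumptions.

interface WeeklyFunnel {
  eligibleProfiles: number; // people who pass your filters at all
  timesShown: number;       // how often the system surfaced you to them
  likesReceived: number;    // impressions that became likes
  mutualMatches: number;    // likes that became matches
}

type Bottleneck = "filters" | "reach" | "ranking" | "none";

// Name the step of the funnel with the largest drop-off, so the user can tell
// an exposure decision apart from being passed over by other people.
function diagnose(funnel: WeeklyFunnel): { bottleneck: Bottleneck; suggestion: string } {
  if (funnel.eligibleProfiles < 50) {
    return {
      bottleneck: "filters",
      suggestion: "Your filters leave very few eligible profiles; widening them would change the outcome most.",
    };
  }
  const reachRate = funnel.timesShown / funnel.eligibleProfiles;
  if (reachRate < 0.1) {
    return {
      bottleneck: "reach",
      suggestion: "You are rarely being shown at all; this is an exposure decision, not a judgement of your profile.",
    };
  }
  const likeRate = funnel.likesReceived / Math.max(funnel.timesShown, 1);
  if (likeRate < 0.02) {
    return {
      bottleneck: "ranking",
      suggestion: "You are shown but rarely liked; profile content, not reach, is the lever here.",
    };
  }
  return { bottleneck: "none", suggestion: "No single step is the obvious bottleneck this week." };
}

// Example: broad filters but very low exposure points at reach, not rejection.
console.log(diagnose({ eligibleProfiles: 4000, timesShown: 120, likesReceived: 9, mutualMatches: 3 }));
```

The exact numbers do not matter; what matters is that the bottleneck, and the change most likely to move it, is named explicitly rather than left to the user's imagination.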
What a good explanation looks like
Affinity Atlas’s approach, described in Building a Transparent Algorithm and sketched in code after this list, is simple:
- Show the overlap. Shared signals that drove the match.
- Show the weights. What mattered more and why.
- Show confidence. How much data the score is based on.
- Hide raw logs. Transparency without oversharing.
- Make it auditable. Every match card should let you drill down into the breakdown without exposing the other person's private raw data.
🧾 The standard: A user should be able to answer "Why was I shown this person?" in one screen, without guessing.
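To make that standard concrete, here is a minimal sketch of the data a one-screen explanation could carry and how it might be rendered. The field names and example values are assumptions for illustration, not the actual schema from Building a Transparent Algorithm; the point is that overlap, weights, and confidence are explicit while raw logs stay server-side.

```ts
// Illustrative shape for a one-screen match explanation. Field names and
// values are assumptions, not Affinity Atlas's published formula.

interface MatchExplanation {
  // Show the overlap: shared signals that drove the match.
  sharedSignals: string[];
  // Show the weights: what mattered more and why.
  weightedFactors: { factor: string; weight: number; reason: string }[];
  // Show confidence: how much data the score is based on.
  confidence: { level: "low" | "medium" | "high"; basedOnInteractions: number };
  // Hide raw logs: only aggregated, human-readable reasons leave the backend.
}

// Render the drill-down breakdown a user would see on a match card.
function renderExplanation(e: MatchExplanation): string {
  const factors = e.weightedFactors
    .slice()
    .sort((a, b) => b.weight - a.weight)
    .map((f) => `- ${f.factor} (${Math.round(f.weight * 100)}%): ${f.reason}`)
    .join("\n");
  return [
    `Shared signals: ${e.sharedSignals.join(", ")}`,
    factors,
    `Confidence: ${e.confidence.level} (based on ${e.confidence.basedOnInteractions} interactions)`,
  ].join("\n");
}

console.log(renderExplanation({
  sharedSignals: ["long-term intent", "hiking", "close in age"],
  weightedFactors: [
    { factor: "Stated intent", weight: 0.5, reason: "You both selected a long-term relationship." },
    { factor: "Shared interests", weight: 0.3, reason: "Three overlapping interests." },
    { factor: "Activity overlap", weight: 0.2, reason: "You are active at similar times." },
  ],
  confidence: { level: "medium", basedOnInteractions: 42 },
}));
```

Keeping only aggregated, human-readable reasons in this payload is what lets a match card be inspected in full without exposing the other person's private raw data.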
Transparency that works in practice
Affinity Atlas publishes the formula and explains each match in human terms - without exposing your raw private data.
Read the transparency deep dive