The black box problem
Open any mainstream dating app. You will be shown a series of profiles. Some will feel like good matches. Most will not. You have no way of knowing why any of them were shown to you.
Was it because you are both 28 and live within 10 miles of each other? Was it because you both swiped right on similar-looking people? Was it because you have not been active in a while and the app is trying to re-engage you? Was it because the other person paid for a "boost" and your feed is now partly an advertising channel?
You will never know. Not because the information is too complex to communicate, but because the platforms have chosen not to tell you.
This is the black box problem. It is not unique to dating apps - it affects recommendation systems across every industry. But in dating, the stakes are uniquely personal. The decisions an opaque system makes about who you see, who sees you, and how you are ranked directly shape your romantic life. And you have no visibility into any of it.
💔 The emotional cost: Research from Penn State has shown that when a dating app suggests fewer matches than expected, users tend to internalise the outcome - they think something is wrong with them, not with the system. An opaque matching system does not just produce bad matches. It produces self-doubt.
Why dating apps keep their algorithms secret
The standard justification is intellectual property protection. If you reveal how the matching system works, competitors can copy it. There is some truth to this - but it is not the whole story.
Gaming prevention
The most legitimate concern is that transparency enables gaming. If users know exactly how the system works, they can optimise their behaviour to exploit it rather than engage authentically. On social media, this has led to entire industries built around algorithm manipulation (SEO, growth hacking, engagement bait).
This is a real concern. But it assumes that the only way to prevent gaming is secrecy - which is not true. Affinity Atlas handles this differently (more on that below).
Revenue protection
The less comfortable truth is that transparency would reveal how much of the matching experience is shaped by monetisation. If users knew that their visibility was being throttled to sell boosts, or that paid subscribers were systematically shown first, or that "likes" from blurred profiles were being withheld to extract subscription revenue - the backlash would be significant.
Opacity is not just a technical decision. It is a business decision. It protects the revenue model from user scrutiny.
Liability avoidance
A 2025 paper in the Journal of Applied Philosophy argued that dating app users have a moral right to know whether the system they are interacting with is "stacked against them." The authors noted that if platforms were transparent about their matching logic, they would also be accountable for it. Opacity eliminates accountability. If you do not know how the system works, you cannot challenge its decisions.
The right to know why
The case for transparency is not just a philosophical argument. It is supported by both regulation and research.
The regulatory landscape
The EU's AI Act, which entered into force in 2024, establishes transparency obligations for AI systems - particularly those that interact with people or make decisions that affect them. While dating apps are not currently classified as "high-risk" systems under the Act, the direction of travel is clear: systems that affect people's lives should be explainable.
GDPR's Article 22 already restricts significant decisions based solely on automated processing, and it is widely read as implying a right to explanation. Whether dating app matching qualifies as "automated decision-making" under GDPR is debated - but the principle that people deserve to understand systems that affect them is well established.
The trust research
A 2024 study on explainability in recommendation systems found that integrating explainability techniques (specifically LIME and SHAP methods) into recommendation models produced "significant improvements in the precision of the recommendations" alongside a "notable increase in the user's ability to understand and trust the suggestions." Transparency does not just feel good - it measurably improves both the system and the user's relationship with it.
Research from Wharton examining trust in AI systems - specifically in the context of predicting speed-dating outcomes - found that trust is fragile and closely tied to whether users can understand and verify the system's reasoning. When the reasoning is hidden, trust erodes quickly, particularly after a bad experience.
The TTC Labs project (a cross-industry initiative focused on privacy and transparency) specifically studied dating apps and concluded that users "want more insight into how opaque algorithms are suggesting their dating matches" and that "transparency will make them feel more comfortable and ready to engage with the app."
Three levels of transparency
Transparency is not binary. You cannot just "be transparent" without defining what that means. Affinity Atlas implements transparency at three distinct levels:
- Level 1 - the published formula. The compatibility formula is published and explained. Users know the general structure: Commonality × NicheWeight × SignalWeight × Confidence. They understand that niche overlaps are weighted more heavily, that each category has its own weight, and that confidence scales with data depth. The formula is not a secret.
- Level 2 - per-match explanations. Every match comes with a detailed explanation. The match card shows which specific interests contributed, what each one's niche weight was, how the category weights broke down, and what the confidence level is. You can see exactly why this person was suggested.
- Level 3 - the numerical breakdown. Users can inspect the raw numbers behind each signal, the multipliers applied, and the final weighted sum. This is the "show your working" level - a full audit trail for those who want it.
Most users will engage with Level 2 - the match card explanation. It is designed to be human-readable, not technical. Level 1 is for those who want to understand the system conceptually. Level 3 is for the data-literate users (the "Curators" from the brand personas) who want to verify the maths for themselves.
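To make Level 1 concrete, here is a minimal sketch of how a score with the published structure could be computed. Everything here - the names, the normalisation step, and the example weights - is illustrative, not the production implementation:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """One shared interest between two users."""
    name: str
    commonality: float    # 0..1: how strongly both users engage with it
    niche_weight: float   # >1 for rare interests, ~1 for ubiquitous ones
    signal_weight: float  # per-category weight (SignalWeight in the formula)

def affinity_score(shared: list[Signal], confidence: float) -> float:
    """Weighted sum of shared signals, scaled by confidence (0..1)."""
    raw = sum(s.commonality * s.niche_weight * s.signal_weight for s in shared)
    # Squash the open-ended sum into 0..1 before applying confidence.
    # The constant is an assumed tuning parameter, not a documented value.
    normalised = raw / (raw + 5.0)
    return normalised * confidence

# One rare overlap (3.8x niche weight) contributes far more than a generic one.
shared = [
    Signal("Lingua Ignota", commonality=0.9, niche_weight=3.8, signal_weight=1.0),
    Signal("Taylor Swift", commonality=0.9, niche_weight=1.0, signal_weight=1.0),
]
print(f"{affinity_score(shared, confidence=0.8):.0%}")  # -> 37%
```

The point is not the exact numbers but that every term in the output is traceable to a named input - the property the match card depends on.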
No mainstream dating app offers any of these levels. The closest is OkCupid's compatibility percentage, which gives you a number but not the reasoning behind it.
The match card: transparency in practice
The match card is the primary interface for transparency in Affinity Atlas: a single card that pairs the Affinity Score with the reasoning behind it.
Every element on this card answers a question:
- "Why was this person suggested?" - The top contributing signals show exactly which shared interests drove the match.
- "How compatible are we really?" - The Affinity Score gives a percentage, and the category breakdown shows where that number comes from.
- "How reliable is this score?" - The confidence indicator shows how much data the score is based on. A match with 412 shared signals is more trustworthy than one with 12.
- "What makes this match special?" - The niche weight multipliers highlight the rare overlaps that are the strongest compatibility signals. A 3.8x niche weight on Lingua Ignota tells you this is not a generic match.
What the match card does not show
Transparency does not mean showing everything. The match card deliberately excludes:
- Your full library. Other users see the overlap, not your complete Spotify history or Steam library.
- Negative signals. The card focuses on what you share, not what you do not. If you have no overlap in books, books simply will not appear as a contributing category - it does not show "0% book compatibility" as a discouraging metric.
- Behavioural metadata. How often you listened to an artist, how many hours you played a game, when you last synced - this data informs the score but is not exposed on the card. The principle is: show the meaning, not the metadata.
- Other people's data. You only see data from integrations both you and your match have connected. If they connected Spotify but not Steam, you will not see gaming data on their card, even if you have connected Steam yourself.
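The last rule is simple to state in code. A sketch, assuming each user's connected integrations are represented as a set of category names (hypothetical identifiers):

```python
def visible_categories(yours: set[str], theirs: set[str]) -> set[str]:
    """A category appears on a match card only if BOTH users connected it."""
    return yours & theirs

# You connected Spotify and Steam; they connected only Spotify.
print(visible_categories({"spotify", "steam"}, {"spotify"}))  # {'spotify'}
```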
The trade-offs of showing your working
Transparency is not free. There are real trade-offs, and Affinity Atlas has to navigate them deliberately.
Gaming risk
If users know that niche interests carry higher weight, could they fake niche taste? Could someone add obscure artists to their Spotify to inflate their compatibility with niche-heavy users?
Mitigation: Commonality is not binary. The system factors in listening frequency, recency, and depth - not just presence. Adding Lingua Ignota to your library but never actually listening produces a near-zero commonality signal. The niche weight is high, but multiplied by near-zero engagement, it contributes almost nothing. Gaming would require genuinely listening to obscure music for extended periods - at which point, it is no longer gaming. It is just developing taste.
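A sketch of what engagement-weighted commonality could look like. The curves and constants are assumptions chosen to illustrate the behaviour, not the documented model:

```python
import math

def commonality(plays: int, days_since_last: int, hours: float) -> float:
    """Engagement-weighted commonality in 0..1.

    Mere presence in a library scores near zero; sustained, recent
    engagement is what moves the number.
    """
    frequency = 1 - math.exp(-plays / 50)      # saturates with play count
    recency = math.exp(-days_since_last / 90)  # decays over ~3 months
    depth = 1 - math.exp(-hours / 20)          # saturates with listening hours
    return frequency * recency * depth

# Gaming attempt: added to the library, never really listened to.
print(f"{commonality(plays=1, days_since_last=200, hours=0.1):.4f}")  # ~0.0000
# Genuine engagement over months.
print(f"{commonality(plays=120, days_since_last=3, hours=40):.4f}")   # ~0.7605
```

Multiplying the three factors means any one of them collapsing to zero collapses the whole signal - exactly the property that makes library-stuffing unprofitable.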
Information overload
Not every user wants a detailed technical breakdown. For some, a percentage and a few shared interests is enough. For others, the raw numbers are the whole point.
Mitigation: The three-level transparency model handles this. Level 2 (the match card) is the default - visual, human-readable, and concise. Level 3 (raw scores) is opt-in, accessible via a "View raw scores" link for those who want it. The system adapts to the user, not the other way around.
Intellectual property exposure
By publishing the formula and explaining the matching logic, Affinity Atlas makes it easy for anyone to replicate the approach. A well-resourced competitor could implement the same system.
Position: This is a deliberate choice. Affinity Atlas is a passion project. The goal is to demonstrate that transparent, niche-weighted matching works - not to hoard the idea. If a larger platform adopted this approach, that would be a win for users, even if it is a competitive loss for Affinity Atlas. Transparency is the product, not the packaging around a secret product.
Expectation management
When you show a 78% compatibility score with a detailed breakdown, users take it seriously. If the match does not work out, the perceived failure is sharper because the system made a specific, visible claim.
Mitigation: The match card includes context. The confidence indicator sets expectations: a high-confidence 78% based on 400+ signals is different from a low-confidence 78% based on 15 signals. The system is honest about what it knows and what it does not.
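One plausible shape for that confidence indicator - an assumption for illustration, not the documented formula - is a saturating curve over the shared-signal count:

```python
def confidence(shared_signals: int, halfway: int = 50) -> float:
    """Confidence in 0..1 that rises with signal count and saturates.

    `halfway` is the count at which confidence reaches 0.5; the default
    is an illustrative tuning value.
    """
    return shared_signals / (shared_signals + halfway)

print(f"{confidence(412):.0%}")  # 400+ signals -> 89% confidence
print(f"{confidence(15):.0%}")   # 15 signals   -> 23% confidence
```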
What explainability research tells us
The field of Explainable AI (XAI) has been growing rapidly, driven by both regulatory pressure and genuine demand for trustworthy systems. The research consistently supports the approach Affinity Atlas is taking.
Explanations improve accuracy
A 2024 study that applied LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) to recommendation systems found that explainability did not just improve user understanding - it improved the accuracy of the recommendations themselves. The act of building explainable systems forces designers to understand their own models better, which leads to better models.
Transparency builds trust asymmetrically
Trust research consistently shows that bad experiences destroy trust faster than good experiences build it. In dating apps, this is critical: a single obviously bad match can undermine months of good ones. Wharton's research on AI trust in dating contexts found that explanations help cushion bad outcomes. When a match does not work out but the user can see why it was suggested ("you both love the same obscure game and share 4 niche beer preferences"), the system's credibility survives the failure. Without explanation, a bad match feels like evidence that the system is broken.
Transparency reduces algorithmic anxiety
Research from Penn State's Center for Socially Responsible AI found that users experience genuine anxiety about opaque dating algorithms. Users wonder whether the system is working against them, whether they are being deprioritised for not paying, whether something about their profile is being penalised. Transparency directly addresses this anxiety by replacing speculation with information.
The declining standard
A 2025 Stanford analysis found that AI companies now average just 40 out of 100 on transparency, marking a significant decline from the previous year. The industry is moving in the wrong direction. Dating apps, which deal with some of the most sensitive decisions in people's lives, are following this trend rather than challenging it.
Affinity Atlas is built on the premise that this trend is wrong. Not just ethically wrong - practically wrong. Transparency produces better systems, more trust, and better outcomes. The research supports it. The user demand is there. The only thing standing in the way is the business model - and Affinity Atlas does not have that constraint.
🔍 The core principle: If you cannot explain why two people were matched, you do not understand your own system well enough. Affinity Atlas is designed so that every match can be explained - not as an afterthought, but as a fundamental requirement of the architecture.
See transparent matching in action
The interactive demo shows real match cards with full breakdowns. No black boxes. No mystery. Just compatibility you can see.
Try the demo