Why dating data is uniquely sensitive
Dating apps are not just "profiles". They often contain or infer precise location, sexual orientation, relationship intent, private messages, and behavioural patterns - all of which can be exploited for harassment, blackmail, stalking, and discrimination:
- Location. Not just where you live - where you are right now, and when you are alone.
- Identity and intent. Preferences, kinks, health disclosures, and what you are looking for in a relationship.
- Social exposure. Who sees you, who you see, who you reject - and how that affects self-esteem.
- Inference. Apps can infer religion, politics, schedules, and vulnerability from patterns.
The current privacy record
Mozilla’s 2024 Privacy Not Included review argues that dating apps have gotten worse for privacy, not better. The EFF makes a narrower point: platforms often treat "consent" as a checkbox, not a constraint.
That matters because dating data is unusually actionable. If a retail email list leaks, you get spam. If a dating dataset leaks, people can be outed, tracked, or targeted. If a harassment report is mishandled, a user can be forced to re-encounter someone they reported.
⚠️ Takeaway: In dating, a privacy failure is rarely "just" a data point. It can be a personal safety event.
Why "sensitive" is not just a legal label
In GDPR terms, some information is categorised as "special category" personal data (for example, data revealing sexual orientation). Even when a field is not explicitly labelled, dating systems routinely infer it from behaviour. The result is the same: decisions about collection and retention carry real downside risk for users.
Good privacy posture in dating is not primarily about a policy document. It is about reducing the blast radius of inevitable failures: fewer raw data types stored, less retention time, fewer internal access paths, and a clear user-facing control surface.
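To make "less retention time" concrete, here is a minimal sketch of retention as an engineering constraint, assuming a Python backend. The type names, budgets, and the `RETENTION` / `is_expired` helpers are hypothetical, not drawn from any particular app:

```python
# A minimal sketch of retention-as-configuration (all names hypothetical).
# The idea: every stored data type must declare a TTL up front, and a
# scheduled job purges anything past it - shorter retention by default.
from datetime import datetime, timedelta, timezone

# Hypothetical per-type retention budgets. A data type with no entry
# here simply cannot be persisted.
RETENTION = {
    "message_body": timedelta(days=90),
    "coarse_location": timedelta(days=7),
    "derived_signal": timedelta(days=365),
}

def is_expired(data_type: str, stored_at: datetime) -> bool:
    """Return True if a record has outlived its retention budget."""
    ttl = RETENTION.get(data_type)
    if ttl is None:
        # Unknown types are treated as already expired: fail closed.
        return True
    return datetime.now(timezone.utc) - stored_at > ttl

# Example: a 100-day-old message body is past its 90-day budget.
old = datetime.now(timezone.utc) - timedelta(days=100)
assert is_expired("message_body", old)
```

The design point is that retention lives in code, where it is reviewable and enforceable, rather than only in a policy document.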
Where most apps fail (a simple checklist)
- Overcollection. Collecting data "just in case" it is useful later (see the collection-gate sketch after this list).
- Overretention. Keeping raw messages, location history, or identifiers indefinitely.
- Opaque sharing. Passing data to advertisers or data brokers with vague disclosures.
- Weak user controls. Deletion that is hard to find, partial, or not auditable.
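The first failure is the easiest to prevent structurally. As a hedged illustration, here is one way a server-side allowlist can make overcollection visible and rejected by default - the field names and the `sanitize_profile_update` helper are hypothetical:

```python
# A minimal sketch of a collection gate (hypothetical names throughout).
# Instead of accepting whatever the client sends "just in case", the
# server keeps an explicit allowlist and drops everything else.
ALLOWED_PROFILE_FIELDS = {"display_name", "age", "bio", "photo_ids"}

def sanitize_profile_update(payload: dict) -> dict:
    """Keep only fields we have a declared purpose for storing."""
    unknown = set(payload) - ALLOWED_PROFILE_FIELDS
    if unknown:
        # Surfacing the rejection (rather than silently storing) keeps
        # overcollection visible in logs and in code review.
        print(f"dropping undeclared fields: {sorted(unknown)}")
    return {k: v for k, v in payload.items() if k in ALLOWED_PROFILE_FIELDS}

# Example: an advertising identifier sent alongside a bio edit is discarded.
update = sanitize_profile_update({"bio": "hiker", "device_ad_id": "abc-123"})
assert "device_ad_id" not in update
```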
What privacy-by-design looks like
- Data minimisation as an engineering constraint - collect the minimum, for the shortest time.
- Derived signals - process rich data, store minimal compatibility features.
- Consent-first integrations - explicit opt-in, granular scopes, revocable by design (see the consent sketch after this list).
- Explainability without oversharing - show overlap and meaning, not raw logs.
- See and delete what we hold - a clear place to view your stored signals in human terms, and delete them so they are gone.
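As flagged in the consent item above, here is a minimal sketch of what "revocable by design" can mean in code. The `ConsentLedger` class and scope strings are illustrative assumptions, not a real API:

```python
# A minimal sketch of granular, revocable consent scopes (all names
# hypothetical). Each integration needs an explicit scope, and revoking
# a scope takes effect at the access check, not in a policy document.
from dataclasses import dataclass, field

@dataclass
class ConsentLedger:
    granted: set[str] = field(default_factory=set)

    def grant(self, scope: str) -> None:
        self.granted.add(scope)

    def revoke(self, scope: str) -> None:
        self.granted.discard(scope)

    def allows(self, scope: str) -> bool:
        return scope in self.granted

ledger = ConsentLedger()
ledger.grant("share:derived_signals")           # explicit opt-in
assert ledger.allows("share:derived_signals")
assert not ledger.allows("share:raw_messages")  # never granted, never shared
ledger.revoke("share:derived_signals")          # revocable by design
assert not ledger.allows("share:derived_signals")
```

Because every outbound share has to pass `allows()`, a revoked scope cannot keep flowing to a partner through a stale export path.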
Where Affinity Atlas fits
Affinity Atlas is unusually data-hungry, but that is exactly why it has to be unusually disciplined. The model is: process rich behavioural data to compute compatibility, then store and share only derived, human-meaningful signals.
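One way to picture that model - as a sketch under stated assumptions, not Affinity Atlas's actual pipeline (`derive_signals` and its fields are invented for illustration):

```python
# A minimal sketch of "process rich, store derived". Raw events exist
# only inside the function; what persists is a small set of coarse,
# human-meaningful features - enough to explain a match, not enough
# to reconstruct a conversation.
def derive_signals(raw_events: list[dict]) -> dict:
    """Reduce raw behavioural events to coarse, explainable features."""
    topics = {e["topic"] for e in raw_events if e.get("topic")}
    reply_gaps = [e["reply_gap_s"] for e in raw_events if "reply_gap_s" in e]
    return {
        "shared_topics": sorted(topics)[:5],
        "responsiveness": ("fast" if reply_gaps and
                           sum(reply_gaps) / len(reply_gaps) < 3600
                           else "slow"),
    }

events = [
    {"topic": "hiking", "reply_gap_s": 420},
    {"topic": "cooking", "reply_gap_s": 900},
]
signals = derive_signals(events)  # only `signals` is stored; `events` is not
print(signals)  # {'shared_topics': ['cooking', 'hiking'], 'responsiveness': 'fast'}
```

The payoff is blast-radius reduction: a breach of the signals store exposes coarse buckets, not message logs or location trails.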
If you have not read it, the core deep dive is: Privacy by Design in a Data-Hungry App.
Privacy is not a policy. It is architecture.
In sensitive domains, your privacy model has to hold up under worst-case assumptions - not best-case intentions.