The problem isn’t (usually) that someone’s going to come and make it a one-on-one problem for you. It’s a societal-level problem.
Say, for instance, that you give your DNA to 23andMe, essentially paying them to take your most permanent information. To the company you’re inconsequential as an individual, but your data, aggregated with millions of other people’s, is their entire business. It’s the thing they have to sell, and sell it they do.
Now suppose that in the data set you’ve volunteered to contribute to, there are correlations to be found between Disease A and Diseases B, C, and D. This is the stuff of research careers, so eventually someone finds those correlations, builds a bit of a career on them, and the findings have a moment in the sun before the researcher moves on to something else.
There are laws against forcing employees to disclose personal health information. Your employer can’t demand your DNA to find out how expensive you and your kids will be to insure. Suppose, however, that you need some time off because you’re dealing with Disease B. It’s not a terribly serious disease, but it flares up now and then. HR now knows you have B, and HR’s statistical modeling, fed the published correlations, notes that B means a decreased risk of A but an increased risk of C and D, and D happens to be a very expensive disease. You don’t have D. You’ve never seen a doctor about D, never filed a claim about D, maybe never even heard of D. But you are an at-will employee, things are already very dicey with your firm’s healthcare costs, and soon you are a former employee. You won’t know why you were terminated, because nobody’s required to tell you: you’re at-will.
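To make the mechanism concrete, here’s a minimal, entirely hypothetical sketch of the kind of crude flagging model an HR department or its benefits consultant could run. The diseases, relative-risk numbers, cost figures, and threshold are all invented for illustration; the point is only how little information it takes, and that no claim about D ever enters the picture.

```python
# Hypothetical illustration only. All diseases, correlations, costs, and
# thresholds are invented; no real model, employer, or dataset is implied.

# Published correlations: relative risk of each other disease, given
# that the person is known to have Disease B (invented numbers).
RELATIVE_RISK_GIVEN_B = {"A": 0.6, "C": 1.8, "D": 2.4}

# Expected annual claim cost per disease, in dollars (invented numbers).
EXPECTED_COST = {"A": 2_000, "C": 5_000, "D": 60_000}

def flag_employee(known_conditions: set[str], cost_threshold: float = 20_000) -> bool:
    """Return True if the projected extra cost crosses the threshold.

    The employee never reported C or D; the flag rests entirely on a
    published correlation with B.
    """
    if "B" not in known_conditions:
        return False
    # Sum the excess expected cost from every elevated-risk disease.
    projected_extra = sum(
        (rr - 1.0) * EXPECTED_COST[disease]
        for disease, rr in RELATIVE_RISK_GIVEN_B.items()
        if rr > 1.0
    )
    return projected_extra > cost_threshold

# One leave request mentioning Disease B is enough to trip the flag:
# (0.8 * 5,000) + (1.4 * 60,000) = 88,000 > 20,000.
print(flag_employee({"B"}))  # True
```

Nothing in that sketch is sophisticated or even statistically defensible, which is rather the point: the correlation does the work, and the person it lands on never sees it run.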
That’s a relatively benign way in which use of the dataset can go wrong. (Though from the perspective of the actuaries, it hasn’t gone wrong at all: it’s worked just right.) There’s currently a very large contingent of autistic people in Britain, some of them researchers themselves, freaking out about Simon Baron-Cohen’s recently launched genetic-research program into autism, which is collecting the DNA of 10,000 autistic people. They fear that between bias against autistics and autism, the usual profit and career motives, and their own relative political helplessness, the database will be open to interpretations that lead to their further abuse and suffering. (Consider current autism therapies that allow school administrators to apply electric shocks to autistic students. I’ve been hearing impassioned argument about ABA for at least fifteen years.)

I’m not sure why this study in particular has attracted so much attention, unless it’s that it’s Baron-Cohen’s; there are already much larger datasets out there and available. But again, the issue is that large data that’s also highly personal can be readily used in persecution, including polite forms of persecution that come with all the right university and medical-center stickers. And unfortunately it doesn’t help the persecuted if, seventy years later, some people notice what happened and there’s general agreement that it was very, very wrong and should never happen again.
There’s potential for the problems to be direct and personal, too, though if you are utterly conventional, able to melt into the majority, and a go-along in all ways, this probably won’t be a worry. Considering, though, that facial recognition is already used to identify entire crowds’ worth of protesters, and that such information is already connected to social media, credit-card, transit, and city-security-camera activity, I can’t imagine that the ability to connect all of that to people’s genetic information would be a good thing. So if you think you’ll ever be in a protest crowd, or writing a sharply critical letter to an editor, or on the wrong side of an autocrat, I would not go handing out information you can’t disconnect from yourself.