There are no curses—only mirrors
The algorithms always know when I'm having a back flare-up, because I tell them. Lots of searches for "psoas" and "sacroiliac" and "piriformis" followed by "release" and "relax" and "pain-relief". Pick a feed - IG, the clock app, YT - and it will be serving me up physiotherapists and mobility specialists and occupational therapists of every stripe and discipline. And I will reinforce those suggestions and results by interacting with them (and a dismissal is an interaction, always).
Search engines, algorithmic recommendation engines, advertisers - they are able to serve up "relevant" things about us because their designers and engineers built them to (or in some cases, failed to build them not to) deliver certain outputs in response to certain kinds of prompts and signals, explicit and implicit.
It is de rigueur at the moment to have An Opinion on AI.
I have a lot of thoughts about this, because I've been doing the reading here for a long time, first as an amateur, then as a student, and now as a professional whose job could be defined as "always needing to be up to speed on the biggest things moving markets right now and having a plan for what to do in response, journalistically".
My number one thought has been, practically from the beginning, about what is presented as an inevitable tradeoff between utility and privacy. This is not a new presentational dichotomy, and it is certainly not one specific to AI.
The Disney+ app wants me to tell it a date of birth and a gender with which it should associate my account so it can better deliver targeted advertising. The utility argument here starts at "user experience" and ends at "use of the service itself" since you cannot opt out of providing a date of birth, though you can "prefer not to say" for gender.
A majority of the cars available for hailing on services like Uber and Lyft use in-vehicle surveillance cameras, and in some cases audio-recording devices - and there is no way to know whether that will be true for your ride until the car shows up. The utility argument here is "safety".
To sign up for an OpenAI account, you have to provide a phone number "for security reasons". Further, OpenAI does not support "use of landlines, VoIP providers, or Google Voice at this time." Do you know how much information your phone number reveals about you to advertisers and data brokers? It is the reason every retailer on the planet will offer you some nominal incentive if you "sign up for texts". Both Facebook and Twitter used the phone numbers people provided for multi-factor authentication to serve them advertising. "An error," Twitter said.
These considerations - do you want privacy or security? A "better advertising experience" or irrelevant spam? Access to the buzziest consumer-facing AI front-end of the moment or not? - are presented as either/or, if they are presented as choices at all.
Our job, as people, is to first be aware that someone, usually a very large or very well-funded corporation, is benefiting enormously from the "opt-in" defaults. And second, to ask ourselves why that is, and whether these are really the only options.
Attribution:
This alone is what I wish for you: knowledge.
To understand each desire has an edge,
to know we are responsible for the lives
we change. No faith comes without cost,
no one believes without dying.
There are no curses, only mirrors
held up to the souls of gods and mortals.
— from Demeter's Prayer to Hades by Rita Dove