Constrained Choices and Compliance
Irina Raicu is the director of the Internet Ethics program (@IEthics) at the Markkula Center for Applied Ethics. Views are her own.
A reader recently wrote to Wired magazine's "spiritual advice columnist," Meghan O'Gieblyn, noting that a streaming music app is "scarily good at predicting songs" the reader would like, and asking, "Does that make me boring?"
O'Gieblyn reframed the question: "I'm willing to bet," she wrote, "that your real anxiety is not that you're boring but that you're not truly free. If your taste can be so easily inferred from your listening history and the data streams of 'users like you' (to borrow the patronizing argot of prediction engines), are you actually making a choice?"
Later in the column, however, she noted that users like the questioner, who rely on services built around recommender algorithms, do make choices, but choices that are themselves shaped by those algorithms:
On [social media], we quickly scroll past posts that don't reflect our dominant interests, lest the all-seeing algorithm mistake our curiosity for invested interest. Perhaps you have paused, once or twice, before watching a film that diverges from your usual taste, or hesitated before Googling a religious question, lest it take you for a true believer and skew your future search results.
These are choices born of restriction; they are efforts to mollify the algorithm's rigid and inherently limited perspective, lest it go from merely oversimplifying our interests to being outright wrong about them.
For the subset of users who understand the impact of the algorithms well enough to try to appease them, the answer, then, is yes: that makes one boring; but the "that" is not the users' predictability but their compliance with the algorithms. In this scenario, it's the user who has been trained by the algorithm, not the other way around. The user acquiesces to being a shell of his or her "dominant interests," wary of the consequences of trying (or even learning about) new things.
While fully acknowledging this reality, O'Gieblyn writes that she doesn't "advise embracing the irrational or acting against your own interests" as a response. "It will not make you happy," she argues, "nor will it prove a point." Clive Thompson (who also often writes for Wired) disagrees, offering a different take on recommendation algorithms; he claims that acting out against the algorithms is not, in fact, against one's own real interests. As he puts it,
our truly quirky dimensions are never really grasped by these recommendation algorithms…. They're not wrong about us; but they're woefully incomplete. This is why I always get a slightly flattened feeling when I behold my feed, robotically unloading boxes of content from the same monotonous conveyor-belt of recommendations, catered to some imaginary marketing version of my identity. It's like checking my reflection in the mirror and seeing stock-photo imagery.
In other words, to offer a different answer to the Wired questioner who asked about the music app: the recommendation algorithms do make you boring, and static, if you allow them to do all the work of finding music or other "content" for you.
To break the false mirror that Thompson describes is therefore not to "embrace the irrational" but to try to embrace your full self. In his essay, Thompson offers a variety of suggestions for how one might go about "rewilding" one's imagination in an age of recommendation algorithms, while acknowledging that this requires more effort on our part.
If your music app's recommendations are too accurate, you might not be boring but just stuck in a rut, perhaps in need of a reminder that might arrive serendipitously: "You are under no obligation to remain the same person you were a year ago, a month ago, or even a day ago. You are here to create yourself, continuously."
It's important, also, to note that recommendation algorithms used in the context of, say, music streaming apps have very different societal impacts than those used in social media feeds or in news media outlets. The latter categories of recommenders have been accused of being partly responsible for increased social polarization, filter bubbles that impede understanding, radicalization, and other significant negative consequences that go far beyond making us "boring."
It seems only fitting to conclude this post with a couple of recommendations for further reading:
- a useful analysis and taxonomy published in 2020 by Silvia Milano, Mariarosaria Taddeo, and Luciano Floridi, titled "Recommender Systems and Their Ethical Challenges"
- a blog post by Claire Leibowicz, Connie Moon Sehat, Adriana Stephan, and Jonathan Stray, about research on recommender systems conducted by the Partnership on AI