What makes the internet addictive?
The next hellsites. What makes us stay online. Bluesky. The allure of simplicity and the inexorable pull toward complex design features eroding our self-control.
There’s been a massive exodus from X/Twitter to Bluesky, especially among health, science, and other academic types. In my own social media use—more lurking than anything else—I’ve learned a fair amount and met good people by following #MedTwitter and recovery science communities, so I’m giving Bluesky a try (if you’re there, follow me and drop a line). For now, at least, it does seem to be less angry, give people more control over their content, and in general have a higher signal-to-noise ratio. It is a decent platform for discovering writing I might not have seen otherwise. It has a simple, reverse-chronological feed, as opposed to one that’s algorithmic and peppered with ghastly ads. It does not appear to be run by a miserable, angry person vomiting his pain and insecurity upon the rest of the world. Do I like social media, or the constant shuffle from platform to platform that seems to be a feature of the epoch we were born into? Of course not. (A funny bio I saw there: “christ jesus another platform.”) But Bluesky does seem to have some value for connection and information discovery.
Can Bluesky resist the seemingly inexorable tractor beam pull of internet negativity? Can it stay friendly, pragmatic, scholarly? Is there something specific about Bluesky’s design that will help? For that matter, what specific design features make internet platforms hard to resist?
When activities move online, their harms tend to get worse. This is a big focus of addiction research today. Consider gambling: online gambling allows for 24/7 access, quicker game cycles, the disinhibition that comes with anonymity, and powerful algorithmic nudges (e.g., “happy hours” or other personalized push notifications). There are analogous problems with online shopping (reducing friction with things like one-click purchases and saving payment methods across platforms), social media (endless scrolling), news consumption, streaming content, porn, even dating. But especially with the rise of concern about social media, exemplified by Australia’s recent ban on social media for those under 16 years old, there’s also increasing debate about what exactly is going wrong.
Many people accuse technology of being “addictive” in the way certain drugs are assumed to be, but the reality is more complex and depends on specific design features. The clumsy analogy to substance problems goes like this: technology that causes behavioral problems must be inherently “addictive” in a way analogous to alcohol or cocaine. By this logic, smartphones, or video games, or social media—or perhaps the specific type of social comparison that social media engenders, or specific design features of only certain video games—must trigger some sort of dopamine-related learning process in a way that is significantly different from other types of stimuli. But the examples above show that a variety of design features are involved, and they vary and combine across categories of online behavior as well as across platforms within those categories. Gambling on cryptocurrency is similar to, but distinct from, betting on sports, and even different sports betting platforms have different nudges and design components that seem to erode control. We also can’t reduce it all to dopamine-related processes: perhaps some design features involve reward- or salience-based learning, but that’s not the whole story.
We need more conceptual and operational clarity here. Recent research and commentary have organized and identified specific technology design features that can weaken our self-control. One of the better descriptions, which I saw presented at Lisbon Addictions a few weeks ago by lead author Maèva Flayelle, is a classification of six technological design features that promote addictive online behavior. Let me try to summarize what I took from their presentation and paper:
Reinforcement schedules: We know from a long line of research that variable and unpredictable rewards promote more compulsive use. The clearest random-ratio reinforcement schedules are seen in gambling (even at the most basic level, in the way the outcome of each bet is independent of previous bets), video games, and social networking. (There’s a small simulation of this idea just after the list.)
Personalized triggers: Algorithms, personalized ads and push notifications, pop-up product recommendations, etc.
Overvaluation of positive outcomes: Manipulating anticipated outcomes, such as by offering improved odds, or all the things that boost FOMO (like social networks that delete stories or other content after a certain time).
Features disrupting deliberation: Autoplay, “smart download” in streaming services, messages that convey a sense of urgency (like countdown meters in e-commerce), and anything else that encourages people to act quickly and interferes with reflection.
Partial goal fulfillment: The key example here is the notorious infinite scroll, but this also includes other ways online platforms move the goalposts for success, such as hard-to-reach video game rewards.
Cognitive biases: Other ways of exploiting mental shortcuts, like promoting a sense of exclusive content, playing up the importance of participation for popularity, or disguising losses as wins (playing a cheer for paying back $0.70 on a $1 bet, a net loss of $0.30).
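To make the reinforcement-schedule idea concrete, here is a minimal, purely illustrative Python sketch (the function names and numbers are mine, not from Flayelle and colleagues’ paper). It contrasts a fixed-ratio schedule, where every fifth action is rewarded, with a random-ratio schedule, where each action independently has a 1-in-5 chance of reward: the pattern behind slot machines, loot boxes, and the refresh-and-check loop of a social feed.

```python
import random

def fixed_ratio(n_actions, ratio=5):
    """Predictable: every `ratio`-th action pays out."""
    return [(i + 1) % ratio == 0 for i in range(n_actions)]

def random_ratio(n_actions, p=0.2):
    """Unpredictable: each action independently pays out with
    probability `p` (same expected payout rate as fixed_ratio)."""
    return [random.random() < p for _ in range(n_actions)]

random.seed(42)  # fixed seed so the illustration is reproducible
fr = fixed_ratio(30)
rr = random_ratio(30)

# Same average reward rate, very different feel: the fixed schedule is
# easy to predict and walk away from, while the random one keeps
# "maybe the next one" perpetually plausible.
print("fixed :", "".join("X" if hit else "." for hit in fr))
print("random:", "".join("X" if hit else "." for hit in rr))
print(f"payout rate -> fixed: {sum(fr)/len(fr):.2f}, random: {sum(rr)/len(rr):.2f}")
```

Run it a few times without the seed and the random line rearranges itself every time; behaviorally, that unpredictability is exactly what the research on variable reinforcement links to “one more try” compulsive use.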
This may not be the ideal classification, and it may not cut perfectly across different types of addictive behaviors. Some of these design feature definitions strike me as clearer than others. The deeper and more important point is that there are many pathways and processes that lead to addiction, and it’s good to recognize the psychological processes at play when we start to feel our self-control wane. For one person, FOMO might be the bigger problem. For others, it’s notifications, autoplay, and other nudges that disrupt deliberation.
Personally, I find the latter—the disruption of deliberation—far more disturbing these days. I feel like, for years now, I’ve been in a never-ending game of whack-a-mole, constantly turning off notifications and other behavioral triggers on my phone and computer, which always manage to sneakily turn themselves back on. Bluesky, for now, doesn’t seem to have very good customization of notifications, which I think is a shortcoming for people who want to engage with tech mindfully and consciously. But it could be a heck of a lot worse!
The practical takeaway here, I think, is that by understanding these design tricks, we can make better decisions about which technologies we really need, learn how to manage them better, and recognize our personal triggers. It’s not so much that any specific technology is “addictive” versus not, but rather that different features have the capacity to erode self-control in certain ways, for certain people. I’d like to think that viewing self-control in this thicker, less binary, more nuanced way helps us see it as a deeply human phenomenon, not just a battle against addiction, and therefore live more intentionally.
I’m starting to book next year’s guests for Flourishing After Addiction. If there’s someone you want to recommend, drop a line or a comment!
Otherwise, thanks for being here and talk to you soon.
If you’re finding this newsletter useful, please consider sharing this episode with someone else you think would benefit.
I would love to know how you define addiction—I’ve read your book twice so I don’t mean that. I mean: if there are different ways the internet addicts us, can you explain the different ways substances and behaviors do? Like some are dopaminergic or rewards-based. Do you have a simple way to break down types of addiction and the related behaviors/substances? Does that make sense? 🫂
If you haven’t interviewed her yet, I recommend Tracie Gardner for your podcast - she’s ED of the newly launched National Black Harm Reduction Network: https://www.nbhrn.org/about