Steve Rose, PhD

Is Social Media Making Us Less Social?


Written by Steve Rose



In an age where we are becoming more connected through social media every day, it sometimes feels like we are also becoming less social.

Why go through all of the inconvenience of meeting up in person when you can simply catch up online?

Within the last decade, technology has profoundly shifted the nature of human communication.

Some say we are “hyper-social,” always connected and communicating with multiple people at the same time. Others say we have become “anti-social,” glued to our devices and lacking interpersonal skills. So which is it?

Is social media making us less social?

Social media makes us less social when we use it to compare ourselves to others, contributing to higher levels of loneliness and lower well-being among frequent users. It can make us more social when we use it to connect with others.

Let’s take a look at the research.

Also, if you or someone you know is struggling with mental health issues, you can check out my resource page for suggestions on how to find help.

Social Media Contributes to Social Isolation

The first study looking at this phenomenon was published in 1998, around the time when many people were starting to use the internet.

The researchers followed 169 people during the first two years of their internet use to determine if this new technology made them more social or less social, finding:

“…greater use of the Internet was associated with declines in participants’ communication with family members in the household, declines in the size of their social circle, and increases in their depression and loneliness.”

This was seen as quite the paradox, given that the individuals were using the internet extensively as a communication technology.

A 2004 study comparing internet use to face-to-face interaction reached a similar conclusion, stating:

…the Internet can decrease social well-being, even though it is often used as a communication tool.

Has anything changed since then?

Ten years later, a 2014 study on college students suffering from internet addiction found:

Results show that excessive and unhealthy Internet use would increase feelings of loneliness over time…[.] This study also found that online social contacts with friends and family were not an effective alternative for offline social interactions in reducing feelings of loneliness.

In her recent book, iGen, Jean Twenge writes about the generation born after 1994, finding high rates of mental health issues and isolation:

“A stunning 31% more 8th and 10th graders felt lonely in 2015 than in 2011, along with 22% more 12th graders”…[.] All in all, iGen’ers are increasingly disconnected from human relationships.

She argues that increasing screen time and decreasing in-person interaction leave iGen lacking social skills:

“In the next decade we may see more young people who know just the right emoji for a situation—but not the right facial expression.”

A 2016 study comments on this generational phenomenon, stating:

It is surprising then that, in spite of this enhanced interconnectivity, young adults may be lonelier than other age groups, and that the current generation may be the loneliest ever.

The correlation between internet use and isolation is fairly well established in the literature. But let’s not paint the whole internet with the same brush.

A 2014 study highlights the psychological costs and benefits derived from social media use, stating:

…online tools create a paradox for social connectedness. On one hand, they elevate the ease in which individuals may form and create online groups and communities, but on the other, they can create a source of alienation and ostracism.

It turns out the answer may be a bit more complicated.

Let’s take a look at the specific factors that make the difference.

Social Media Can Be Social (If Used to Connect)

A 2016 study with the apt subtitle, “Why an Instagram picture may be worth more than a thousand Twitter words,” finds that image-based social media platforms like Instagram and Snapchat may decrease loneliness because of the higher levels of intimacy they provide.

Another 2016 study, looking specifically at Instagram use, found that it isn’t the platform that matters; it’s how the platform is used.

The researchers studied Instagram use among 208 undergraduate students, finding there was one thing that made all the difference: “the social comparison orientation.”

What is social comparison orientation?

It’s the tendency to compare yourself to others on social media. For example, you may find yourself passively scanning through an endless feed of finely curated photos, wishing you had a different body, a different job, a different life!

It’s the sense that everyone has it better than you, and that you’re missing out on all of the best events, vacations, and products.

Students who rated high on social comparison orientation were more likely to widely broadcast their posts in an attempt to gain status. Students who rated low were more likely to use the platform to connect with others meaningfully.

A 2008 study on internet use among older adults supports this distinction, finding:

…greater use of the Internet as a communication tool was associated with a lower level of social loneliness. In contrast, greater use of the Internet to find new people was associated with a higher level of emotional loneliness.

Using the internet as a communication tool can decrease loneliness.

Experimental evidence from a 2004 study highlights this by measuring participants’ loneliness at multiple intervals as they engaged in an online chat. The researchers concluded:

Internet use was found to decrease loneliness and depression significantly, while perceived social support and self-esteem increased significantly.

Although chatting online can decrease loneliness, what about using social media platforms to post status updates?

A 2012 study conducted an experiment to determine whether posting a Facebook status increases or decreases loneliness. Yes, this is an actual experiment.

The researchers told one group of participants to increase their number of status updates for one week. They didn’t give any instructions to a second control group. Results revealed:

(1) that the experimentally induced increase in status updating activity reduced loneliness, (2) that the decrease in loneliness was due to participants feeling more connected to their friends on a daily basis, and (3) that the effect of posting on loneliness was independent of direct social feedback (i.e., responses) by friends.  

These results may seem to contradict the previous finding that social media broadcasting is correlated with increased loneliness, but there is a crucial difference: the social comparison orientation.

In this experiment, the researchers did not differentiate between users with high or low levels of social comparison. Participants told to update their status more frequently were not instructed to scan their news feeds more often, nor was their social media use manipulated to alter their level of social comparison.

So what is the key lesson here?

Using social media in a way that connects us with others can make us less lonely and more social.

Unfortunately, as social media use increases, we are becoming lonelier.

This trend suggests we may not be using social media in the most social ways, instead comparing ourselves to others. In addition, we may be sacrificing in-person interaction for the convenience of social media interaction. Both factors increase the likelihood of experiencing social isolation.

If you are interested in reading more on the psychology of social media, you can check out my comprehensive post on the topic here: Why We Are Addicted To Social Media: The Psychology of Likes.

In that article, I go deep into the research on what keeps our brains hooked on social media likes and how you can use social media in a healthier way.


Struggling with an addiction?

If you’re struggling with an addiction, it can be difficult to stop. Gaining only short-term relief at a long-term cost, you may start to wonder whether it’s even worth it anymore. If you’re looking to make some changes, feel free to reach out. I offer individual addiction counselling to clients in the US and Canada. If you’re interested in learning more, you can send me a message here.

Other Mental Health Resources

If you are struggling with other mental health issues or are looking for a specialist near you, use the Psychology Today therapist directory here to find a practitioner who specializes in your area of concern.

If you require a lower-cost option, you can check out BetterHelp.com. It is one of the most flexible forms of online counseling. Its main benefits are lower costs, high accessibility through its mobile app, and the ability to switch counselors quickly and easily until you find the right fit.

*As an affiliate partner with Better Help, I receive a referral fee if you purchase products or services through the links provided.

As always, it is important to be critical when seeking help, since the quality of counselors is not consistent. If you are not feeling supported, it may be helpful to seek out another practitioner. I wrote an article on things to consider here.




15 Comments

taurusingemini

That’s just it: people often mistake being connected on a more personal level with the total number of “Friends” they have on FB or MySpace or whatever OTHER forms of social networking, and they often neglect to realize that face-to-face interaction is what makes these connections between people more intimate…

Steve Rose

Exactly. Social media can supplement your social life if used to connect, but can’t be a substitute for it. Thanks for the comment! Great to connect with you again. It has been a while since I’ve posted.

Yeah but now, modern-day people tend to use social media as their only FORM of connection; it’s like if you don’t exist on FB or other social networking sites, you practically don’t exist at all!

With the trend toward increasing loneliness, it certainly suggests social media is replacing in-person interaction.

odonnelljack52

One of the damning statistics on the recent programme Planet Children was that 97% of primary school children were taken to school by an adult. They spend less time outside than those in prison. Our kids are getting fatter. They live in a bubble and social media swells that bubble and the vision of themselves becomes increasingly distorted. My grandkid loves phones because mum and dad always have their noses in their phones. The grandkid isn’t content with a kid-on phone. She wants the real one, and she’s just over a year old. We create our own hell, but our kids jump in with both feet. Why shouldn’t they? Mum and dad do it and it’s vastly entertaining. Social media swallows time. Why am I adding to it here? God knows.

Thanks for sharing this fact and your personal experience! I think you might be interested in this book on the subject of bubble-wrapped children: Free-Range Kids, How to Raise Safe, Self-Reliant Children (Without Going Nuts with Worry)

Rosaliene Bacchus

Thanks for raising this issue, Steve. I’ve tried, without success, to arrange a lunch-meet with a dear friend–just half-hour away by bus–who has fallen victim to FB’s false promise of connection. Since I’ve long escaped from FB-addiction, I no longer know how she’s doing.

Glad to see you’ve been able to gain a sense of control! I hope your friend is well and wish her all the best.

Rev. Joe Jagodensky, SDS.

In a restaurant, I went to a couple both staring deeply and silently at their phones and said, “That’s true love.” They laughed.

lol! Nice one!

hatsunecato

Not up on the research, but it is fascinating. Might we be getting the correlation confused? Could it be that people who are more lonely are more likely to spend time on social media in search of connection? Is this controlled in the research?

From the research I’ve seen so far, it seems that social anxiety is the confounding variable between loneliness and increased social media use. Also, Jean Twenge looks at this question in her book iGen and finds that the research supports the hypothesis that social media use leads to increased loneliness. A couple of experiments I cited here use a control and don’t support that hypothesis, but they are fairly limited because they only look at narrow forms of social media use like status updates or chatting with an anonymous person.

Steve

Correctly said.




How Harmful Is Social Media?

By Gideon Lewis-Kraus

A social-media battlefield

In April, the social psychologist Jonathan Haidt published an essay in The Atlantic in which he sought to explain, as the piece’s title had it, “Why the Past 10 Years of American Life Have Been Uniquely Stupid.” Anyone familiar with Haidt’s work in the past half decade could have anticipated his answer: social media. Although Haidt concedes that political polarization and factional enmity long predate the rise of the platforms, and that there are plenty of other factors involved, he believes that the tools of virality—Facebook’s Like and Share buttons, Twitter’s Retweet function—have algorithmically and irrevocably corroded public life. He has determined that a great historical discontinuity can be dated with some precision to the period between 2010 and 2014, when these features became widely available on phones.

“What changed in the 2010s?” Haidt asks, reminding his audience that a former Twitter developer had once compared the Retweet button to the provision of a four-year-old with a loaded weapon. “A mean tweet doesn’t kill anyone; it is an attempt to shame or punish someone publicly while broadcasting one’s own virtue, brilliance, or tribal loyalties. It’s more a dart than a bullet, causing pain but no fatalities. Even so, from 2009 to 2012, Facebook and Twitter passed out roughly a billion dart guns globally. We’ve been shooting one another ever since.” While the right has thrived on conspiracy-mongering and misinformation, the left has turned punitive: “When everyone was issued a dart gun in the early 2010s, many left-leaning institutions began shooting themselves in the brain. And, unfortunately, those were the brains that inform, instruct, and entertain most of the country.” Haidt’s prevailing metaphor of thoroughgoing fragmentation is the story of the Tower of Babel: the rise of social media has “unwittingly dissolved the mortar of trust, belief in institutions, and shared stories that had held a large and diverse secular democracy together.”

These are, needless to say, common concerns. Chief among Haidt’s worries is that use of social media has left us particularly vulnerable to confirmation bias, or the propensity to fix upon evidence that shores up our prior beliefs. Haidt acknowledges that the extant literature on social media’s effects is large and complex, and that there is something in it for everyone. On January 6, 2021, he was on the phone with Chris Bail, a sociologist at Duke and the author of the recent book “ Breaking the Social Media Prism ,” when Bail urged him to turn on the television. Two weeks later, Haidt wrote to Bail, expressing his frustration at the way Facebook officials consistently cited the same handful of studies in their defense. He suggested that the two of them collaborate on a comprehensive literature review that they could share, as a Google Doc, with other researchers. (Haidt had experimented with such a model before.) Bail was cautious. He told me, “What I said to him was, ‘Well, you know, I’m not sure the research is going to bear out your version of the story,’ and he said, ‘Why don’t we see?’ ”

Bail emphasized that he is not a “platform-basher.” He added, “In my book, my main take is, Yes, the platforms play a role, but we are greatly exaggerating what it’s possible for them to do—how much they could change things no matter who’s at the helm at these companies—and we’re profoundly underestimating the human element, the motivation of users.” He found Haidt’s idea of a Google Doc appealing, in the way that it would produce a kind of living document that existed “somewhere between scholarship and public writing.” Haidt was eager for a forum to test his ideas. “I decided that if I was going to be writing about this—what changed in the universe, around 2014, when things got weird on campus and elsewhere—once again, I’d better be confident I’m right,” he said. “I can’t just go off my feelings and my readings of the biased literature. We all suffer from confirmation bias, and the only cure is other people who don’t share your own.”

Haidt and Bail, along with a research assistant, populated the document over the course of several weeks last year, and in November they invited about two dozen scholars to contribute. Haidt told me, of the difficulties of social-scientific methodology, “When you first approach a question, you don’t even know what it is. ‘Is social media destroying democracy, yes or no?’ That’s not a good question. You can’t answer that question. So what can you ask and answer?” As the document took on a life of its own, tractable rubrics emerged—Does social media make people angrier or more affectively polarized? Does it create political echo chambers? Does it increase the probability of violence? Does it enable foreign governments to increase political dysfunction in the United States and other democracies? Haidt continued, “It’s only after you break it up into lots of answerable questions that you see where the complexity lies.”

Haidt came away with the sense, on balance, that social media was in fact pretty bad. He was disappointed, but not surprised, that Facebook’s response to his article relied on the same three studies they’ve been reciting for years. “This is something you see with breakfast cereals,” he said, noting that a cereal company “might say, ‘Did you know we have twenty-five per cent more riboflavin than the leading brand?’ They’ll point to features where the evidence is in their favor, which distracts you from the over-all fact that your cereal tastes worse and is less healthy.”

After Haidt’s piece was published, the Google Doc—“Social Media and Political Dysfunction: A Collaborative Review”—was made available to the public . Comments piled up, and a new section was added, at the end, to include a miscellany of Twitter threads and Substack essays that appeared in response to Haidt’s interpretation of the evidence. Some colleagues and kibbitzers agreed with Haidt. But others, though they might have shared his basic intuition that something in our experience of social media was amiss, drew upon the same data set to reach less definitive conclusions, or even mildly contradictory ones. Even after the initial flurry of responses to Haidt’s article disappeared into social-media memory, the document, insofar as it captured the state of the social-media debate, remained a lively artifact.

Near the end of the collaborative project’s introduction, the authors warn, “We caution readers not to simply add up the number of studies on each side and declare one side the winner.” The document runs to more than a hundred and fifty pages, and for each question there are affirmative and dissenting studies, as well as some that indicate mixed results. According to one paper, “Political expressions on social media and the online forum were found to (a) reinforce the expressers’ partisan thought process and (b) harden their pre-existing political preferences,” but, according to another, which used data collected during the 2016 election, “Over the course of the campaign, we found media use and attitudes remained relatively stable. Our results also showed that Facebook news use was related to modest over-time spiral of depolarization. Furthermore, we found that people who use Facebook for news were more likely to view both pro- and counter-attitudinal news in each wave. Our results indicated that counter-attitudinal exposure increased over time, which resulted in depolarization.” If results like these seem incompatible, a perplexed reader is given recourse to a study that says, “Our findings indicate that political polarization on social media cannot be conceptualized as a unified phenomenon, as there are significant cross-platform differences.”

Interested in echo chambers? “Our results show that the aggregation of users in homophilic clusters dominate online interactions on Facebook and Twitter,” which seems convincing—except that, as another team has it, “We do not find evidence supporting a strong characterization of ‘echo chambers’ in which the majority of people’s sources of news are mutually exclusive and from opposite poles.” By the end of the file, the vaguely patronizing top-line recommendation against simple summation begins to make more sense. A document that originated as a bulwark against confirmation bias could, as it turned out, just as easily function as a kind of generative device to support anybody’s pet conviction. The only sane response, it seemed, was simply to throw one’s hands in the air.

When I spoke to some of the researchers whose work had been included, I found a combination of broad, visceral unease with the current situation—with the banefulness of harassment and trolling; with the opacity of the platforms; with, well, the widespread presentiment that of course social media is in many ways bad—and a contrastive sense that it might not be catastrophically bad in some of the specific ways that many of us have come to take for granted as true. This was not mere contrarianism, and there was no trace of gleeful mythbusting; the issue was important enough to get right. When I told Bail that the upshot seemed to me to be that exactly nothing was unambiguously clear, he suggested that there was at least some firm ground. He sounded a bit less apocalyptic than Haidt.

“A lot of the stories out there are just wrong,” he told me. “The political echo chamber has been massively overstated. Maybe it’s three to five per cent of people who are properly in an echo chamber.” Echo chambers, as hotboxes of confirmation bias, are counterproductive for democracy. But research indicates that most of us are actually exposed to a wider range of views on social media than we are in real life, where our social networks—in the original use of the term—are rarely heterogeneous. (Haidt told me that this was an issue on which the Google Doc changed his mind; he became convinced that echo chambers probably aren’t as widespread a problem as he’d once imagined.) And too much of a focus on our intuitions about social media’s echo-chamber effect could obscure the relevant counterfactual: a conservative might abandon Twitter only to watch more Fox News. “Stepping outside your echo chamber is supposed to make you moderate, but maybe it makes you more extreme,” Bail said. The research is inchoate and ongoing, and it’s difficult to say anything on the topic with absolute certainty. But this was, in part, Bail’s point: we ought to be less sure about the particular impacts of social media.

Bail went on, “The second story is foreign misinformation.” It’s not that misinformation doesn’t exist, or that it hasn’t had indirect effects, especially when it creates perverse incentives for the mainstream media to cover stories circulating online. Haidt also draws convincingly upon the work of Renée DiResta, the research manager at the Stanford Internet Observatory, to sketch out a potential future in which the work of shitposting has been outsourced to artificial intelligence, further polluting the informational environment. But, at least so far, very few Americans seem to suffer from consistent exposure to fake news—“probably less than two per cent of Twitter users, maybe fewer now, and for those who were it didn’t change their opinions,” Bail said. This was probably because the people likeliest to consume such spectacles were the sort of people primed to believe them in the first place. “In fact,” he said, “echo chambers might have done something to quarantine that misinformation.”

The final story that Bail wanted to discuss was the “proverbial rabbit hole, the path to algorithmic radicalization,” by which YouTube might serve a viewer increasingly extreme videos. There is some anecdotal evidence to suggest that this does happen, at least on occasion, and such anecdotes are alarming to hear. But a new working paper led by Brendan Nyhan, a political scientist at Dartmouth, found that almost all extremist content is either consumed by subscribers to the relevant channels—a sign of actual demand rather than manipulation or preference falsification—or encountered via links from external sites. It’s easy to see why we might prefer if this were not the case: algorithmic radicalization is presumably a simpler problem to solve than the fact that there are people who deliberately seek out vile content. “These are the three stories—echo chambers, foreign influence campaigns, and radicalizing recommendation algorithms—but, when you look at the literature, they’ve all been overstated.” He thought that these findings were crucial for us to assimilate, if only to help us understand that our problems may lie beyond technocratic tinkering. He explained, “Part of my interest in getting this research out there is to demonstrate that everybody is waiting for an Elon Musk to ride in and save us with an algorithm”—or, presumably, the reverse—“and it’s just not going to happen.”

When I spoke with Nyhan, he told me much the same thing: “The most credible research is way out of line with the takes.” He noted, of extremist content and misinformation, that reliable research that “measures exposure to these things finds that the people consuming this content are small minorities who have extreme views already.” The problem with the bulk of the earlier research, Nyhan told me, is that it’s almost all correlational. “Many of these studies will find polarization on social media,” he said. “But that might just be the society we live in reflected on social media!” He hastened to add, “Not that this is untroubling, and none of this is to let these companies, which are exercising a lot of power with very little scrutiny, off the hook. But a lot of the criticisms of them are very poorly founded. . . . The expansion of Internet access coincides with fifteen other trends over time, and separating them is very difficult. The lack of good data is a huge problem insofar as it lets people project their own fears into this area.” He told me, “It’s hard to weigh in on the side of ‘We don’t know, the evidence is weak,’ because those points are always going to be drowned out in our discourse. But these arguments are systematically underprovided in the public domain.”

In his Atlantic article, Haidt leans on a working paper by two social scientists, Philipp Lorenz-Spreen and Lisa Oswald, who took on a comprehensive meta-analysis of about five hundred papers and concluded that “the large majority of reported associations between digital media use and trust appear to be detrimental for democracy.” Haidt writes, “The literature is complex—some studies show benefits, particularly in less developed democracies—but the review found that, on balance, social media amplifies political polarization; foments populism, especially right-wing populism; and is associated with the spread of misinformation.” Nyhan was less convinced that the meta-analysis supported such categorical verdicts, especially once you bracketed the kinds of correlational findings that might simply mirror social and political dynamics. He told me, “If you look at their summary of studies that allow for causal inferences—it’s very mixed.”

As for the studies Nyhan considered most methodologically sound, he pointed to a 2020 article called “The Welfare Effects of Social Media,” by Hunt Allcott, Luca Braghieri, Sarah Eichmeyer, and Matthew Gentzkow. For four weeks prior to the 2018 midterm elections, the authors randomly divided a group of volunteers into two cohorts—one that continued to use Facebook as usual, and another that was paid to deactivate their accounts for that period. They found that deactivation “(i) reduced online activity, while increasing offline activities such as watching TV alone and socializing with family and friends; (ii) reduced both factual news knowledge and political polarization; (iii) increased subjective well-being; and (iv) caused a large persistent reduction in post-experiment Facebook use.” But Gentzkow reminded me that his conclusions, including that Facebook may slightly increase polarization, had to be heavily qualified: “From other kinds of evidence, I think there’s reason to think social media is not the main driver of increasing polarization over the long haul in the United States.”

In the book “ Why We’re Polarized ,” for example, Ezra Klein invokes the work of such scholars as Lilliana Mason to argue that the roots of polarization might be found in, among other factors, the political realignment and nationalization that began in the sixties, and were then sacralized, on the right, by the rise of talk radio and cable news. These dynamics have served to flatten our political identities, weakening our ability or inclination to find compromise. Insofar as some forms of social media encourage the hardening of connections between our identities and a narrow set of opinions, we might increasingly self-select into mutually incomprehensible and hostile groups; Haidt plausibly suggests that these processes are accelerated by the coalescence of social-media tribes around figures of fearful online charisma. “Social media might be more of an amplifier of other things going on rather than a major driver independently,” Gentzkow argued. “I think it takes some gymnastics to tell a story where it’s all primarily driven by social media, especially when you’re looking at different countries, and across different groups.”

Another study, led by Nejla Asimovic and Joshua Tucker, replicated Gentzkow’s approach in Bosnia and Herzegovina, and they found almost precisely the opposite results: the people who stayed on Facebook were, by the end of the study, more positively disposed to their historic out-groups. The authors’ interpretation was that ethnic groups have so little contact in Bosnia that, for some people, social media is essentially the only place where they can form positive images of one another. “To have a replication and have the signs flip like that, it’s pretty stunning,” Bail told me. “It’s a different conversation in every part of the world.”

Nyhan argued that, at least in wealthy Western countries, we might be too heavily discounting the degree to which platforms have responded to criticism: “Everyone is still operating under the view that algorithms simply maximize engagement in a short-term way” with minimal attention to potential externalities. “That might’ve been true when Zuckerberg had seven people working for him, but there are a lot of considerations that go into these rankings now.” He added, “There’s some evidence that, with reverse-chronological feeds”—streams of unwashed content, which some critics argue are less manipulative than algorithmic curation—“people get exposed to more low-quality content, so it’s another case where a very simple notion of ‘algorithms are bad’ doesn’t stand up to scrutiny. It doesn’t mean they’re good, it’s just that we don’t know.”

Bail told me that, over all, he was less confident than Haidt that the available evidence lines up clearly against the platforms. “Maybe there’s a slight majority of studies that say that social media is a net negative, at least in the West, and maybe it’s doing some good in the rest of the world.” But, he noted, “Jon will say that science has this expectation of rigor that can’t keep up with the need in the real world—that even if we don’t have the definitive study that creates the historical counterfactual that Facebook is largely responsible for polarization in the U.S., there’s still a lot pointing in that direction, and I think that’s a fair point.” He paused. “It can’t all be randomized control trials.”

Haidt comes across in conversation as searching and sincere, and, during our exchange, he paused several times to suggest that I include a quote from John Stuart Mill on the importance of good-faith debate to moral progress. In that spirit, I asked him what he thought of the argument, elaborated by some of Haidt’s critics, that the problems he described are fundamentally political, social, and economic, and that to blame social media is to search for lost keys under the streetlamp, where the light is better. He agreed that this was the steelman opponent: there were predecessors for cancel culture in de Tocqueville, and anxiety about new media that went back to the time of the printing press. “This is a perfectly reasonable hypothesis, and it’s absolutely up to the prosecution—people like me—to argue that, no, this time it’s different. But it’s a civil case! The evidential standard is not ‘beyond a reasonable doubt,’ as in a criminal case. It’s just a preponderance of the evidence.”

The way scholars weigh the testimony is subject to their disciplinary orientations. Economists and political scientists tend to believe that you can’t even begin to talk about causal dynamics without a randomized controlled trial, whereas sociologists and psychologists are more comfortable drawing inferences on a correlational basis. Haidt believes that conditions are too dire to take the hardheaded, no-reasonable-doubt view. “The preponderance of the evidence is what we use in public health. If there’s an epidemic—when COVID started, suppose all the scientists had said, ‘No, we gotta be so certain before you do anything’? We have to think about what’s actually happening, what’s likeliest to pay off.” He continued, “We have the largest epidemic ever of teen mental health, and there is no other explanation,” he said. “It is a raging public-health epidemic, and the kids themselves say Instagram did it, and we have some evidence, so is it appropriate to say, ‘Nah, you haven’t proven it’?”

This was his attitude across the board. He argued that social media seemed to aggrandize inflammatory posts and to be correlated with a rise in violence; even if only small groups were exposed to fake news, such beliefs might still proliferate in ways that were hard to measure. “In the post-Babel era, what matters is not the average but the dynamics, the contagion, the exponential amplification,” he said. “Small things can grow very quickly, so arguments that Russian disinformation didn’t matter are like COVID arguments that people coming in from China didn’t have contact with a lot of people.” Given the transformative effects of social media, Haidt insisted, it was important to act now, even in the absence of dispositive evidence. “Academic debates play out over decades and are often never resolved, whereas the social-media environment changes year by year,” he said. “We don’t have the luxury of waiting around five or ten years for literature reviews.”

Haidt could be accused of question-begging—of assuming the existence of a crisis that the research might or might not ultimately underwrite. Still, the gap between the two sides in this case might not be quite as wide as Haidt thinks. Skeptics of his strongest claims are not saying that there’s no there there. Just because the average YouTube user is unlikely to be led to Stormfront videos, Nyhan told me, doesn’t mean we shouldn’t worry that some people are watching Stormfront videos; just because echo chambers and foreign misinformation seem to have had effects only at the margins, Gentzkow said, doesn’t mean they’re entirely irrelevant. “There are many questions here where the thing we as researchers are interested in is how social media affects the average person,” Gentzkow told me. “There’s a different set of questions where all you need is a small number of people to change—questions about ethnic violence in Bangladesh or Sri Lanka, people on YouTube mobilized to do mass shootings. Much of the evidence broadly makes me skeptical that the average effects are as big as the public discussion thinks they are, but I also think there are cases where a small number of people with very extreme views are able to find each other and connect and act.” He added, “That’s where many of the things I’d be most concerned about lie.”

The same might be said about any phenomenon where the base rate is very low but the stakes are very high, such as teen suicide. “It’s another case where those rare edge cases in terms of total social harm may be enormous. You don’t need many teen-age kids to decide to kill themselves or have serious mental-health outcomes in order for the social harm to be really big.” He added, “Almost none of this work is able to get at those edge-case effects, and we have to be careful that if we do establish that the average effect of something is zero, or small, that it doesn’t mean we shouldn’t be worried about it—because we might be missing those extremes.” Jaime Settle, a scholar of political behavior at the College of William & Mary and the author of the book “ Frenemies: How Social Media Polarizes America ,” noted that Haidt is “farther along the spectrum of what most academics who study this stuff are going to say we have strong evidence for.” But she understood his impulse: “We do have serious problems, and I’m glad Jon wrote the piece, and down the road I wouldn’t be surprised if we got a fuller handle on the role of social media in all of this—there are definitely ways in which social media has changed our politics for the worse.”

It’s tempting to sidestep the question of diagnosis entirely, and to evaluate Haidt’s essay not on the basis of predictive accuracy—whether social media will lead to the destruction of American democracy—but as a set of proposals for what we might do better. If he is wrong, how much damage are his prescriptions likely to do? Haidt, to his great credit, does not indulge in any wishful thinking, and if his diagnosis is largely technological his prescriptions are sociopolitical. Two of his three major suggestions seem useful and have nothing to do with social media: he thinks that we should end closed primaries and that children should be given wide latitude for unsupervised play. His recommendations for social-media reform are, for the most part, uncontroversial: he believes that preteens shouldn’t be on Instagram and that platforms should share their data with outside researchers—proposals that are both likely to be beneficial and not very costly.

It remains possible, however, that the true costs of social-media anxieties are harder to tabulate. Gentzkow told me that, for the period between 2016 and 2020, the direct effects of misinformation were difficult to discern. “But it might have had a much larger effect because we got so worried about it—a broader impact on trust,” he said. “Even if not that many people were exposed, the narrative that the world is full of fake news, and you can’t trust anything, and other people are being misled about it—well, that might have had a bigger impact than the content itself.” Nyhan had a similar reaction. “There are genuine questions that are really important, but there’s a kind of opportunity cost that is missed here. There’s so much focus on sweeping claims that aren’t actionable, or unfounded claims we can contradict with data, that are crowding out the harms we can demonstrate, and the things we can test, that could make social media better.” He added, “We’re years into this, and we’re still having an uninformed conversation about social media. It’s totally wild.”


October 1, 2016

Social Technologies Are Making Us Less Social

For the first time in the history of our species, we are never alone and never bored. Have we lost something fundamental about being human?

By Mark Fischetti


Chances are that you have a smartphone, Twitter and Instagram accounts, and a Facebook page and that you have found yourself ignoring a friend or family member who is in the same room as you because you are totally engrossed in your social technology. That technology means never having to feel alone or bored. Yet ironically, it can make us less attentive to the people closest to us and even make it hard for us to simply be with ourselves. Many of us are afraid to make this admission. “We're still in a romance with these technologies,” says Sherry Turkle of the Massachusetts Institute of Technology. “We're like young lovers who are afraid that talking about it will spoil it.” Turkle has interviewed, at length, hundreds of individuals of all ages about their interactions with smartphones, tablets, social media, avatars and robots. Unlike previous disruptive innovations such as the printing press or television, the latest “always on, always on you” technology, she says, threatens to undermine some basic human strengths that we need to thrive. In the conversation that follows, which has been edited for space, Turkle explains her concerns, as well as her cautious optimism that the youngest among us could actually resolve the challenges.

SCIENTIFIC AMERICAN: What concerns you most about our constant interaction with our social technologies?

TURKLE: One primary change I see is that people have a tremendous lack of tolerance for being alone. I do some of my fieldwork at stop signs, at checkout lines at supermarkets. Give people even a second, and they're doing something with their phone. Every bit of research says people's capacity to be alone is disappearing. What can happen is that you lose that moment to have a daydream or to cast an eye inward. Instead you look to the outside.


Is that an issue for individuals of all ages? Yes, but children especially need solitude. Solitude is the precondition for having a conversation with yourself. This capacity to be with yourself and discover yourself is the bedrock of development. But now, from the youngest age—even two, or three, or four—children are given technology that removes solitude by giving them something externally distracting. That makes it harder, ironically, to form true relationships.

Maybe people just don't want to be bored. People talk about never needing to have a lull. As soon as it occurs, they look at the phone; they get anxious. They haven't learned to have conversations or relationships, which involve lulls.

Are we valuing relationships less, then? People start to view other people in part as objects. Imagine two people on a date. “Hey, I have an idea. Instead of our just looking at each other face-to-face, why don't we each wear Google Glass, so if things get a little dull, I can just catch up on my e-mail? And you won't know.” This disrupts the family, too. When Boring Auntie starts to talk at the family dinner table, her little niece pulls out her phone and goes on Facebook. All of a sudden her world is populated with snowball fights and ballerinas. And dinner is destroyed. Dinner used to be the utopian ideal of the American family having a canonical three-generation gathering.

What about people who take their phones to bed? They're asleep, so why would they feel alone? I have interviewed enough middle school and high school kids: “So tell me, do you answer your texts in the middle of the night?” “Oh, yeah.” I call it “I share, therefore I am,” as a style of being.

If you're sharing in the middle of the night and responsive to people in the middle of the night, you're in a different zone. And all these people feel responsible to respond. The expectation is constant access. Everyone is ready to call in the advice and the consent of their peers. I did a case study of a young woman who has 2,000 followers on Instagram. She'll ask about a problem at 9:00 at night, and at 2:00 in the morning she's getting responses, and she's awake to get those responses. This is 2:00 in the morning for a lot of kids.

Where does this lead for someone who lives that way? If you don't call a halt to it, I think you don't fully develop a sense of an autonomous self. You're not able to be in personal relationships, business relationships, because you don't feel fully competent to handle major things on your own. You run into trouble if you're putting everything up, ultimately, for a vote.

You're crowdsourcing your life. You're crowdsourcing major decisions. I hope it's likely, however, that a person reaches a point where they're on a job—they're not twentysomething, they're thirtysomething—and this starts to become less comfortable, and they develop emotional skills that they really haven't worked on.

What about our interactions with automated personalities and robots? When we started looking at this in the 1970s, people took the position that even if simulated thinking might be thinking, simulated feeling was not feeling. Simulated love was never love. But that's gone away. People tell me that if Siri [the iPhone voice] could fool them a little better, they'd be happy to talk to Siri.

Isn't that like the movie Her? Absolutely. The current position seems to be that if there's a robot that could fool me into thinking that it understands me, I'm good to have it as a companion. This is a significant evolution in what we ask for in our interactions, even on intimate matters. I see it in kids. I see it in grown-ups. The new robots are designed to make you feel as though you're understood. Yet nobody is pretending that any of them understands anything.

What line does that cross—that there's no empathy? There's no authentic exchange. You're saying empathy is not important to the feeling of being understood. And yet I interviewed a woman who said to me that she's okay with a robot boyfriend. She wants one of these sophisticated Japanese robots. I looked at her and said, “You know that it doesn't understand you.” She said, “Look, I just want civility in the house. I just want something that will make me feel not alone.”

People are also good with a robot that could stand in as a companion for an older person. But I take a moral position here because older people deserve to tell the story of their life to someone who understands what a life is. They've lost spouses; they've lost children. We're suggesting they tell the story of their life to something that has no idea what a life is or what a loss is.

It's crucial to understand that this changing interaction is not just a story about technology. It's a story about how we are evolving when we're faced with something passive. I hope we're going to look closer at people's willingness to project humanity onto a robot and to accept a facade of empathy as the real thing because I think that such interactions are a dead end. We want more from technology and less from each other? Really?

Do avatars and virtual reality present the same issues? In these cases, we are moving from life to the mix of your real life and your virtual life. One young man put it very succinctly: “Real life is just one window, and it's not necessarily my best one.” People forgot about virtual reality for a while, but the acquisition of Oculus by Facebook raised it again—Mark Zuckerberg's fantasy that you will meet up with your friends in a virtual world where everybody looks like Angelina Jolie and Brad Pitt, you live in a beautiful home, and you present only what you want to present. We're evolving toward thinking of that as a utopian image.

But skeptics say your avatar is not different from the real you. Well, we do perform all the time. I'm trying to do my best Sherry Turkle right now. But it's a little different from me hanging out in my pajamas. What's different with an avatar or on Facebook is that you get to edit. A woman posts a photo of herself and then works on the color and background and lighting. Why? Because she wants it a certain way. We've never before been able to have it the way we wanted it. And now we can. People love that.

I asked an 18-year-old man, “What's wrong with conversation?” He said, “It takes place in real time. You can't control what you're going to say.” It was profound. That's also why a lot of people like to do their dealings on e-mail—it's not just the time shifting; it's that you basically can get it right.

One reason for the rise of humans is that functioning in groups gives each member a better chance to succeed. Will the move toward living online undermine those benefits? Oh, this is the question before us. Are we undermining, or are we enhancing our competitive advantage? A lot of my colleagues would say we're enhancing it. The Internet is giving us new ways of getting together, forming alliances. But I think we are at a point of inflection. While we were infatuated with the virtual, we dropped the ball on where we actually live. We need to balance how compelling the virtual is with the realities that we live in our bodies and on this planet. It is so easy for us to look the other way. Are we going to get out there and make our real communities what they should be?

Your critics say there's nothing to worry about because this “new technology” situation is not really new. We went through this with television—you know, TV is there to watch your kids so you don't have to. First of all, television can be a group exercise. I grew up in a family that sat around a TV and watched it together, fought about what was on the TV together, commented on it together. But when everybody watches their own show in their own room, so to speak, that stops. Technology that is always on and always on you—that is a quantum leap. I agree that there have been quantum leaps before: the book. The difference with “always on,” however, is that I really don't have a choice.

You mean, you could turn off the TV and still function. I cannot live my professional life or my personal life without my phone or my e-mail. My students can't even obtain their syllabus without it. We don't have an opt-out option from a world with this technology. The question is, How are we going to live a more meaningful life with something that is always on and always on you? And wait until it's in your ear, in your jacket, in your glasses.

So how do we resolve that? It's going to develop as some sort of common practice. I think companies will get involved, realizing that it actually isn't good for people to be constantly connected. Our etiquette will get involved; today if I get a message and don't get back to people in 24 hours, they're worried about me, or they're mad that I haven't replied. Why? I think we will change our expectation of having constant access.

Any suggestions for how we can get started? One argument I make is that there should be sacred spaces: the family dinner table, the car. Make these the places for conversation because conversation is the antidote to a lot of the issues I'm describing. If you're talking to your kids, if you're talking to your family, if you're talking to a community, these negative effects don't arise as much.

And we should be talking more about the technologies? My message is not antitechnology. It's pro conversation and pro the human spirit. It's really about calling into question our dominant culture of more, better, faster. We need to assert what we need for our own thinking, for our own development, and for our relationships with our children, with our communities, with our intimate partners. As for the robots, I'm hoping that people will realize that what we're really disappointed in is ourselves. It's so upsetting to me. We're basically saying that we're not offering one another the conversation and the companionship. That, really, is the justification for talking to a robot that you know doesn't understand a word you're saying. We are letting each other down. It's not about the robots. It's about us.

So who is going to stop this train we are on? The most optimistic thing I see is the young people who have grown up with this technology but aren't smitten by it, who are willing to say, “Hold on a second.” They see the ways in which it's undermined life at school and life with their parents. This is where I'm guardedly hopeful.

I have so many examples of children who will be talking with their parents; something will come up, and the parent will go online to search, and the kid will say, “Daddy, stop Googling. I just want to talk to you.” When I go to the city park, I see kids go to the top of the jungle gym and call out, “Mommy, Mommy!” and they're being ignored. They object to being ignored when they're five, eight or nine. But when I interview these kids when they're 13, 14 or 15, they become reflective. They say, “I'm not going to bring up my children the way I'm being brought up.” They're going to have rules, like no phones at dinner.

I also see evidence that dealing with some of this technology is feeling to them like work—the whole notion that you have to constantly keep up your Facebook profile. So I think there's every possibility that the children will lead us. They see the costs. They think, “I don't have to give up this technology, but maybe I could be a little smarter about it.”


The Future of Social Media Is a Lot Less Social

Facebook, TikTok and Twitter seem to be increasingly connecting users with brands and influencers. To restore a sense of community, some users are trying smaller social networks.

By Brian X. Chen

Brian X. Chen is the lead consumer technology writer and author of Tech Fix, a column about the social implications of the tech we use.

Nearly two decades ago, Facebook exploded on college campuses as a site for students to stay in touch. Then came Twitter, where people posted about what they had for breakfast, and Instagram, where friends shared photos to keep up with one another.

Today, Instagram and Facebook feeds are full of ads and sponsored posts. TikTok and Snapchat are stuffed with videos from influencers promoting dish soaps and dating apps. And soon, Twitter posts that gain the most visibility will come mostly from subscribers who pay for the exposure and other perks.

Social media is, in many ways, becoming less social. The kinds of posts where people update friends and family about their lives have become harder to see over the years as the biggest sites have become increasingly “corporatized.” Instead of seeing messages and photos from friends and relatives about their holidays or fancy dinners, users of Instagram, Facebook, TikTok, Twitter and Snapchat now often view professionalized content from brands, influencers and others that pay for placement.

The change has implications for large social networking companies and how people interact with one another digitally. But it also raises questions about a core idea: the online platform. For years, the notion of a platform — an all-in-one, public-facing site where people spent most of their time — reigned supreme. But as big social networks made connecting people with brands a priority over connecting them with other people, some users have started seeking community-oriented sites and apps devoted to specific hobbies and issues.

“Platforms as we knew them are over,” said Zizi Papacharissi, a communications professor at the University of Illinois-Chicago, who teaches courses on social media. “They have outlived their utility.”

The shift helps explain why some social networking companies, which continue to have billions of users and pull in billions of dollars in revenue, are now exploring new avenues of business. Twitter, which is owned by Elon Musk, has been pushing people and brands to pay $8 to $1,000 a month to become subscribers. Meta, the parent company of Facebook and Instagram, is moving into the immersive online world of the so-called metaverse.

For users, this means that instead of spending all their time on one or a few big social networks, some are gravitating toward smaller, more focused sites. These include Mastodon, which is essentially a Twitter clone sliced into communities; Nextdoor, a social network for neighbors to commiserate about quotidian issues like local potholes; and apps like Truth Social, which was started by former President Donald J. Trump and is viewed as a social network for conservatives.

“It’s not about choosing one network to rule them all — that is crazy Silicon Valley logic,” said Ethan Zuckerman, a professor of public policy at the University of Massachusetts Amherst. “The future is that you’re a member of dozens of different communities, because as human beings, that’s how we are.”

Twitter, which automatically responds to press inquiries with a poop emoji, did not have a comment about the evolution of social networking. Meta declined to comment, and TikTok did not respond to a request for comment. Snap, the maker of Snapchat, said that although its app had evolved, connecting people with their friends and family remained its primary function.

A shift to smaller, more focused networks was predicted years ago by some of social media’s biggest names, including Mark Zuckerberg, Meta’s chief executive, and Jack Dorsey, a founder of Twitter.

In 2019, Mr. Zuckerberg wrote in a Facebook post that private messaging and small groups were the fastest-growing areas of online communication. Mr. Dorsey, who stepped down as Twitter’s chief executive in 2021, has pushed for so-called decentralized social networks that give people control over the content they see and the communities they engage with. He has recently been posting on Nostr, a social media site based on this principle.

Over the last year, technologists and academics have also focused on smaller social networks. In a paper published last month and titled “The Three-Legged Stool: A Manifesto for a Smaller, Denser Internet,” Mr. Zuckerman and other academics outlined how future companies could run small networks at low costs.

They also suggested the creation of an app that essentially acts as a Swiss Army knife of social networks by allowing people to switch among the sites they use, including Twitter, Mastodon, Reddit and smaller networks. One such app, called Gobo and developed by MIT Media Lab and the University of Massachusetts Amherst, is set for release next month.
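To make the “Swiss Army knife” idea concrete, here is a minimal sketch of what the adapter layer of such an aggregator might look like. The interface and class names below are hypothetical illustrations, not Gobo's actual API; a real client would authenticate against each network's API rather than return canned posts.

```python
# Hypothetical sketch of a multi-network aggregator: one common Post shape,
# one adapter per network, and a single merged timeline. Not Gobo's real code.
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, List, Protocol


@dataclass
class Post:
    network: str
    author: str
    text: str
    posted_at: datetime


class NetworkAdapter(Protocol):
    def fetch_timeline(self) -> Iterable[Post]: ...


class MastodonAdapter:
    def fetch_timeline(self) -> Iterable[Post]:
        # A real adapter would call the Mastodon REST API here.
        return [Post("mastodon", "@alice", "Hello fediverse", datetime(2023, 4, 1, 9, 0))]


class RedditAdapter:
    def fetch_timeline(self) -> Iterable[Post]:
        # A real adapter would call the Reddit API here.
        return [Post("reddit", "u/bob", "New pothole on 5th Ave", datetime(2023, 4, 1, 8, 30))]


def merged_timeline(adapters: List[NetworkAdapter]) -> List[Post]:
    """Combine every network's posts into one reverse-chronological feed."""
    posts = [post for adapter in adapters for post in adapter.fetch_timeline()]
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)


if __name__ == "__main__":
    for post in merged_timeline([MastodonAdapter(), RedditAdapter()]):
        print(f"[{post.network}] {post.author}: {post.text}")
```

The design point is simply that the user's "home" is the aggregator, not any single platform; each network is reduced to a plug-in behind a shared interface.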

The tricky part for users is finding the newer, small networks because they are obscure. But broader social networks, like Mastodon or Reddit, often act as a gateway to smaller communities. When signing up for Mastodon, for example, people can choose a server from an extensive list, including those related to gaming, food and activism.

Eugen Rochko, Mastodon’s chief executive, said users were publishing over a billion posts a month across its communities and that there were no algorithms or ads altering people’s feeds.

One major benefit of small networks is that they create forums for specific communities, including people who are marginalized. Ahwaa, which was founded in 2011, is a social network for members of the L.G.B.T.Q. community in countries around the Persian Gulf where being gay is deemed illegal. Other small networks, like Letterboxd, an app for film enthusiasts to share their opinions on movies, are focused on special interests.

Smaller communities can also relieve some of the social pressure of using social media, especially for younger people. Over the last decade, stories have emerged — including in congressional hearings about the dangers of social media — about teenagers developing eating disorders after trying to live up to “Instagram perfect” photos and from watching videos on TikTok.

The idea that a new social media site might come along to be the one app for everyone appears unrealistic, experts say. When young people are done experimenting with a new network — such as BeReal, the photo-sharing app that was popular among teenagers last year but is now hemorrhaging millions of active users — they move on to the next one.

“They’re not going to be swayed by the first shiny platform that comes along,” Ms. Papacharissi said.

People’s online identities will become increasingly fragmented among multiple sites, she added. For talking about professional accomplishments, there’s LinkedIn. For playing video games with fellow gamers, there’s Discord. For discussing news stories, there’s Artifact.

“What we’re interested in is smaller groups of people who are communicating with each other about specific things,” Ms. Papacharissi said.

More small networks are likely on the horizon. Last year, Harvard University, where Mr. Zuckerberg founded Facebook in 2004 as a student, began a research program devoted to rebooting social media. The program helps students and others create and experiment with new networks together.

One app that emerged from the program, Minus, lets users publish only 100 posts on their timeline for life. The idea is to make people feel connected in an environment where their time together is treated as a precious and finite resource, unlike traditional social networks such as Facebook and Twitter that use infinite scrolling interfaces to keep users engaged for as long as possible.

“It’s a performance art experiment,” said Jonathan Zittrain, a professor of law and computer science at Harvard who started the research initiative. “It’s the kind of thing that as soon as you see it, it doesn’t have to be this way.”

Does Social Media Make People Less Social?: A Research Note

“Social media is a really powerful tool for people of common interests to convene and get to know each other and connect. But then it's important for them to get offline, meet in a pub, meet at a place of worship, meet in a neighbourhood and get to know each other.”

The former president’s statements about social media are agreeable and measured. They don’t evoke moral panic, but they do offer a clear warning about the rise of new technologies and potential fall of social relations.

These sentiments feel comfortable and familiar. Indeed, the sober cautioning that digital media ought not replace face-to-face interaction has emerged as a widespread truism, and for valid reasons. Shared corporality holds distinct qualities that make it valuable and indispensable for human social connection. With the ubiquity of digital devices and mediated social platforms, it is wise to think about how these new forms of community and communication affect social relations, including their impact on local venues where people have traditionally gathered. It is also reasonable to believe that social media pose a degree of threat to community social life, one that individuals in society should actively ward off.

However, just because something is reasonable to believe doesn’t mean it’s true. The relationship between social media and social relations is not a foregone conclusion but an empirical question: does social media make people less social? Luckily, scholars have spent a good deal of time collecting cross-disciplinary evidence from which to draw conclusions. Let’s look at the research:

In a germinal work from 2007, communication scholars Nicole Ellison and colleagues establish a clear link between Facebook use and college students’ social capital. Using survey data, the authors show that Facebook usage positively relates to forming new connections, deepening existing connections, and maintaining connections with dispersed networks (bridging, bonding, and maintaining social capital, respectively). Ellison and her team replicated similar findings in 2011 and again in 2014. Burke, Marlow and Lento showed further support for a link between social media and social capital based on a combination of Facebook server logs and participant self-reports, demonstrating that direct interactions through social media help bridge social ties.

Out of sociology, network analyses show social media use associated with expanding social networks and increased social opportunities. Responding directly to Robert Putnam’s harrowing Bowling Alone thesis, Keith Hampton, Chul-Joo Lee and Eun Ja Her report on a range of information communication technologies (ICTs) including mobile phones, blogs, social network sites and photo sharing platforms. They find that these ICTs directly and indirectly increase network diversity and do so by encouraging participation in “traditional” settings such as neighbourhood groups, voluntary organizations, religious institutions and public social venues—i.e., the pubs and places of worship Obama touted above. Among older adults, a 2017 study by Anabel Quan-Haase, Guang Ying Mo and Barry Wellman shows that seniors use ICTs to obtain social support and foster various forms of companionship, including arranging in-person visits, thus mitigating the social isolation that too often accompanies old age.

From psychology, researchers repeatedly show a relationship between “personality” and social media usage. For example, separate studies by Teresa Correa et al. and Samuel Gosling and colleagues show that those who are more social offline and define themselves as “extraverts” are also more active on social media. Summarizing this trend, Gosling et al. conclude that “[social media] users appear to extend their offline personalities into the domains of online social networks”. That is, people who are outgoing and have lots of friends continue to be outgoing and have lots of friends. They don't replace one form of interaction with another, but continue interaction patterns across and between the digital and physical. This also means that people who are generally less social remain less social online. However, this is not an effect of the medium; it is an effect of their existing style of social interaction.

In short, the research shows that social media help build and maintain social relationships, supplement and support face-to-face interaction, and reflect existing socializing styles rather than eroding social skills. That is, ICTs supplement (and at times, enhance) interaction rather than displace it. These supplements and enhancements move between online and offline, as users reinforce relationships in between face-to-face engagements, coordinate plans to meet up, and connect online amidst physical and geographic barriers.

Of course, the picture isn't entirely rosy. Social media open the doors to new levels and types of bullying, misinformation runs rampant, and the affordances of major platforms like Facebook may well make people feel bad about themselves. But, from the research, it doesn't seem like social media is making anybody stay home.

Perhaps it is time to retire the sage warning that too many glowing screens will lead to empty bar stools and café counters. The common advice that social media is best used in moderation, and only so long as users keep engaging face-to-face, isn't negated by the research but shown to be irrelevant: people are using social media to facilitate, augment, and supplement face-to-face interaction. There's enough to worry over in this world; thanks to the research, we can take mass social isolation off the list.

Jenny Davis is on Twitter @Jenny_L_Davis

Yes, Social Media Really Is Undermining Democracy

Despite what Meta has to say.

Within the past 15 years, social media has insinuated itself into American life more deeply than food-delivery apps into our diets and microplastics into our bloodstreams. Look at stories about conflict, and it's often lurking in the background. Recent articles on the rising dysfunction within progressive organizations point to the role of Twitter, Slack, and other platforms in prompting “endless and sprawling internal microbattles,” as The Intercept's Ryan Grim put it, referring to the ACLU. At a far higher level of conflict, the congressional hearings about the January 6 insurrection show us how Donald Trump's tweets summoned the mob to Washington and aimed it at the vice president. Far-right groups then used a variety of platforms to coordinate and carry out the attack.

Social media has changed life in America in a thousand ways, and nearly two out of three Americans now believe that these changes are for the worse. But academic researchers have not yet reached a consensus that social media is harmful. That's been a boon to social-media companies such as Meta, which argues, as did tobacco companies, that the science is not “settled.”

The lack of consensus leaves open the possibility that social media may not be very harmful. Perhaps we've fallen prey to yet another moral panic about a new technology and, as with television, we'll worry about it less after a few decades of conflicting studies. A different possibility is that social media is quite harmful but is changing too quickly for social scientists to capture its effects. The research community is built on a quasi-moral norm of skepticism: We begin by assuming the null hypothesis (in this case, that social media is not harmful), and we require researchers to show strong, statistically significant evidence in order to publish their findings. This takes time—a couple of years, typically, to conduct and publish a study; five or more years before review papers and meta-analyses come out; sometimes decades before scholars reach agreement. Social-media platforms, meanwhile, can change dramatically in just a few years.

So even if social media really did begin to undermine democracy (and institutional trust and teen mental health) in the early 2010s, we should not expect social science to “settle” the matter until the 2030s. By then, the effects of social media will be radically different, and the harms done in earlier decades may be irreversible.

Let me back up. This spring, The Atlantic published my essay “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” in which I argued that the best way to understand the chaos and fragmentation of American society is to see ourselves as citizens of Babel in the days after God rendered them unable to understand one another.

I showed how a few small changes to the architecture of social-media platforms, implemented from 2009 to 2012, increased the virality of posts on those platforms, which then changed the nature of social relationships. People could spread rumors and half-truths more quickly, and they could more readily sort themselves into homogenous tribes. Even more important, in my view, was that social-media platforms such as Twitter and Facebook could now be used more easily by anyone to attack anyone. It was as if the platforms had passed out a billion little dart guns, and although most users didn’t want to shoot anyone, three kinds of people began darting others with abandon: the far right, the far left, and trolls.

All of these groups were suddenly given the power to dominate conversations and intimidate dissenters into silence. A fourth group—Russian agents—also got a boost, though they didn't need to attack people directly. Their long-running project, which ramped up online in 2013, was to fabricate, exaggerate, or simply promote stories that would increase Americans' hatred of one another and distrust of their institutions.

The essay proved to be surprisingly uncontroversial—or, at least, hardly anyone attacked me on social media. But a few responses were published, including one from Meta (formerly Facebook), which pointed to studies it said contradicted my argument. There was also an essay in The New Yorker by Gideon Lewis-Kraus, who interviewed me and other scholars who study politics and social media. He argued that social media might well be harmful to democracies, but the research literature is too muddy and contradictory to support firm conclusions.

So was my diagnosis correct, or are concerns about social media overblown? It’s a crucial question for the future of our society. As I argued in my essay, critics make us smarter. I’m grateful, therefore, to Meta and the researchers interviewed by Lewis-Kraus for helping me sharpen and extend my argument in three ways.

Are Democracies Becoming More Polarized and Less Healthy?

My essay laid out a wide array of harms that social media has inflicted on society. Political polarization is just one of them, but it is central to the story of rising democratic dysfunction.

Meta questioned whether social media should be blamed for increased polarization. In response to my essay, Meta's head of research, Pratiti Raychoudhury, pointed to a study by Levi Boxell, Matthew Gentzkow, and Jesse Shapiro that looked at trends in 12 countries and found, she said, “that in some countries polarization was on the rise before Facebook even existed, and in others it has been decreasing while internet and Facebook use increased.” In a recent interview with the podcaster Lex Fridman, Mark Zuckerberg cited this same study in support of a more audacious claim: “Most of the academic studies that I've seen actually show that social-media use is correlated with lower polarization.”

Does that study really let social media off the hook? It plotted political polarization based on survey responses in 12 countries, most with data stretching back to the 1970s, and then drew straight lines that best fit the data points over several decades. It's true that, while some lines sloped upward (meaning that polarization increased across the period as a whole), others sloped downward. But my argument wasn't about the past 50 years. It was about a phase change that happened in the early 2010s, after Facebook and Twitter changed their architecture to enable hyper-virality.

I emailed Gentzkow to ask whether he could put a “hinge” in the graphs in the early 2010s, to see if the trends in polarization changed direction or accelerated in the past decade. He replied that there was not enough data after 2010 to make such an analysis reliable. He also noted that Meta’s response essay had failed to cite a 2020 article in which he and three colleagues found that randomly assigning participants to deactivate Facebook for the four weeks before the 2018 U.S. midterm elections reduced polarization.
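For readers unfamiliar with the technique, the “hinge” described above is essentially a piecewise (segmented) linear fit: add a second slope term that is zero before a chosen year and grows linearly after it, then ask whether that term is meaningfully nonzero. Here is a minimal sketch of the idea, with made-up numbers purely for illustration (these are not values from the Boxell, Gentzkow, and Shapiro data):

```python
# Piecewise-linear ("hinge") trend fit: does the slope change after hinge_year?
# The data below are invented for illustration only.
import numpy as np


def fit_with_hinge(years, values, hinge_year=2012):
    years = np.asarray(years, dtype=float)
    values = np.asarray(values, dtype=float)
    hinge = np.maximum(years - hinge_year, 0.0)  # zero before the hinge, linear after
    X = np.column_stack([np.ones_like(years), years, hinge])
    (intercept, base_slope, extra_slope), *_ = np.linalg.lstsq(X, values, rcond=None)
    return base_slope, extra_slope  # extra_slope is the post-hinge change in trend


years = [1980, 1990, 2000, 2008, 2012, 2014, 2016, 2018, 2020]
polarization = [0.30, 0.32, 0.34, 0.37, 0.39, 0.46, 0.53, 0.58, 0.62]
base, extra = fit_with_hinge(years, polarization)
print(f"pre-2012 slope: {base:.4f} per year; additional post-2012 slope: {extra:.4f}")
```

With real survey data one would also want standard errors on the hinge coefficient before reading anything into it, which is exactly why Gentzkow said the post-2010 data were too sparse for a reliable answer; the sketch only shows what “putting a hinge in the graphs” means.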

Meta’s response motivated me to look for additional publications to evaluate what had happened to democracies in the 2010s. I discovered four. One of them found no overall trend in polarization, but like the study by Boxell, Gentzkow, and Shapiro, it had few data points after 2015. The other three had data through 2020, and all three reported substantial increases in polarization and/or declines in the number or quality of democracies around the world.

One of them, a 2022 report from the Varieties of Democracy (V-Dem) Institute, found that “liberal democracies peaked in 2012 with 42 countries and are now down to the lowest levels in over 25 years.” It summarized the transformations of global democracy over the past 10 years in stark terms:

Just ten years ago the world looked very different from today. In 2011, there were more countries improving than declining on every aspect of democracy. By 2021 the world has been turned on its head: there are more countries declining than advancing on nearly all democratic aspects captured by V-Dem measures.

The report also notes that “toxic polarization”—signaled by declining “respect for counter-arguments and associated aspects of the deliberative component of democracy”—grew more severe in at least 32 countries.

A paper published one week after my Atlantic essay, by Yunus E. Orhan, found a global spike in democratic “backsliding” since 2008, and linked it to affective polarization, or animosity toward the other side. When affective polarization is high, partisans tolerate antidemocratic behavior by politicians on their own side—such as the January 6 attack on the U.S. Capitol.

And finally, the Economist Intelligence Unit reported a global decline in various democratic measures starting after 2015, according to its Democracy Index.

These three studies cannot prove that social media caused the global decline, but—contra Meta and Zuckerberg—they show a global trend toward polarization in the previous decade, the one in which the world embraced social media.

Has Social Media Created Harmful Echo Chambers?

So why did democracies weaken in the 2010s? How might social media have made them more fragmented and less stable? One popular argument contends that social media sorts users into echo chambers—closed communities of like-minded people. Lack of contact with people who hold different viewpoints allows a sort of tribal groupthink to take hold, reducing the quality of everyone's thinking and the prospects for compromise that are essential in a democratic system.

According to Meta, however, “More and more research discredits the idea that social media algorithms create an echo chamber.” It points to two sources to back up that claim, but many studies show evidence that social media does in fact create echo chambers. Because conflicting studies are common in social-science research, I created a “collaborative review” document last year with Chris Bail, a sociologist at Duke University who studies social media. It's a public Google doc in which we organize the abstracts of all the studies we can find about social media's impact on democracy, and then we invite other experts to add studies, comments, and criticisms. We cover research on seven different questions, including whether social media promotes echo chambers. After spending time in the document, Lewis-Kraus wrote in The New Yorker: “The upshot seemed to me to be that exactly nothing was unambiguously clear.”

He is certainly right that nothing is unambiguous. But as I have learned from curating three such documents, researchers often reach opposing conclusions because they have “operationalized” the question differently. That is, they have chosen different ways to turn an abstract question (about the prevalence of echo chambers, say) into something concrete and measurable. For example, researchers who choose to measure echo chambers by looking at the diversity of people's news consumption typically find little evidence that they exist at all. Even partisans end up being exposed to news stories and videos from the other side. Both of the sources that Raychoudhury cited in her defense of Meta mention this idea.

But researchers who measure echo chambers by looking at social relationships and networks usually find evidence of “homophily”—that is, people tend to engage with others who are similar to themselves. One study of politically engaged Twitter users, for example, found that they “are disproportionately exposed to like-minded information and that information reaches like-minded users more quickly.” So should we throw up our hands and say that the findings are irreconcilable? No, we should integrate them, as the sociologist Zeynep Tufekci did in a 2018 essay. Coming across contrary viewpoints on social media, she wrote, is “not like reading them in a newspaper while sitting alone.” Rather, she said, “it's like hearing them from the opposing team while sitting with our fellow fans in a football stadium … We bond with our team by yelling at the fans of the other one.” Mere exposure to different sources of news doesn't automatically break open echo chambers; in fact, it can reinforce them.
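A toy example may help show why the two operationalizations can point in opposite directions: the same set of users can look "un-echo-chambered" when you count the news sources they see, yet strongly homophilous when you look at whom they actually engage with. The data and numbers below are invented purely for illustration and are not drawn from any cited study:

```python
# Invented toy data: every user sees outlets from both sides (high cross-exposure),
# but interactions run overwhelmingly between co-partisans (high homophily).
users = {
    "a": {"party": "L", "outlets_seen": {"left_daily", "right_daily"}},
    "b": {"party": "L", "outlets_seen": {"left_daily", "right_daily"}},
    "c": {"party": "R", "outlets_seen": {"left_daily", "right_daily"}},
    "d": {"party": "R", "outlets_seen": {"left_daily", "right_daily"}},
}
interactions = [("a", "b"), ("b", "a"), ("c", "d"), ("d", "c"), ("a", "c")]

# Operationalization 1: diversity of news consumption.
cross_exposed = sum(len(u["outlets_seen"]) > 1 for u in users.values()) / len(users)

# Operationalization 2: homophily of the interaction network.
same_party = sum(
    users[src]["party"] == users[dst]["party"] for src, dst in interactions
) / len(interactions)

print(f"share of users exposed to both sides: {cross_exposed:.0%}")      # 100% -> no echo chamber by measure 1
print(f"share of interactions between co-partisans: {same_party:.0%}")   # 80% -> strong homophily by measure 2
```

The first measure would report no echo chamber at all, the second a fairly tight one, which is exactly the tension Tufekci's football-stadium analogy resolves.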

These closely bonded groupings can have profound political ramifications, as a couple of my critics in the New Yorker article acknowledged. A major feature of the post-Babel world is that the extremes are now far louder and more influential than before. They may also become more violent. Recent research by Morteza Dehghani and his colleagues at the University of Southern California shows that people are more willing to commit violence when they are immersed in a community they perceive to be morally homogeneous.

This finding seems to be borne out by a statement from the 18-year-old man who recently killed 10 Black Americans at a supermarket in Buffalo. In the Q&A portion of the manifesto attributed to him, he wrote:

Where did you get your current beliefs? Mostly from the internet. There was little to no influence on my personal beliefs by people I met in person.

The killer goes on to claim that he had read information “from all ideologies,” but I find it unlikely that he consumed a balanced informational diet, or, more important, that he hung out online with ideologically diverse users. The fact that he livestreamed his shooting tells us he assumed that his community shared his warped worldview. He could not have found such an extreme yet homogeneous group in his small town 200 miles from Buffalo. But thanks to social media, he found an international fellowship of extreme racists who jointly worshipped past mass murderers and from whom he copied sections of his manifesto.

Is Social Media the Primary Villain in This Story?

In her response to my essay, Raychoudhury did not deny that Meta bore any blame. Rather, her defense was two-pronged, arguing that the research is not yet definitive, and that, in any case, we should be focusing on mainstream media as the primary cause of harm.

Raychoudhury pointed to a study on the role of cable TV and mainstream media as major drivers of partisanship. She is correct to do so: The American culture war has roots going back to the turmoil of the 1960s, which activated evangelicals and other conservatives in the ’70s. Social media (which arrived around 2004 and became truly pernicious, I argue, only after 2009) is indeed a more recent player in this phenomenon.

In my essay, I included a paragraph on this backstory, noting the role of Fox News and the radicalizing Republican Party of the '90s, but I should have said more. The story of polarization is complex, and political scientists cite a variety of contributing factors, including the growing politicization of the urban-rural divide; rising immigration; the increasing power of big and very partisan donors; the loss of a common enemy when the Soviet Union collapsed; and the loss of the “Greatest Generation,” which had an ethos of service forged in the crisis of the Second World War. And although polarization rose rapidly in the 2010s, the rise began in the '90s, so I cannot pin the majority of the rise on social media.

But my essay wasn't primarily about ordinary polarization. I was trying to explain a new dynamic that emerged in the 2010s: the fear of one another, even—and perhaps especially—within groups that share political or cultural affinities. This fear has created a whole new set of social and political problems.

The loss of a common enemy and those other trends with roots in the 20th century can help explain America’s ever nastier cross-party relationships, but they can’t explain why so many college students and professors suddenly began to express more fear, and engage in more self-censorship, around 2015. These mostly left-leaning people weren’t worried about the “other side”; they were afraid of a small number of students who were further to the left, and who enthusiastically hunted for verbal transgressions and used social media to publicly shame offenders.

A few years later, that same fearful dynamic spread to newsrooms, companies, nonprofit organizations, and many other parts of society. The culture war had been running for two or three decades by then, but it changed in the mid-2010s when ordinary people with little to no public profile suddenly became the targets of social-media mobs. Consider the famous 2013 case of Justine Sacco, who tweeted an insensitive joke about her trip to South Africa just before boarding her flight in London and became an international villain by the time she landed in Cape Town. She was fired the next day. Or consider the far right's penchant for using social media to publicize the names and photographs of largely unknown local election officials, health officials, and school-board members who refuse to bow to political pressure, and who are then subjected to waves of vitriol, including threats of violence to themselves and their children, simply for doing their jobs. These phenomena, now common to the culture, could not have happened before the advent of hyper-viral social media in 2009.

This fear of getting shamed, reported, doxxed, fired, or physically attacked is responsible for the self-censorship and silencing of dissent that were the main focus of my essay. When dissent within any group or institution is stifled, the group will become less perceptive, nimble, and effective over time.

Social media may not be the primary cause of polarization, but it is an important cause, and one we can do something about. I believe it is also the primary cause of the epidemic of structural stupidity, as I called it, that has recently afflicted many of America’s key institutions.

What Can We Do to Make Things Better?

My essay presented a series of structural solutions that would allow us to repair some of the damage that social media has caused to our key democratic and epistemic institutions. I proposed three imperatives: (1) harden democratic institutions so that they can withstand chronic anger and mistrust, (2) reform social media so that it becomes less socially corrosive, and (3) better prepare the next generation for democratic citizenship in this new age.

I believe that we should begin implementing these reforms now, even if the science is not yet “settled.” Beyond a reasonable doubt is the appropriate standard of evidence for reviewers guarding admission to a scientific journal, or for jurors establishing guilt in a criminal trial. It is too high a bar for questions about public health or threats to the body politic. A more appropriate standard is the one used in civil trials: the preponderance of evidence. Is social media probably damaging American democracy via at least one of the seven pathways analyzed in our collaborative-review document, or probably not? I urge readers to examine the document themselves. I also urge the social-science community to find quicker ways to study potential threats such as social media, where platforms and their effects change rapidly. Our motto should be “Move fast and test things.” Collaborative-review documents are one way to speed up the process by which scholars find and respond to one another's work.

Beyond these structural solutions, I considered adding a short section to the article on what each of us can do as individuals, but it sounded a bit too preachy, so I cut it. I now regret that decision. I should have noted that all of us, as individuals, can be part of the solution by choosing to act with courage, moderation, and compassion. It takes a great deal of resolve to speak publicly or stand your ground when a barrage of snide, disparaging, and otherwise hostile comments is coming at you and nobody rises to your defense (out of fear of getting attacked themselves).

Fortunately, social media does not usually reflect real life, something that more people are beginning to understand. A few years ago, I heard an insight from an older business executive. He noted that before social media, if he received a dozen angry letters or emails from customers, they spurred him to action because he assumed that there must be a thousand other disgruntled customers who didn’t bother to write. But now, if a thousand people like an angry tweet or Facebook post about his company, he assumes that there must be a dozen people who are really upset.

Seeing that social-media outrage is transient and performative should make it easier to withstand, whether you are the president of a university or a parent speaking at a school-board meeting. We can all do more to offer honest dissent and support the dissenters within institutions that have become structurally stupid. We can all get better at listening with an open mind and speaking in order to engage another human being rather than impress an audience. Teaching these skills to our children and our students is crucial, because they are the generation who will have to reinvent deliberative democracy and Tocqueville’s “art of association” for the digital age.

We must act with compassion too. The fear and cruelty of the post-Babel era are a result of its tendency to reward public displays of aggression. Social media has put us all in the middle of a Roman coliseum, and many in the audience want to see conflict and blood. But once we realize that we are the gladiators—tricked into combat so that we might generate “content,” “engagement,” and revenue—we can refuse to fight. We can be more understanding toward our fellow citizens, seeing that we are all being driven mad by companies that use largely the same set of psychological tricks. We can forswear public conflict and use social media to serve our own purposes, which for most people will mean more private communication and fewer public performances.

The post-Babel world will not be rebuilt by today’s technology companies. That work will be left to citizens who understand the forces that brought us to the verge of self-destruction, and who develop the new habits, virtues, technologies, and shared narratives that will allow us to reap the benefits of living and working together in peace.
