Psychologists can't seem to agree on what technology is doing to our well-being. Some say digital devices have become a bane of modern life; others say they're a balm for it. Between those two extremes lies a shadowy landscape of non-consensus: as the director of the National Institutes of Health recently told Congress, research into technology's impact on our thoughts, behaviors, and development has produced limited and often contradictory findings.
As if that uncertainty weren't vexing enough, many of those findings spring from the same source: enormous datasets that compile survey responses from thousands, even millions, of participants. "The problem is that two researchers can look at the same data and come away with completely different conclusions, and completely different recommendations for society," says psychologist Andrew Przybylski, director of research at the Oxford Internet Institute. "Technological optimists tend to find positive correlations; pessimists tend to find negative ones."
In the latest issue of Nature Human Behaviour, Przybylski and co-author Amy Orben use a novel statistical method to demonstrate why scientists studying these colossal datasets have come away with such different results, and why the effects most researchers find, positive and negative alike, are vanishingly small.

Take, for example, the Millennium Cohort Study, an ongoing study of the long-term health outcomes of more than 200,000 Americans. It contains dozens of questions whose answers a researcher could reasonably interpret as relevant to a person's well-being, on topics as varied as self-esteem, suicidal thoughts, and overall life satisfaction. "But different researchers have different conceptions of well-being, and can choose different questions to suit that conception," says Orben.
Whether they recognize it or not, a researcher who concentrates on specific questions chooses one analytical path to the exclusion of many, many others. How many? In the case of the MCS, combining the survey's questions about well-being with its questions about television, video games, and social media use gives rise to a total of 603,979,752 analytical paths a researcher could take. Combine those with questions posed to the study subjects' caregivers, and the number rises to 2.5 trillion.
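The combinatorial explosion behind those numbers is easy to reproduce in miniature. Here is a toy Python sketch; the question counts and the counting scheme (every non-empty subset of outcome questions crossed with every non-empty subset of predictor questions) are invented for illustration and are not taken from the MCS codebook or from Orben and Przybylski's paper:

```python
# Hypothetical question counts, for illustration only (not the real MCS).
WELLBEING_QUESTIONS = 6   # e.g. self-esteem, life satisfaction, ...
TECH_QUESTIONS = 4        # e.g. TV, video games, social media, ...

def count_specifications(n_outcome, n_predictor):
    """Count analytical paths: every non-empty subset of outcome questions
    paired with every non-empty subset of predictor questions."""
    nonempty_subsets = lambda n: 2 ** n - 1
    return nonempty_subsets(n_outcome) * nonempty_subsets(n_predictor)

print(count_specifications(WELLBEING_QUESTIONS, TECH_QUESTIONS))  # 63 * 15 = 945
```

Even with six outcome questions and four predictor questions, a researcher already faces 945 defensible analyses; with dozens of questions, the count runs into the hundreds of millions.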
Granted, the vast majority of those 2.5 trillion results aren't particularly interesting. But the expansive nature of these datasets gives rise to associations that are technically statistically significant yet very, very small. In science, large sample sizes are generally considered a good thing. But combine the multitude of analytical paths afforded by subjective survey questions with an enormous pool of participants, and you open the door to statistical skulduggery like p-hacking, the practice of fishing through vast quantities of data for statistically significant results. "Researchers will essentially torture the data until they get a statistically significant result that they can publish," says Przybylski. (Not every researcher who reports such findings sets out to deceive; researchers are human, and while science as an institution aspires to objectivity, scientists are prone to biases that can blind them to their own misuse of data.) "We wanted to move past this kind of statistical cherry-picking, so we used a data-driven method to harvest the entire orchard at once."
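The mechanics of this kind of false discovery are simple enough to simulate. The sketch below is purely illustrative and uses randomly generated data, not any real survey: it correlates one noise variable (standing in for "screen time") against twenty unrelated noise outcomes (standing in for different measures of "well-being"), and with a sample of 5,000 a pure-noise correlation can clear the conventional significance threshold:

```python
import random

random.seed(0)

N = 5000          # a large sample, as in big survey datasets
N_OUTCOMES = 20   # many ways to operationalize "well-being"

# Pure noise: a fake "screen time" variable and 20 unrelated outcomes.
screen_time = [random.gauss(0, 1) for _ in range(N)]
outcomes = [[random.gauss(0, 1) for _ in range(N)] for _ in range(N_OUTCOMES)]

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Under the null, |r| > 1.96 / sqrt(N) is "significant" at p < .05
# (two-sided, large-sample approximation).
threshold = 1.96 / N ** 0.5
rs = [pearson_r(screen_time, y) for y in outcomes]
hits = [r for r in rs if abs(r) > threshold]
print(f"{len(hits)} of {N_OUTCOMES} pure-noise correlations are 'significant'")
```

A researcher who reported only the "hits" would be p-hacking; reporting the full distribution of all twenty correlations, every one of them tiny, would not.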
He and Orben found that method in a statistical tool called specification curve analysis. Rather than pursue a single analytical path through the Millennium Cohort Study, SCA allowed them to explore 20,000 of them. It also allowed them to examine all 41,338 paths through two other large datasets, Monitoring the Future and the Youth Risk Behavior Survey, which are commonly used to assess the connection between digital habits and adolescent well-being.
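In spirit, a specification curve is just this: estimate every defensible specification, record each effect size, and sort them so the entire distribution is visible at once. Here is a minimal sketch on invented noise data, with hypothetical variable names; the real method also includes significance testing across the full curve, which is omitted here:

```python
import random
from itertools import product

random.seed(1)
n = 1000

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Toy data: several ways to measure "tech use" and "well-being" (names invented).
tech = {name: [random.gauss(0, 1) for _ in range(n)]
        for name in ("tv", "games", "social_media")}
wellbeing = {name: [random.gauss(0, 1) for _ in range(n)]
             for name in ("self_esteem", "satisfaction")}

# One specification = one (predictor, outcome) pairing; estimate them all,
# then sort the effects to form the curve.
curve = sorted(
    (pearson_r(tech[t], wellbeing[w]), t, w)
    for t, w in product(tech, wellbeing)
)
for r, t, w in curve:
    print(f"{t:>12} vs {w:<12}  r = {r:+.3f}")
```

Plotting that sorted list of effects, rather than publishing the single most dramatic one, is what lets SCA show how much the answer depends on the specification chosen.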
The result was a series of visualizations capturing the full range of possible effects researchers could detect in the three repositories, and they reveal several important things. First, small changes in analytical approach can lead to dramatically different results within that range. Second, the association between technology use and well-being is negative. And third, that correlation is very, very small, accounting for no more than 0.4 percent of the variation in adolescent well-being.
To put that in perspective, the researchers compared the association between technology use and adolescent well-being with the associations of other factors captured by the large datasets. "Using technology is about as associated with well-being as eating potatoes," says Przybylski. Which is to say: hardly at all. By the same logic, bullying had an effect size four times greater than screen use. Smoking cigarettes? Eighteen times greater. Conversely, getting enough sleep and regularly eating breakfast were associated with teen well-being at magnitudes 44 and 30 times greater, respectively, than technology use.
In other words, technology's impact on well-being may be statistically significant, but its practical significance, at least according to existing datasets, appears negligible. "The level of association documented in this study is not commensurate with the level of panic we see around screen time," says psychologist Candice Odgers of the University of California, Irvine, who studies children's development. "It really highlights the disconnect between the public conversation and what most of the data actually show."
What the study does not do is close the book on technology's effects. Instead, it draws attention to the need for more nuanced questions. Not all screen time is alike, yet most recent studies treat it as a monolith. "It's like asking whether food is good or bad for you, and those questions will never get us anywhere in the end," says Orben. "We need to retire the debate over generic technology use and well-being, and open up space for more and better research into what kinds of technologies people are using, who is using them, and how."