
When method speaks volumes: What COVID-19 has shown about mental health research


A Q&A with Dr Brett Thombs


As a clinical psychologist and methodologist, Dr Brett Thombs holds a number of titles. At McGill University, he is a Professor in the Faculty of Medicine, and at the Jewish General Hospital he is a Senior Investigator at the Lady Davis Institute for Medical Research. He also holds a Tier 1 Canada Research Chair. He chaired the Canadian Task Force on Preventive Health Care until this month, and he founded SPIN, the Scleroderma Patient-centered Intervention Network. Finally, Dr Thombs co-leads DEPRESSD, an international project that assesses the accuracy of depression screening tools.


Thombs’ DEPRESSD team has been collecting and harmonising individual participant data from 350 datasets in over 50 countries since 2012. When the pandemic hit, they took on the added task of conducting a living systematic review to understand the impact of COVID-19 on mental health. This monumental task was made no easier by the unprecedented volume of research that has been produced since March 2020. Part of the challenge, says Thombs, is sifting through it all to find meaningful studies, which are few and far between in a time when the bar has been set low for publishing on mental health.


Their work has led to interesting findings, most notably the unexpected discovery that mental health symptoms, on the whole, did not change much during the pandemic compared with before it. Thombs spoke with COVID-MINDS about how other studies compare and how research can be improved moving forward.


1. When you began your living systematic review during COVID, what questions were you asking?


Thombs: We were looking at two main things. What was happening to mental health symptoms during COVID-19? And if we were going to have mass demand for mental health services, were there ways to deliver interventions with already proven components to more people? We wanted to see what interventions were scalable for a crisis situation.


2. What have been the main challenges of the living systematic review?


Thombs: It’s been tricky to keep it going because of the workload. So much research is coming out at such a fast rate. With most systematic reviews, you find your evidence, you fold up your tent and you might come back to it sometime in the future, whereas living systematic reviews have to be kept going and continually funded. But they are really good for when evidence is changing and for tracking changes in results and conclusions. The pace of this one, though, is unprecedented. Normally you go through a few citations every month, and every year you might include a trial or two and keep it rolling. But with this one, evidence is being produced so fast. We initially set up our searches daily but shifted to weekly. It’s probably one of the fastest living systematic reviews ever. We’ve gone through over 60,000 citations, including many from Chinese-language databases. It’s been a real challenge to pedal fast enough to keep up.


3. Was anything immediately apparent?


Thombs: Right away we were really concerned about the quality of studies. We wanted to focus on studies with good principles, such as using pre-COVID data rather than adopting cross-sectional methods. And we were finding results that were all over the map. In the trials we were seeing effect sizes of 7, 8, 10 standard deviations, which is beyond plausible. Treating depression typically reduces symptoms by a third to half of a standard deviation. Treating anxiety gets larger effects, closer to a standard deviation, but nothing like what we were seeing in lots of these trials. So we actually started a process where we tried to verify these results with the authors. We’ve only been able to verify a small handful. Many of the studies don’t even have contact information. Some trials didn’t have any institutions listed. Only a handful have been registered. It has really been quite a puzzle. It’s been very tricky to handle, because usually in a systematic review you don't want to exclude any evidence, but we’ve been relegating many of these studies to appendices.
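
To put those numbers in perspective, here is a minimal sketch of how a standardised effect size (Cohen's d) is calculated; the questionnaire range, standard deviation and group means below are hypothetical illustrative values, not figures from the review.

```python
# A minimal illustration of a standardised mean difference (Cohen's d).
# All numbers are hypothetical and chosen only to show the scale of the
# effect sizes discussed above; they are not taken from any study.

def cohens_d(mean_treatment, mean_control, pooled_sd):
    """Difference in group means divided by the pooled standard deviation."""
    return (mean_treatment - mean_control) / pooled_sd

# Suppose a depression questionnaire is scored 0-27 with a pooled SD of 5 points.
pooled_sd = 5.0

# A plausible trial effect (d of about 0.3-0.5) corresponds to a mean
# difference of only 1.5-2.5 points on that scale.
print(cohens_d(mean_treatment=10.0, mean_control=12.0, pooled_sd=pooled_sd))  # -0.4

# A reported d of 7 would imply a 35-point mean difference -- larger than the
# entire 0-27 range of the questionnaire, which is why such results are
# described above as beyond plausible.
print(7 * pooled_sd)  # 35.0
```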


"If one thinks about it, to publish a paper in mental health all you need is a computer, a questionnaire and a patient population. These types of studies are coming out in massive amounts during COVID-19... you have to sort through piles of studies that are not helpful or have potentially misleading results."

4. Of the 60,000 citations, how many do you estimate were rigorous and helpful?


Thombs: Currently in our work on symptom cohorts, we’ve included about 70 studies. So it’s only a handful that are usable. The UK in particular has wonderful cohorts with mental health evidence. We wanted either population-wide cohorts or good cohorts that followed specific vulnerable populations both before and during the pandemic. The trials have been more challenging. I think there are fewer than 10 that are useful in any way. We have found 3 trials that we thought were high-quality trials of innovative, scalable interventions – and to disclose, one of them was from our SPIN team. But most of the others are difficult to use to draw any conclusions, particularly the 50+ done with people with COVID-19.


5. What do you think is leading to this high volume of low-quality studies on mental health?


Thombs: Unfortunately, we have a bit of a problem in our field. Really good mental health epidemiology is hard to do. We have wonderful epidemiologists out there doing great work, but it is often hard to find amid the huge volume of very poor-quality studies. If one thinks about it, to publish a paper in mental health all you need is a computer, a questionnaire and a patient population. These types of studies are coming out in massive amounts during COVID-19. While there are impressive studies done in difficult conditions by really skilled psychiatric or mental health epidemiologists, you have to sort through piles of studies that are not helpful or have potentially misleading results.


I really do want to emphasise that there are tremendously good mental health researchers in our field. But there is too much of the computer-questionnaire-patient study model, with little thought given to what it means to get a certain score on a questionnaire. Mental health symptom questionnaires are useful for a number of clinical purposes, but people use them to report on prevalence. These questionnaires, however, are absolutely untethered to prevalence; they have nothing to do with it. Unfortunately, almost all the studies we are seeing are cross-sectional and reporting these kinds of things. We’ve even seen evidence syntheses where people combine different questionnaires with different score thresholds and report possible prevalence ranges that cover most of the spectrum, which doesn’t mean anything. This has been happening in our field for a number of years but is much worse during COVID-19.
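
As a rough illustration of why a questionnaire cut-off is not the same thing as prevalence, the sketch below uses the standard relationship between true prevalence, sensitivity and specificity to show how "screen positives" can far exceed actual prevalence; all the numbers are invented for the example and are not DEPRESSD estimates.

```python
# Why the proportion scoring above a questionnaire cut-off ("screen positives")
# is not the same as disorder prevalence. The prevalence, sensitivity and
# specificity values below are illustrative assumptions, not DEPRESSD estimates.

def screen_positive_rate(prevalence, sensitivity, specificity):
    """Expected proportion above the cut-off given imperfect sensitivity/specificity."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives + false_positives

# With a true prevalence of 10% and a cut-off that is 85% sensitive and
# 80% specific, about 26.5% of respondents screen positive -- more than double
# the actual prevalence -- and the figure shifts again with a different
# questionnaire or threshold.
print(screen_positive_rate(prevalence=0.10, sensitivity=0.85, specificity=0.80))  # 0.265
```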

"Hopefully with this scrutiny we’ll get more funding for good longitudinal methods and reasonable comparisons. We really need ongoing mental health surveillance cohorts. Let’s clean up our house and do better mental health research because we need it!"

6. How do you think this can be solved?


Thombs: I’m hoping results from strong studies will help our field wake up and say that we shouldn’t be doing this kind of work during COVID-19 and we shouldn’t do it outside of COVID. Although we’re still updating our synthesis, it seems that the longitudinal cohort studies are showing us that at the beginning of the pandemic there was potentially a small blip of mental health change but then things stabilised. Even when we put all of these studies together, we find some small upward rise in symptoms in the first couple of months of COVID-19 and then it comes back down. These results differ so strikingly from studies that conclude that there have been significant changes but are just based on symptoms measured cross-sectionally! Hopefully with this scrutiny we’ll get more funding for good longitudinal methods and reasonable comparisons. We really need ongoing mental health surveillance cohorts. Let’s clean up our house and do better mental health research because we need it!
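
For readers curious what "putting all of these studies together" involves, here is a minimal fixed-effect, inverse-variance pooling sketch, a simplified stand-in for the fuller evidence-synthesis models a living review would use; the per-cohort change estimates and standard errors are invented for illustration.

```python
# A minimal fixed-effect, inverse-variance pooled estimate of a standardised
# mean change across cohorts. The per-cohort estimates and standard errors
# below are invented for illustration; they are not results from the review.

def pooled_estimate(estimates, standard_errors):
    """Return the inverse-variance weighted mean and its standard error."""
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical cohorts reporting small standardised changes from
# pre-pandemic to early-pandemic measurements.
changes = [0.15, 0.05, 0.10]
standard_errors = [0.05, 0.04, 0.06]

estimate, se = pooled_estimate(changes, standard_errors)
print(f"pooled change = {estimate:.2f} SD (SE {se:.2f})")  # roughly 0.09 SD, a small change
```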


7. Do you think this is a controversial opinion?


Thombs: I think the typical conversation – about the vast prevalence of mental health conditions – is starting to change. Some members of the Mental Health & Wellbeing Task Force of the Lancet COVID-19 Commission recently wrote about this in an article in The Atlantic. I agree with their comments; they said that many of us researchers had predicted a mental health catastrophe with COVID-19 but we were wrong. Mental health symptoms overall haven’t changed that much. We’ve all been surprised, but we need to follow the data. For example, suicide data has actually been stable or down in almost all countries since COVID started. It’s possible that governments haven’t published all of their data yet, but of the evidence available, most countries show that suicide rates either remained stable from before the pandemic or even went down. A great Norwegian study assessed disorders using a randomly selected sample, and rates haven’t increased there either. These findings corroborate what we’re seeing. So I don’t think it’s controversial at this point among people doing careful work, because it’s becoming pretty evident. It will, though, be surprising for some others.


8. Given that mental health has been a popular topic during the pandemic, do you think the public and policymakers are aware of this research?


Thombs: I want to clarify that we’re looking at aggregate data -- we're not seeing a big net change. I suspect that the pandemic has been very different for different people. Some people have had their lifestyles change for the better. On the other hand, some people have been devastated by this. So I suspect there are many people who weren't struggling with mental health before COVID-19 but are now. We have to keep in mind that what we're seeing is no net change.


Also, we need to differentiate what we mean by ‘mental health.’ All of us have struggled in different ways during the pandemic. But there is a difference between struggling with tough life circumstances and having a mental health condition, which means, essentially, that you’re not able to cope or are severely limited in your ability to cope. That is very different. What’s happened is that researchers are asking simple questions like, “During the pandemic have you experienced frustration?” and they’re getting high rates. But this is different from depression or other mental health conditions. There was one study that asked people such questions and also included the PHQ-9 and an anxiety measure, and it found a discrepancy between the two. The cross-sectional questions about COVID-19 effects produced really high rates, but the depression and anxiety symptoms had not changed from pre-COVID-19. So this is an important takeaway too. We need to separate normal levels of struggle with life circumstances from mental health conditions that leave you unable to cope with day-to-day challenges and life satisfactorily.


My interactions with policymakers suggest that many are likely still working with the assumption that there has been a mental health disaster – a “tsunami” of a mental health crisis, as it’s been called in the press. I think, though, with good leadership from knowledgeable people in the field, they will figure this out. At the same time, they shouldn’t be discouraged from working to make mental health supports available. Many countries have worked to increase access during the pandemic, and that was really needed; it may even have helped to dampen the negative mental health implications of the pandemic. So, we need to make sure that the focus on mental health service accessibility is maintained. Furthermore, we need to recognise that there may be vulnerable populations for whom disparities in mental health burden and service access have worsened during the pandemic, and we need to work to identify and reduce those disparities.


"Improving access to mental health resources has been a really positive change that I hope will continue post-pandemic... At the same time, we need to be a little bit careful... we also need to cut out the waste from our field. Over-medicalisation isn't helpful for anyone."

9. Have you seen any positive changes in mental health research during the pandemic?


Thombs: Improving access to mental health resources has been a really positive change that I hope will continue post-pandemic. Access is a crisis issue in many places, including where I live in Quebec, Canada, even in normal times. Destigmatising mental health conversations has been another really positive change. At the same time, we need to be a little bit careful. Other fields have gone through this. For example, I remember a breast cancer poster from the ‘70s that said if you don't have your breasts examined you need your mind examined. It used to be ‘the more the better’ for all types of screening interventions – or, really, almost any kind of medicine. We now understand better that there are positive and negative aspects to all interventions and health programmes. In breast cancer screening, women need to have the right information to make the best decision, and getting screened may be the best decision for some but not for others. In many areas, there is greater consciousness about the perils of too much medicine, and BMJ, for example, has taken a leadership role in this. There is also a risk of too much medicine in mental health. As it is, we don’t have enough treatment access for people with acute and serious mental health problems, while at the same time we may be overtreating in some cases. We’re under-resourced, but we also need to cut out the waste from our field. Over-medicalisation isn't helpful for anyone. So, we need to maintain better access but also get better at making sure we use it for the people who need it most.
