On November 25, the Canadian Association of Journalists (CAJ) released the results of their first “Canadian Newsroom Diversity Survey.”
The survey was launched exactly a year earlier, and the CAJ writes that it was the result of “three years of consultations with survey design experts, international partners, and Canadian organizations and individuals in Canada that have studied the diversity of our country’s media ecosystem.”
The CAJ should be congratulated for this accomplishment, which could be the start of a new chapter in diversity survey efforts in Canadian media. However, the CAJ’s survey also deserves close scrutiny, particularly because of how important it could become. Thus far, I have come across much celebration, but little written critique outside of Twitter. I intend for this article to point out some of what’s good about the survey, and what could be improved.
But first, I need to disclose that I was a research assistant on a successful 2020 application led by Asmaa Malik and Sonya Fatah to get SSHRC funding for a future newsroom diversity survey that will be based on self-reported data. As part of my work with Malik and Fatah, I was on a couple of calls with CAJ members, who were interested in our opinions on what a diversity survey could look like.
My involvement with Malik and Fatah’s project ended in June 2020 when I became a full-time employee at Passage, and the following opinions are mine alone, written for the sole purpose of hopefully helping to improve future diversity surveys of Canadian news media.
With that out of the way, let’s begin with what the CAJ survey did well.
What Is Done Well
The CAJ’s main success is not that this survey has been released, but that they plan to release one annually going forward.
The CAJ is by no means the first group to release a diversity survey of Canadian newsrooms. In my research, I found at least seven other surveys that offered quantitative data on Canadian newsrooms, released between 1994 and 2017. The problem is that each of these surveys used a different methodology, and focused on varying sorts of journalists or newsrooms. As such, there’s no way to directly compare the numbers, making any analysis of whether newsrooms got more representative over the years less useful.
Diversity surveys need to be done annually, or at least consistently, to help track how individual newsrooms and the industry as a whole change, and hold their leaders accountable to the goals they set. So, the CAJ’s plans to do exactly that, with presumably a relatively similar methodology going forward, represents a major breakthrough in Canadian media diversity surveys. This is worthy of applause.
Next, the CAJ’s survey included data on more than 3,800 journalists for one of the questions it asked.
A 1994 diversity survey included more than 2,600 journalists, while one in 2006 covered more than 2,100. The other five I previously mentioned included far fewer responses. This makes, as they note, the CAJ’s project “the most filled out diversity survey in Canadian media history.”
The CAJ also included responses from 209 outlets, whereas the 1994 study, for example, had just 41. This means the CAJ’s survey includes a broader range of media outlets (as their list of all participants makes clear) including independent ones, which make up an increasingly important part of the landscape.
This shouldn’t be read as an attack on past studies though, as part of the narrowness of the responses is due to newsroom managers themselves. For example, the 1994 survey had a roughly 50 per cent response rate. The CAJ’s was slightly lower, at about 40 per cent. The CAJ does note, however, that they’ve “received commitments from some of [the largest Canadian newsrooms that didn’t participate this year] to fill out this survey next year,” which is a good sign.
The wide response to the CAJ’s survey is a positive overall (more on that later, though, as it has drawbacks, and is somewhat misleading).
The CAJ’s data was also broken down into four categories: full-time supervisors, full-time non-supervisors, part-time workers and interns.
This allowed them to provide data to help answer a relatively common question: Do diversity efforts end up only leading to more racialized part-time workers and interns being hired, but little change in more senior positions? Their data finds that this is the case, and it being broken down this way is useful for a more nuanced analysis of diversity efforts going forward.
What Can Be Improved
The survey’s major problem is not any particular question (though there are issues with some of their details, which I’ll get back to), but rather who the questions are directed to.
The CAJ notes that their survey was sent only to the “editor-in-chief, or their equivalent, at radio, television, digital, and print outlets across the country.” They explain that, “In order to maximize participation, the survey asks for information that most newsrooms already collect, or that they typically collect, during hiring,” which these figures can get access to.
The idea is that this method will get the CAJ more data, because it requires less work from the managers and higher-ups who can decide whether to pass it on than a different model would. It also happens to require less work on the CAJ’s part (which is not to say little work, as I’m sure the whole project took a great deal of effort and coordination that many other groups wouldn’t have been able to pull off).
As I noted, this diversity survey had data on a larger number of journalists than any other in Canada since at least 1994, and likely ever. So, the CAJ could say their method was a success. But I want to point out a few issues with it.
To start, there’s no guarantee that this model will “maximize participation.”
The CAJ notes that they “emulate[d] much of their methodology” from the News Leaders Association (formerly known as the American Society of News Editors) in the United States. As of 2019, that survey had been running annually since 1978, which certainly means there’s something valuable there. At the same time, participation in the survey has plummeted over the years, to just 17 per cent for the 2018 survey, a record low. The group didn’t run a survey in 2020 due partially to this lack of participation. Their ongoing 2021 survey, meanwhile, had its deadline pushed back in October because of an abysmal participation rate of 4 per cent.
Can a survey based on this model really hope to maximize participation rates?
Even if the survey does lead to increased participation over the next few years, which is uncertain, it’s almost guaranteed to yield only a minimal amount of data.
This is because it limits the data to what newsrooms already have, which in many cases isn’t a lot. As a result, the data offered is just about gender and race. As many people pointed out, and the CAJ admits, this leaves out a lot of other factors. What about disability? Class? Religion? Sexuality? Education? Languages spoken? Birth place? The examples go on.
And even on race and gender, participation was hampered because it relied on what newsrooms already had. As the CAJ report notes, “Race data for nearly a quarter of the journalists included in this survey are marked as Unknown by their newsroom managers.” They add, “These newsrooms informed the CAJ that high tallies under the Unknown race category are due to individuals who chose not to self-identify on internal staff surveys.” As a result, they note that “while most national analysis in this report is based on the 3,873 journalists, the race statistics are based on the 2,908 journalists that we have race data for.”
This makes the data far less useful, which the CAJ admits. It also means that their promotion of the 3,800+ journalist participation number is a bit disingenuous, as it applies to just one of the two data points (which is not, as they claim, “most” of the data).
Sure, you could say the lack of data appears to be due to individual choices, and so maybe it’s inevitable. But this assumes that journalists who didn’t respond to a diversity survey when hired would make the same choice now, or would have even made the same choice then had they been asked by someone other than their employer. It’s not a safe assumption.
Also, although the CAJ said they compiled data newsrooms already had, including data “sourced from optional surveys conducted internally by newsrooms,” they also admitted they decided not to take all of it. They note that managers told them they are collecting “data on LGBTQ2+, disability, language, faith, and level of education.” They don’t explain why they chose not to include any of said data in their analysis. Of course, it could be that there just wasn’t enough of it, but that at least merits an explanation of what the minimum amount of data would be for it to be included.
The issue is not restricted to a lack of quantitative data, however, as qualitative feedback is also severely limited.
Because the CAJ solely interacted with managers or others in similar positions, we only hear what they had to say, with answers prompted by just four questions. The questions and answers are valuable, but it would be more useful to hear from both workers and managers, or even just workers. This is partly because, as the CAJ’s data shows, managers are more disproportionately white (81.9 per cent) than anyone else in newsrooms (76 per cent for full-time non-supervisors, 66.4 per cent for part-time workers and 52.7 per cent for interns).
We need to hear from the people being burdened by the lack of proper demographic representation in newsrooms, who are more likely to be workers. This could also be a venue for workers to express feelings that they can’t in the newsroom for a variety of well-established reasons, including job security.
How It Can Be Improved
After the survey was released, I noticed many on Twitter asked the CAJ why their survey didn’t include the demographic factors I previously mentioned, such as disability. These concerns were also raised a year earlier when the survey was announced.
Almost without fail, someone speaking on behalf of the CAJ would reply and explain that it’s because they relied on data newsrooms already had, and, sometimes, that they were exploring different options going forward. Their FAQ page for the report addresses this question directly, writing, “All of these are important lines of inquiry and ones which the CAJ would like to poll newsrooms on in future years for subsequent iterations of the survey. With the goal of maximizing participation, the survey asks newsroom leaders for information that they have already collected or that they typically collect during hiring.”
This feels like somewhat of a cop-out. Sure, newsrooms should be collecting more data. But if you’re putting together what you want to be a comprehensive diversity survey, it’s your job to work around the fact that they don’t. And how you do that is not exactly a mystery, as it has been done elsewhere for years.
Of course, if the CAJ has “spent the past three years carefully studying every diversity survey ever conducted in Canada, as well as many international examples,” as they claim, they probably already know what the way forward is. And if they don’t, I wonder what another year of “exploring” will do to get them there. Not that it’s necessarily going to happen anytime soon, as they write, “The CAJ’s goal is to increase both the number of newsrooms and the number of journalists who participate in next year’s survey.” So, more of the same, for now.
Here’s what another well-established model could look like, and how it helps solve many of the problems outlined above.
A self-reported diversity survey would focus on workers, not managers. Instead of asking only managers what they think and getting them to pass along data they’ve collected, a self-reported survey would connect with workers directly. This model is already in place in various industries, including newsrooms outside North America. As with so many other examples in Canada, we miss out on the best approaches by looking only to match the U.S.
The way this sort of survey works is that the organization partners with a data firm, which creates secure surveys that can be completed online, with results stored securely and privately for later analysis. This could be done either by reaching out to journalists directly (which would have the benefit of including freelancers), or by convincing managers to include their newsrooms. Even the latter option would give more power to participants than the CAJ’s model does.
Each journalist would get a unique link and fill the survey out on their own. Because it’s self-reported, the survey can (legally) ask a ton of questions that employers can’t. The organizers can also ask journalists whatever sort of qualitative questions they’d like, meaning we’d get to hear what workers think. The results could then be compiled and released annually.
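To make the unique-link mechanism concrete, here is a minimal sketch of how such a system might handle invitations while keeping responses anonymous. Everything here is hypothetical: the function names, the example URL and the stored fields are illustrative, not a description of any actual survey platform the CAJ or a data firm uses.

```python
import secrets


def issue_survey_links(emails, base_url="https://example.org/survey"):
    """Generate one unique, unguessable survey link per journalist.

    The email-to-link mapping would be kept only long enough to send
    invitations; responses are stored keyed by token alone, so the
    analysis dataset never contains identities.
    """
    invitations = {}
    for email in emails:
        token = secrets.token_urlsafe(16)  # cryptographically random token
        invitations[email] = f"{base_url}?t={token}"
    return invitations


def record_response(store, token, answers):
    """Store a self-reported response keyed by token only.

    Rejecting reused tokens enforces one response per link.
    """
    if token in store:
        raise ValueError("token already used")
    store[token] = answers
```

Because tokens are random rather than derived from names or emails, the same mechanism lets the survey ask sensitive questions (sexuality, disability, faith) without the answers ever sitting in an employer-controlled file.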
This method also easily overcomes one of the objections the CAJ cites for not including demographic factors outside race and gender. They claim that, “To collect data on LGBTQ+ identity, for example, managers would need to ask employees how they identify, potentially putting staff in the difficult position of having to come out to their manager.” That’s only true of their current model, and wouldn’t be the case for a self-reported diversity survey. The CAJ also seems to overlook that people may not want to disclose their gender or race to managers for the same reasons. The self-reported model could help with that as well, as noted.
There are many examples of some form of this happening in journalism alone. One such example is the 2016 “Journalism In The UK” study released by the University of Oxford’s Reuters Institute for the Study of Journalism. Here are just some of the many factors the survey was able to compile data on due to its structure: race; gender; age; income; years of professional experience; level of education; religious affiliation; degree of religiosity; political affiliation; newsroom position. The survey also includes a ton of qualitative questions that go beyond demographics, and into what journalists believe about their industry, their roles in society, etc. This survey could easily be expanded to include all of the factors I mentioned earlier.
There are a couple of potential drawbacks to this method. First, the survey just described included responses from 700 journalists, far fewer than the CAJ’s did. This could happen to the CAJ as well if they pursued this option, though it’s not guaranteed. Second, if the CAJ pursued this method next year, they wouldn’t be able to make direct comparisons to this year’s data.
However, following this path would allow for more data on a range of demographic factors to be collected, and it would be better to lose one year of comparisons in order to enable many years of comparisons of more useful data in the future. Moreover, much of the data the CAJ did get was already based on inconsistent internal surveys, and in some cases, such as with the Toronto Star, this resulted in unusable data. My suggested method, meanwhile, would mean everyone is getting asked the same questions, in the same way, leading to more useful data now and for comparisons in the future.
As a whole, I believe some version of a self-reported survey would be the best way to help fix the issues with the CAJ’s model. The CAJ is also in a great position to implement such a survey, as they already have contacts with these newsrooms, and appear to have acquired some trust on the matter. For that reason, I hope they will take up this model in the future.
If they don’t, however, I still have a couple of suggestions for next year’s survey to help improve the one they already sent out.
The survey’s “Middle Eastern” category includes “Afghani,” which is wrong for two reasons: Afghanistan is not considered part of the Middle East, and the afghani is a currency, whereas Afghan is the correct term for the people. The CAJ should correct this mistake next year by writing “Afghan” instead of “Afghani,” as even StatCan does, and either rename their “Middle East” category or, ideally, add a new one so that Afghanistan and other Central Asian countries aren’t miscategorized.
The report also combines the following four groups into one category, labelled as “Asian”: “Asian Caribbean”; “East Asian”; “South Asian”; “Southeast Asian.” These groups make up more than half of the world’s population, and combining them all into one category does a disservice, even in the Canadian context where “East Asian” and “South Asian” represent the largest racialized groups in the country. Next year, they should be split up to a greater degree than they are now. If they can’t be due to data limitations, this gives all the more reason to pursue a self-reported survey for more nuanced analysis.
As I mentioned earlier, I was involved in calls with the CAJ years ago where I expressed my thoughts on diversity surveys. I also helped get funding for a future survey that is more aligned with the vision I’ve described here. With this in mind, I hope my criticism will not be written off as coming from someone content to critique but not create (though criticism is valuable regardless). At this point, criticism is all I can give, and it’s done for the purpose of improvement.
I look forward to hearing updates from CAJ on next year’s diversity survey, and how they plan to respond to the criticism they’ve received of their valuable efforts.