Research

Methodology

The Institute employs a range of scientific methods in its research, maintaining strong relationships with the social cohesion sector and other research bodies to deliver balanced, accurate findings.

The methodology behind our research

The Scanlon Foundation Research Institute carries out or commissions research into social cohesion, multiculturalism and related issues in Australia. Bringing together leading thinkers and institutions, we develop and analyse research that informs practical projects and outreach in the community.

The Institute’s research draws on qualitative and quantitative techniques, including surveys and interviews. Our commitment to inclusion and engagement ensures that participatory and survey research are key features of this activity.

Networked and partnership-based research activity is also a priority. This focus on collaboration is helping to build and maintain a comprehensive Australian research database and to avoid unnecessary duplication.

Mapping Social Cohesion Research Methodology

Polling and Democracy

One of the pioneers of public opinion research in the 1930s, George Gallup, articulated the view that knowing what the public thinks is a fundamental element of a democracy. 

There was a need, as he expressed in the title of his 1940 book, to take the ‘pulse of democracy’, to provide an in-depth understanding of the public mood. He wrote: ‘If the government is supposed to be based on the will of the people, then somebody ought to go out and find out what the will is.’

Gallup believed that public opinion surveys were the best defence against political movements which claimed without evidence to speak for the majority opinion. Polls were needed, in Gallup’s view, to provide an understanding of opinion on a broad range of issues: ‘the practical value of the polls lies in the fact that they indicate the main trends of sentiment on issues about which elections often tell us nothing.’     

Polling prior to Gallup 

In the early decades of the twentieth century, numerous newspapers and magazines in the United States conducted what were termed ‘straw polls’ to provide an indication of public opinion. 

The Literary Digest, an influential American weekly publication which by 1927 reached a circulation of more than one million, won prominence with its use of such polls. In 1916 it initiated a procedure of mailing postcards to its subscribers, requesting an indication of intended vote by return mail. It correctly predicted the outcome of the presidential elections between 1920 and 1932.

During the 1936 election campaign, postcards were sent to ten million households, eliciting 2.4 million responses. Despite the huge number, the ‘straw poll’ failed spectacularly. Its prediction favouring Alfred Landon over President Franklin Roosevelt was incorrect by a margin of nearly 20%, with Landon gaining the equal-lowest Electoral College numbers in history. Such was the discrediting of the Literary Digest that it ceased publication within two years of the election.

The problem, little understood at the time, was the unrepresentative character of the sample. The postcards had been sent to subscribers of the Literary Digest and to registered owners of cars and telephones, that is, to relatively well-off segments of the population during the Great Depression, with lower income groups excluded. 

In contrast, George Gallup, using a much smaller but representative sample, correctly predicted Roosevelt’s victory. Gallup commented that ‘no mere accumulation of ballots could hope to eliminate the error that sprang from a biased sample.’

Scientific polling

Following its success in 1936, the organisation established by Gallup, the American Institute of Public Opinion, successfully predicted election results over more than a decade. In a 1948 presentation, Gallup stated that his institute had produced 392 election forecasts with an average error of 3.9 percent. The average error of forecasts made after November 1944 was even smaller, just 2.9 percent. 

Americans came to trust what became known as Gallup Polls. But in 1948 Gallup incorrectly forecast that Thomas Dewey would defeat the incumbent President Harry Truman, a result described by Time magazine as the biggest polling blunder since the 1936 election. The editor of the Pittsburgh Post-Gazette summed up the public mood: ‘We won’t pay … attention anymore to “scientific” predictions’, a refrain repeated many times since.

It is in the context of elections that the accuracy of polls is most open to scrutiny in the media. In contrast, surveys on social and political issues are often featured without consideration of reliability. Such polls pass without question because, unlike an election prediction, there is no perceived yardstick against which to measure accuracy. Failures to predict the results of the 2016 Brexit referendum, the 2016 American presidential election and the 2019 Australian election fuelled disenchantment with polling.

In the context of the 2019 Australian federal election, 16 published polls predicted a Labor win with a two-party margin of 51%-49% or 52%-48%. The actual result was almost the reverse of that predicted: 48.5% for Labor, 51.5% for the Coalition.

On election night, the ABC’s resident political expert Antony Green commented on the ‘spectacular failure of opinion polling’; political scientist Dr Andy Marks observed that ‘mainstream polling has become… worthless’; and Greens leader Richard Di Natale considered that the ‘era of opinion polls… is over.’

False expectations of accuracy 

In part, these judgements reflect a false expectation of accuracy created by the misuse of findings. Polls are used to create headlines and controversy, and to sell newspapers, as illustrated by the regular polling of the popularity of the Prime Minister and Leader of the Opposition. Front-page reporting has been based on minute shifts in polls, of the order of one percentage point, which are meaningless: polls do not claim to be accurate to a single percentage point.

Margin of error

Polls are estimates of public opinion; at best, with a large representative sample, their margin of error is close to +/- 2.5% at a confidence level of 95%, which means that 19 times out of 20 the actual result will lie within that margin of the indicated result. Hence if a poll indicates a result of 51%, the actual result is expected to be in the range of 48.5% to 53.5%.
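
To illustrate how these figures relate, the sketch below applies the standard margin-of-error formula for a simple random sample. The sample size of 1,500 and the 51% poll reading are illustrative values taken from the examples in this article, not figures published by any polling agency.

```python
import math

def margin_of_error(sample_size: int, proportion: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a simple random sample (largest when proportion = 0.5)."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

n = 1500                               # illustrative sample size
moe = margin_of_error(n)               # roughly 0.025, i.e. +/- 2.5 percentage points
poll_result = 0.51                     # a hypothetical published poll reading
low, high = poll_result - moe, poll_result + moe

print(f"Margin of error: +/- {moe:.1%}")                             # +/- 2.5%
print(f"A 51% reading implies a range of {low:.1%} to {high:.1%}")   # 48.5% to 53.5%
```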

In the USA – the country with the largest investment in polling – a database of political polling from 1998 to 2014 indicates that the best electoral polls miss the actual result by an average of 4.3 percentage points. In America in 2018, the best interviewer-administered polls had an average error of 4 percentage points, while online polls had an average error of 5.3 percentage points.

If a poll picks the winner, there is little attention to the detail of the predicted result. For example, there was little criticism after the 2018 Victorian election, when the polls predicted a Labor victory but did not consistently point to the landslide that eventuated. At the 1936 American presidential election which launched George Gallup to national prominence, the focus was on his correct prediction of a Roosevelt victory, not on the detail that his prediction was out by a margin of almost 7%.

Recognition of the margin of error demonstrates that the problem with polling has less to do with inaccuracy than with an unwarranted expectation of precision. In the context of the 2019 Australian election, the problem was the way in which results were reported. For a number of polls, the possibility of a Coalition victory was within the margin of error. Given the number of undecided voters and the minimal investment in polling of electorates and states, analysis needed to acknowledge that, on the basis of the available evidence, the result was too close to predict. This was the finding of at least one privately commissioned polling agency.

Fit for purpose

A key concept in appraising polls is ‘fitness for purpose’: the degree of accuracy required. 

John Utting, the former Labor Party pollster, commented after the Australian election that the ‘quality of the samples isn’t as rigorous as it should be.’ For polling on elections, accuracy needs to be at the maximum level. For surveying community attitudes on social and political issues, exactitude remains important, but accuracy to within one or two percentage points is less critical.

Election specific challenges

There are distinctive challenges in predicting election outcomes, which require significant financial investment to achieve a high level of reliability. These difficulties were discussed in the aftermath of the 2019 Australian election by Jim Reed in the Research News journal of the Australian Market and Social Research Society. They include:

  • The risk of relying on national level polling, when the outcome of an election is determined at the electorate level. In the 2019 Australian election Labor won 56% of the two-party preferred vote in Tasmania, 53% in Victoria, 48% in New South Wales, and 41.5% in Queensland.  Optimum polling requires funding to conduct representative state samples and tracking of opinion in a few critical electorates. 
  • Attention to the order of candidates on the ballot papers of specific electorates. In closely fought electorates, candidate order may impact the result. 
  • The need to track early voters. In 2019 more than 3 million (20% of voters) lodged a pre-poll vote.
  • Tracking of undecided respondents and of those uninterested in the election, who may not vote or whose vote may be informal. This is more of an issue in countries where voting is not compulsory, but even in Australia the informal vote in 2019 was 5.5% (7% in NSW); in addition, 8% of those on the electoral rolls (1.3 million people) did not vote.
  • Timing of polls. Over the course of a campaign, many voters change their minds and it is difficult to accurately track shifts in the days before an election. 

Obtaining a representative sample 

The fundamental challenge, applicable not just to election polls but to all polling, is to obtain a representative sample. George Gallup observed that ‘the most important requirement of any sample is that it be as representative as possible of the entire group or “universe” from which it is taken.’

In the early decades of scientific polling, interviewers randomly selected addresses and visited people’s homes to conduct face-to-face interviews, or left a printed questionnaire to be filled in and collected later. Occasionally questionnaires were mailed with a request to return the completed survey by mail.

Telephone-based surveying

In the second phase of scientific polling, from the 1970s onwards, an increasing number of surveys were administered by interviewers over the telephone. 

However, a new challenge emerged as people became less willing to complete surveys when contacted by phone, or even to answer a call from an unknown number. In the United States, the Pew Research Centre found that response rates fell from 36% in 1997 to 6% in 2018. Contrary to expectation, research has found no significant biasing of the sample by the declining response rate; the major problem was cost, as many more calls needed to be made for each completed survey.
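
The cost pressure follows directly from those response rates. A minimal back-of-the-envelope sketch, assuming an illustrative target of 1,500 completed interviews and treating every dialled number as an independent attempt:

```python
# Illustrative only: approximate calls required at the response rates reported by
# Pew (36% in 1997, 6% in 2018). The target of 1,500 completed interviews is an
# assumed figure for the example, not taken from any particular survey.
target_completes = 1500

for year, response_rate in [(1997, 0.36), (2018, 0.06)]:
    calls_needed = target_completes / response_rate
    print(f"{year}: about {calls_needed:,.0f} calls for {target_completes} completed interviews")
# 1997: about 4,167 calls; 2018: about 25,000 calls
```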

Yet another problem is the increasing reliance on mobile phones. In Australia today, 43% of adults do not have a landline and the proportion increases yearly. It is possible to conduct surveys on mobile phones, either administered by an interviewer or self-administered on a smartphone, but there is no comprehensive national directory available to survey agencies that links mobile numbers to location. This greatly increases the cost of obtaining a representative sample where location is an important consideration.

Internet surveys

While the older forms of surveying have become less viable and more expensive, the internet has provided a new option and brings a number of potential benefits.  

Since 2010, online completion has been the dominant mode of data collection in the Australian commercial and social research industry. A number of commercial providers have recruited people willing to complete surveys on the internet for a small payment. It is estimated that in 2016 there were 50 ‘research panels’ operating in Australia, including the Your Source panel with over 100,000 members, and the Online Research Unit, with 350,000 members, claimed to be the largest in the country.

Panels have some advantages over interviewer-administered surveys, as discussed below, but they also have disadvantages; each mode of survey administration carries both benefits and drawbacks.

One key challenge for online surveying is sample representativeness. If all members of a population had computer access and their computer addresses were centrally listed, as in a telephone directory, then it would be possible to conduct random samples on the internet. But there is no comprehensive listing of computer users and not all members of the population have access to a computer or are willing to complete an online survey. This deficiency of surveys conducted solely online is referred to as coverage error.

Most online panels worldwide are established via non-probability sampling: anyone who becomes aware of an invitation to join a panel can do so. It is assumed (though it cannot be established) that, of all the people who become aware of such an invitation, for example through an online advertisement, fewer than 1% join. There is no way to calculate the margin of error for a panel that is not based on a representative sample of the population. Pennay and his co-authors, in a study of Australian panels, observed that ‘although a completion rate can be calculated for within-panel surveys, this rate does not account for the “response rate” when the panel was established.’

Those who decide to join a non-probability online panel are not likely to be representative of a country’s population, nor of a specific demographic segment of the population. Level of education, computer literacy and English language competence, age and social class, and region of residence, are all factors that influence participation in online panels. 

Part of the attraction is the opportunity to have views recorded, so those with strong views may be disproportionately attracted to non-probability online panels. Panel members usually also receive money for joining the panel, and for each survey they complete, so a financial consideration may influence panel membership. Those who have not completed their secondary education but choose to join an online panel are unlikely to be representative of all of those who did not complete their secondary schooling; the member of an immigrant group who elects to join a panel may not be representative of that group of immigrants.

The policy of ABC News in the United States is to avoid reporting of surveys that do not meet their standards for reliability. These are specified as: non-probability, self-selected or so-called ‘convenience’ samples, including internet opt-in, e-mail, ‘blast fax,’ call-in, street intercept and non-probability mail-in samples.  

In contrast, in the Australian media there is little understanding of the unreliability of non-probability samples. Self-selected, opt-in polling, such as the Australian Broadcasting Corporation’s Vote Compass and Australia Talks, is used to generate interest and promote discussion within media audiences, seemingly on the assumption, in ways similar to the Literary Digest fiasco, that a very large sample will represent the views of ‘Australians’, not specific audiences.

Evaluations of results obtained by North American and European panels have found that non-probability samples completed via online panels are less accurate, on average, than probability samples when measured against known results; that non-probability surveys produce results that vary more from each other than probability surveys; and that weighting (discussed below) of online non-probability panels sometimes improves the accuracy of findings, but sometimes reduces it.

In 2015 the Social Research Centre conducted an Australian Online Panels Benchmarking Study, in which the same questionnaire was administered to three probability-based samples by telephone and to five non-probability online panels. The findings supported those of the overseas studies with regard to the better accuracy and consistency of probability-based samples and the impact of weighting.

Advantages

In terms of cost, face-to-face interviewing is the most expensive mode of administration, while online surveying is less expensive. Online surveys also have the advantage of speed: a survey can be completed in a matter of days, compared with telephone administration, which may take more than a month depending on sample size and the number of interviewers. There are also potential benefits in the truthfulness of responses.

Social desirability bias

When a survey is administered by a trained interviewer, the personal interaction with the interviewee risks biasing responses. This risk, termed ‘social desirability bias’, refers to the potential for interviewees to provide responses they believe are more socially desirable than responses that reflect their true opinion. This form of bias is of particular importance for questions that deal with socially sensitive or controversial issues, such as attitudes to minorities. The Pew Research Centre in the United States has commented:

The social interaction inherent in a telephone or in-person interview may exert subtle pressures on respondents that affect how they answer questions. … Respondents may feel a need to present themselves in a more positive light to an interviewer, leading to an overstatement of socially desirable behaviours and attitudes and an understatement of opinions and behaviours they fear would elicit disapproval from another person.  

A prominent American researcher, Humphrey Taylor, observes that ‘where there is a “socially desirable” answer, substantially more people in our online surveys give the “socially undesirable” response. We believe that this is because online respondents give more truthful responses.’ Similarly, Roger Tourangeau and his co-authors of The Science of Web Surveys report that a review of research ‘demonstrates that survey respondents consistently underreport a broad range of socially undesirable behaviours and overreport an equally broad range of socially desirable behaviours.’

An online questionnaire completed in privacy on a computer, or an anonymous printed questionnaire returned by mail, can provide conditions under which a respondent feels greater freedom to disclose honest opinions on sensitive topics. But it may also lead to exaggerated responses.

A 2010 report prepared for the American Association for Public Opinion Research (AAPOR) concluded that computer administration yields more reports of socially undesirable attitudes and behaviours than oral interviewing, but no evidence directly demonstrates that the computer reports are more accurate.

The weighting of the achieved sample

Panel providers are able to adjust the results they obtain through a procedure known as weighting, which lessens the non-representative character of their sample. The adjustment may be both demographic (for example, to correctly align the proportion of men and women, or the level of education of respondents) and attitudinal, to correct for known skewing of a panel.
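
A minimal sketch of the demographic side of this adjustment is shown below. The population benchmark, the achieved sample composition and the response figures are assumed purely for illustration; commercial providers typically weight on several variables at once, often using iterative (‘raking’) procedures.

```python
# Post-stratification weighting on a single demographic variable (illustrative figures).
population_share = {"women": 0.50, "men": 0.50}   # assumed census benchmark
sample_share     = {"women": 0.60, "men": 0.40}   # assumed achieved panel sample

# Each group's weight is the ratio of its population share to its sample share.
weights = {group: population_share[group] / sample_share[group] for group in population_share}
# women are down-weighted (~0.83), men are up-weighted (1.25)

# Assumed group-level agreement with some survey question, to show the effect of weighting.
agree_rate = {"women": 0.70, "men": 0.50}
unweighted = sum(sample_share[g] * agree_rate[g] for g in agree_rate)               # 0.62
weighted   = sum(sample_share[g] * weights[g] * agree_rate[g] for g in agree_rate)  # 0.60

print(f"Unweighted estimate: {unweighted:.0%}, weighted estimate: {weighted:.0%}")
```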

An issue with the weighting of surveys conducted for the Australian media is that details of the approach may be regarded by the panel owner as a commercial asset and not revealed. In the United States, there is a greater requirement for transparency.

In the context of the 2019 election, there was a suspicion that some survey companies were weighting or adjusting their results to produce findings similar to those of their competitors, a behaviour described as ‘herding’. Professor Brian Schmidt, Nobel Prize winner and Vice-Chancellor of the Australian National University, argued that it was all but mathematically impossible for independent polls to be consistently wrong by such similar margins. Schmidt stated that the odds of 16 polls ‘coming in with the same, small spread of answers is greater than 100,000 to 1. In other words, the polls have been manipulated, probably unintentionally, to give the same answers to each other.’
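
The style of reasoning behind Schmidt’s estimate can be illustrated with a rough calculation. The figures below (16 independent polls of 1,000 respondents each, a true Labor two-party vote of 48.5%, and every published result falling in the 51-52% band) are assumptions chosen to mirror the numbers quoted above; this is not Schmidt’s actual calculation.

```python
from math import erf, sqrt

# Normal-approximation illustration of the 'herding' argument: if 16 independent
# polls each sampled 1,000 voters and the true Labor two-party vote was 48.5%,
# how likely is it that every poll would report Labor in the narrow 51-52% band?
TRUE_P = 0.485        # assumed true Labor share (the actual 2019 result)
N = 1000              # assumed per-poll sample size
N_POLLS = 16
BAND = (0.51, 0.52)   # the range of published predictions quoted above

def normal_cdf(x: float, mean: float, sd: float) -> float:
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

sd = sqrt(TRUE_P * (1 - TRUE_P) / N)   # sampling standard deviation, about 1.6 points
p_single = normal_cdf(BAND[1], TRUE_P, sd) - normal_cdf(BAND[0], TRUE_P, sd)
p_all = p_single ** N_POLLS            # probability that all 16 land in the band

print(f"Chance a single poll lands in the 51-52% band: {p_single:.1%}")
print(f"Chance all {N_POLLS} independent polls do: about 1 in {1 / p_all:.1e}")
```

On these illustrative assumptions the clustering is far less likely even than Schmidt’s figure of 100,000 to 1, which is why he concluded that the published polls could not have been statistically independent of one another.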

Questionnaire design and mode of administration

There are a number of additional factors that can affect the accuracy of polls. These include question wording, question order, the number and order of response options, and further aspects of the mode of administration beyond those already discussed. The Pew Research Centre has commented:

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.   

Research indicates that in online and other self-completion polls, a respondent is more likely to select the first acceptable response option they see, known as the ‘primacy’ effect. In response to an interviewer, by contrast, a higher proportion of respondents select the last response option they hear, known as the ‘recency’ effect.

The conversion of questions from spoken to written form introduces visual cues that can play a significant role in determining response. A key problem, discussed below, arises from the placement (or non-placement) of mid-point and ‘don’t know’ response options.

The Scanlon Foundation surveys

The Scanlon Foundation sets the benchmark for quality social cohesion surveying in Australia. 

Its national survey was first conducted in 2007 and has been conducted annually since 2009. The survey is based on a large sample of 1,500 respondents and employs a detailed questionnaire of some 65 substantive and 25 demographic questions, with consistent question wording and question order to enable precise measurement of the trend of opinion over time. It is administered by telephone and since 2013 has employed a dual-frame methodology, sampling both landline and mobile phones, to ensure that it achieves a representative sample. Mobile phone coverage is particularly important to ensure an adequate number of younger respondents.

The response rate obtained by the Scanlon Foundation has been consistently high: for the 2019 survey, the overall co-operation rate (interviews ÷ (interviews + refusals)) was 29.8%: 27.1% for landline and 31.6% for mobile.

As part of the measures taken to maximise response, letters explaining the objectives of the survey are sent to potential respondents after the sample is drawn. Potential respondents are informed that the survey is being conducted by university researchers, not a marketing agency, and is overseen by the Monash University Ethics Committee.

In addition to telephone surveying, the Scanlon Foundation has experimented with online surveys, which have provided insight into the strengths and limitations of the online methodology. In 2018 and 2019 the Scanlon Foundation survey was administered both by telephone and on the Social Research Centre’s Life in Australia™ panel, in recognition that, with advantages and disadvantages weighed, the future of quality national surveying will require administration via the internet on a probability-based panel. The Scanlon Foundation’s 2019 social cohesion report presents findings for both the telephone-administered and the self-administered versions of the survey.

This rigorous approach to surveying makes possible precise measurement of both consistency and change in Australian opinion.  

References

[1] Gallup, George, and Saul F. Rae. The Pulse of Democracy: The Public Opinion Poll and How it Works. New York: Simon & Schuster, 1940

[2] Gallup International, Polling Around the World 70, p. 86, http://www.gallup-international.com/wp-content/uploads/2017/10/Gallup_English_Book_2017.pdf

[3] Nate Silver, ‘How FiveThirtyEight calculates pollster ratings’, 25 Sept. 2014, FiveThirtyEight

[4] Nate Cohn, ‘No one picks up the phone, but which online polls are the answer?’, New York Times, 2 July 2019

[5] Rob Harris, ‘Labor failed to heed warnings that election was on knife edge, says secret report,’ The Age, 11 Nov. 2019

[6] John Utting, ‘“False narrative” from polling may have ended Malcolm Turnbull,’ ABC News, 23 May 2019

[7] Jim Reed, ‘Margin for error: 2019 election polling,’ Research News, Aug.-Oct. 2019, pp. 14-18

[8] D.W. Pennay et al., ‘The Online Panels Benchmarking Study: A Total Survey Error comparison of findings from probability based surveys and nonprobability online panel surveys in Australia’, CSRM & SRC Methods Paper, 2/2018, Centre for Social Research and Methods, Australian National University, p. 32

[9] Gary Langer, Langer Research Associates, ABC News’ Polling Methodology and Standards, 23 July 2015

[10] Pennay et al.

[11] Pew Research Centre, Mode effects as a source of error in political surveys, 31 March 2017, https://www.pewresearch.org/methods/2017/03/31/appendix-b-mode-effects-as-a-source-of-error-in-political-surveys/

[12] Guardian, 20 May 2019; Canberra Times, 20 May 2019

[13] Pew Research Centre Methods papers, Questionnaire design

[14] R. Tourangeau et al., The Science of Web Surveys, Oxford University Press, 2013, pp. 8, 146, 147, 150