Gov. Phil Murphy (D-N.J.) was reelected governor of New Jersey after defeating Republican challenger Jack Ciattarelli in a close race, winning by approximately 85,000 votes. The narrow margin led people to question the accuracy of state polls, such as those conducted by the Rutgers Eagleton Institute of Politics, which had shown the governor winning by a wider margin.
Ashley Koning, director of the Eagleton Center for Public Interest Polling (ECPIP), said polling can provide details about voters’ behaviors, perceptions and attitudes at a certain point in time but should not be considered a completely accurate predictor for elections.
“The media put too much emphasis on poll numbers as if they are set in stone and a predictor of what will actually happen,” she said. “At best, they are estimates with a range of possibilities based on statistics and sampling.”
Kyle Morgan, research associate at ECPIP and assistant professor in the Department of Political Science at Francis Marion University, said the center sent out two random statewide phone surveys in May and October to assess the closeness of the election.
The May survey showed Murphy holding an early lead over Ciattarelli, but this projected lead narrowed by the time of the October survey, he said.
“The early lead we saw in May was seen by other pollsters in the state, which may have fed into the narrative that emerged over the summer and early fall that the race was going to be more of a ‘blowout’ than it turned out to be,” Morgan said.
Koning said there has been a decline in public trust in polling since 2016, which can be attributed to major polling errors in the 2016 and 2020 elections, in which former President Donald J. Trump was a leading candidate. She said pollsters will face challenges in regaining the public’s confidence because this distrust reduces individuals’ likelihood of responding to polls.
Morgan said this decline in response rates prevents pollsters from obtaining a representative sample of a population since some parts of the electorate are not included in the data. He said selection bias, or when certain groups are more likely to respond to surveys than others, can also cause further problems for accurate polling.
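The distortion Morgan describes can be made concrete with a small simulation. This is a hypothetical sketch, not ECPIP's methodology: it assumes an electorate where one candidate's supporters simply answer the phone less often, and shows how that alone skews the raw poll numbers.

```python
import random

random.seed(0)

# Hypothetical electorate: 48% support candidate A, 52% support candidate B.
population = ["A"] * 48_000 + ["B"] * 52_000

# Assume B's supporters respond to survey calls only half as often as A's
# (differential nonresponse, one form of selection bias).
response_rate = {"A": 0.10, "B": 0.05}

respondents = [v for v in population if random.random() < response_rate[v]]

share_a = sum(1 for v in respondents if v == "A") / len(respondents)
print(f"True support for A:   48.0%")
print(f"Polled support for A: {share_a:.1%}")  # well above the true 48%
```

Even with tens of thousands of calls, the unadjusted sample overstates candidate A's support by a wide margin, which is why pollsters weight their raw data to known population characteristics.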
In addition, Morgan said pollsters can only test the accuracy of new methods every few years when there is an election, which gives them fewer opportunities to hone their craft. To counter these obstacles, pollsters are experimenting with new methodologies, which can sometimes lead to “misses” when it comes to polling accuracy, he said.
“Pollsters have been hard at work figuring out how to make their results more accurate in the future; arguably no one has more at stake in that venture than we do,” Morgan said. “Not every experiment that gets tried will be a success, so this process will require pollsters to engage in a bit of trial and error to see what works and what doesn’t.”
While pollsters have produced some errors in recent years, Koning said, they have also had success in predicting elections such as the 2018 midterms and the 2021 Virginia elections. She said pollsters should work on demonstrating to the public the probabilistic nature of their work and factors like margin of error.
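The margin of error Koning mentions follows from the standard textbook formula for a simple random sample; the snippet below is an illustration (the function name and the sample size of 1,000 are ours, not from any specific ECPIP poll).

```python
import math

def margin_of_error(sample_size: int, confidence_z: float = 1.96) -> float:
    """Worst-case (p = 0.5) margin of error for a simple random sample,
    at the 95% confidence level by default."""
    return confidence_z * math.sqrt(0.25 / sample_size)

# A typical statewide poll of roughly 1,000 respondents:
moe = margin_of_error(1000)
print(f"+/- {moe:.1%}")  # roughly +/- 3 percentage points
```

A candidate "leading" by 2 points in such a poll is within this margin, so the result is a range of plausible outcomes rather than a prediction, which is the probabilistic framing Koning argues pollsters should communicate.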
Morgan said pollsters would do well not to emphasize projected leads for candidates since, if those projections prove inaccurate, it could harm the public’s perception of polling in general. He also said the public can do its part to improve polling accuracy by participating in surveys, which provides more data for pollsters to analyze.
“If the public wants more accurate polling data, we also need more people to participate in and answer polls when asked to do so,” he said. “If everyone that criticized polling between 2016 and now committed to answering polling calls when they get contacted and encouraged one family member or friend to do the same, I think we would be moving towards getting more accurate results.”