Wednesday 13 December 2017

It's Bennelong Time Coming

Tomorrow is the Bennelong By-Election, which promises to be much closer than Barnaby Joyce's run in New England a few weeks back. For one thing, the Liberal (former) incumbent is up against Labor's super-candidate Kristina Keneally. For another, the expected Liberal victory has been eaten away by numerous scandals and faux pas.

In 2016 John Alexander won (just) on the primary vote alone at 50.4%. Latest polling has his primary vote now at 41.3%. When asked to allocate preferences, the two-party-preferred result was a Liberal win of 53:47.

This leaves very little number crunching for the rest of us to do, but I will note that a lot has been made of the influence Bennelong's Chinese community will have in swinging this vote. It is possible that this community has a different preference-flow ratio to the general public and is under-represented in the polling. But, equally, if ReachTEL have done their job, it might be over-represented instead.

All that can reasonably be done is to sit and wait, while expecting a narrow Liberal victory and the Government to retain its one-seat majority.

It's Bennelong Time Coming (Results)

(Backdated from 19/12)

Polling had indicated a Liberal victory with 41.3% of the primary vote and 53% on a two-party preferred count. There was no real surprise in the results; Liberal won with 45.1% of the primary vote, a little outside the margin of error on the polling, and 55.0% on a two-party preferred count.

This was a 2PP swing to Labor of around 4.7% and, although smaller than any polling had indicated, was hailed as a significant boon for the Labor Party going forward.

Thursday 7 December 2017

Take the spin, now you're in with the techno set...

The New England by-election went more or less as predicted, with Barnaby Joyce receiving over 65% of the primary vote, and we have 9 days until the Bennelong by-election.

In between, I'd like to take a slightly irregular look at another, off-brand event(?) in progress(?) in the same spirit as previous examinations of non-political(?) votes.

#TankTheRewind

Basically, I have been spending way too many hours recently binge-watching a gigaton of YouTube videos, and yesterday saw the release of YouTube rewind, an annual, borderline-epilepsy-inducing montage of prominent YouTubers, this year with fidget spinners and a rendition of Despacito.

The thing is, this year has been a pretty bad one for YouTube from the perspective of many creators and viewers, from the 'adpocalypse' where videos were demon(it)ised for being 'controversial' (sometimes by simply containing LGBTIQA+ material, covering politics etc.) to technical issues to flagrantly inappropriate content sneaking onto the site's supposedly child-friendly platform YouTube Kids.

Throughout it all, rightly or wrongly, YouTube management has been viewed as acting unprofessionally and ceding too much power to advertisers. Although advertisers pretty well carry the YouTube platform (the limited success of YouTube Red excepted), many appear to see merit in the criticisms of the website's management. In retaliation, YouTuber EmpLemon proposed a deliberate effort to embarrass YouTube with poor ratings on their YouTube rewind video, to counter what he sees as a publicity-driven, reactionary mindset where the YouTube donkey kicks the creators behind it in a knee-jerk reaction. His thorough list of reasons and call to arms can be found here (strong language warning), spawning the hashtag #TankTheRewind.

Although that video has only 36 thousand views at the time of writing (I did say I watched a gigaton of videos before it came up), some other YouTubers have spread the idea to their fanbases. Responses to the idea on reddit and 4chan have generally been positive but very scarce. Steam has been less on-board.

After the internet-wide movement to stand up to the US FCC's attempts at removing net neutrality, and the band-wagoning that made EA's justification of certain features in their game Star Wars: Battlefront II the least popular reddit post in history, the response thus far must be underwhelming for the #TankTheRewind supporters. On the other hand, these recent movements have shown that the internet can mobilise when outraged, so perhaps time will tell a different story.

After roughly one day, the rewind video has received 26,382,332 views. This seems like a fair sample size, but given the infancy of #TankTheRewind it may be unrepresentative. At present, there are 1,184,149 thumbs-up and 502,380 thumbs-down, roughly a 2:1 split, which isn't great for an uncontroversial video. (By the way, accurate vote totals are available by hovering over the blue:grey ratio bar beneath the thumbs.) Then again, a corporate entity producing a video while trying to join several long-since-abandoned trends may get a lot of down-votes for being #cringe. (Yeah, I'm #downwiththekids, #hip, #relatable, #psephologyiscool).

I'll update this with more data over the next few days, but in the meantime here is how the vote on 2017's YouTube rewind compares with previous years':


Year   Views        Up         Down     Approval
2017*  26,382,332   1,184,149  502,380  70.2%
2016   205,690,647  3,194,362  498,594  86.5%
2015   130,062,316  2,419,080  180,075  93.1%
2014   121,351,480  1,351,662  68,273   95.2%
2013   124,284,556  1,257,383  68,385   94.8%
2012   185,717,354  1,202,992  76,606   94.0%
2011   9,532,919    57,042     74,277   43.4%
2010   3,900,723    21,520     2,145    90.9%

So a few things stand out. Firstly, with this year's 26 million views so far against the 120-200 million accumulated by most previous rewinds, we may be looking at only around 10% of the votes that will eventually be counted.

Secondly, there is a downturn in approval (thumbs up ÷ (thumbs up + thumbs down) × 100%), but this can easily change, and is not inconsistent with a general trend of lower approval since 2014. Perhaps people are fed up with the rewind series, or with having it thrust upon them.
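The approval figures in the table above can be reproduced directly from the raw vote counts. A minimal sketch:

```python
# Approval = thumbs up / (thumbs up + thumbs down) x 100%,
# using the (up, down) vote counts from the table above.
votes = {
    2017: (1_184_149, 502_380),
    2016: (3_194_362, 498_594),
    2015: (2_419_080, 180_075),
    2014: (1_351_662, 68_273),
    2013: (1_257_383, 68_385),
    2012: (1_202_992, 76_606),
    2011: (57_042, 74_277),
    2010: (21_520, 2_145),
}

def approval(up, down):
    """Percentage of all votes cast that were thumbs up."""
    return 100 * up / (up + down)

for year, (up, down) in sorted(votes.items(), reverse=True):
    print(f"{year}: {approval(up, down):.1f}%")
```

Running this recovers the Approval column, including the 2011 anomaly.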



Thirdly, 2011 was an anomaly. It could be because 2010 was the first attempt at a rewind and people were less receptive. Or it could be because Rebecca Black was hosting it.

Fourthly, for #TankTheRewind to claim any sort of victory, we need to be looking at approval figures in the low 80% range or it's indistinguishable from 2016.

If we want to ignore the 2011 outlier, a statistically significant (p < 5%, i.e. z-score < -1.96) reduction in approval would require the 2017 approval to remain below 78.8%:
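The post doesn't spell out how the 78.8% cut-off was derived, so purely as an illustration, here is one plausible construction (an assumption on my part, not necessarily the author's method): a one-sided normal bound at z = -1.96 on the historical approval ratings, excluding 2011.

```python
import statistics

# Approval ratings for 2010 and 2012-2016, excluding the 2011 outlier.
# NOTE: this is a sketch of one plausible method; the post's actual
# 78.8% threshold was evidently derived differently (perhaps weighting
# years by their vote counts).
history = [90.9, 94.0, 94.8, 95.2, 93.1, 86.5]

mean = statistics.mean(history)
sd = statistics.stdev(history)      # sample standard deviation

# One-sided bound: an approval this far below the mean would be
# significant at p < 5% under a normal model.
threshold = mean - 1.96 * sd
print(f"mean={mean:.1f}%, sd={sd:.1f}pp, threshold={threshold:.1f}%")
```

This simple bound lands around 86%, stricter than the 78.8% quoted, which suggests the original calculation used a different (and more forgiving) model.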


So far, Rewind 2017 is on track to be statistically significantly worse than previous years, excluding 2011. How important #TankTheRewind was in this, however, remains up for debate.

... you're going surfing on the internet!

(Backdated from 19/12)

While I have to admit it was a bit of a long shot to expect much to come from the #TankTheRewind proposal, I could not pass up the opportunity to get in early just in case. As it turns out, the results were a lot less clear-cut than I expected.

At the time of writing the video has just over 161 million views (161,003,617 to be precise) with 3,209,209 thumbs up and 1,710,565 thumbs down. Wikipedia currently lists it as the 11th most disliked video in history (and 4th most disliked non-music video), but this has not been updated in almost a week and the 2017 rewind has since eclipsed two more videos to become 9th most disliked.

This thumbs up-to-down ratio equates to a 65.2% approval rating, well below the 78.8% statistically significant threshold outlined above; excluding the 2011 data, this year's Rewind has statistically significant disapproval.

Previous years' data duplicated from previous post and may be marginally out of date.

In fact, even including 2011, the disapproval is highly statistically significant.

Previous years' data duplicated from previous post and may be marginally out of date.

However, the role of #TankTheRewind in this is probably quite limited. The original video using the hashtag has garnered only 161,780 views--a minuscule fraction of the thumbs-down votes. A search of YouTube for the phrase lists only 4 other videos (1, 2, 3) with view counts measured in the hundreds or less before suggesting alternate search terms, and one of these (4) has no connection to the hashtag. The three on-topic videos and the original have a combined 396,184 views. Assuming each view on a video was by a different person (which they would not be), that none of these videos have an overlapping audience (which they probably would) and that all of those who watched them were motivated to protest (also unlikely), that still only accounts for roughly 1 in 4 down-votes.

Throughout the week of data collection on the Rewind Video, top comments repeatedly complained about numerous prominent YouTubers being absent and the awkward focus of the video on tragedies, television personalities and out-dated fads. Just today I did find #TankTheRewind posts higher in the comments than any of these, though that may be dedicated TTR supporters lingering long after most viewers have clicked through.

Interestingly there is a general increase in the dislikes to likes ratio over time:



Data captured at ~24 hour increments

Unsurprisingly, as views declined so did votes, but the favourable votes dropped off much quicker, until almost on par with negative votes.


Date        Views       Up         Down
7/12/2017   26,382,332  1,184,149  502,380
8/12/2017   27,523,277  728,995    396,677
9/12/2017   22,349,734  344,981    232,357
10/12/2017  27,035,001  447,958    251,046
11/12/2017  13,458,970  149,449    114,885
12/12/2017  8,882,526   62,901     51,559
13/12/2017  13,303,067  109,330    59,592




Interestingly, the number of viewers casting a vote dropped off dramatically too, possibly due to repeat viewings of the video.
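That drop-off can be read straight from the daily table above. A quick sketch of votes cast per hundred views in each ~24-hour increment:

```python
# Daily snapshots from the table above: (views, up, down).
daily = {
    "7/12":  (26_382_332, 1_184_149, 502_380),
    "8/12":  (27_523_277,   728_995, 396_677),
    "9/12":  (22_349_734,   344_981, 232_357),
    "10/12": (27_035_001,   447_958, 251_046),
    "11/12": (13_458_970,   149_449, 114_885),
    "12/12": ( 8_882_526,    62_901,  51_559),
    "13/12": (13_303_067,   109_330,  59_592),
}

def vote_rate(views, up, down):
    """Votes cast per 100 views in that day's increment."""
    return 100 * (up + down) / views

for day, (views, up, down) in daily.items():
    print(f"{day}: {vote_rate(views, up, down):.2f} votes per 100 views")
```

The rate falls from roughly 6 votes per 100 views on day one to little more than 1 by 13/12, consistent with repeat (and therefore non-voting) viewers.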


All of this suggests early support for the video which reduced over time. This would be consistent with people changing their votes as the #TankTheRewind movement became more prominent, but could also be the result of peer pressure, a die-hard rewind fandom that jumped in early, or other phenomena. There is no real way to separate the real causes of the voting trend, but it would seem that support for the video is not independent of when it was viewed. That, in itself, is interesting.

Wednesday 29 November 2017

New Day, New England

I had been hoping to produce a review of the results of the Queensland election before moving on to the citizenship-saga's by-elections, but there are still 5 seats to be called and neither major party yet has an outright majority.

So instead we will look at New England, and the new contest for Barnaby Joyce. Despite the large number of politicians invalidated from holding seats due to citizenship issues, there are only two by-elections scheduled: New England on Saturday and Bennelong two weeks later on the 16th. This is because all of the other politicians who either resigned or were found ineligible to sit by the court of disputed returns were senators, who are replaced by their party without a by-election (because a single-candidate senate election would not be a fair replication of the proportional representational system used to initially elect the senators).

I cannot find any polling for New England except an online poll from the Tenterfield Star which suggests an outrageous swing to the Greens:


The problems with this poll are many and obvious: it is an opt-in poll, it is an online poll and thus skewed to a younger, internet-using demographic, it is not adjusted for demographics or bias, it is open to manipulation and influence from respondents not eligible to vote, and it would seem to have a very low sample size as evidenced by the large number of candidates on 0%, the absence of any 'likes' on the poll, and five separate candidates (including the ALP) sitting on the lowest non-zero score: 1.09%. It would seem likely that 1.09% represents a single vote, and if so the sample size is less than n=100. In fact, n=92 and the Greens would have 52 votes to the Nationals' 20 and the Science Party's 6: hardly a representative sample.
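That back-of-the-envelope sample size follows from the single assumption that the lowest non-zero score, 1.09%, represents exactly one vote:

```python
# If 1.09% corresponds to one respondent, the sample size is 1/0.0109.
smallest_share = 1.09 / 100
n = round(1 / smallest_share)   # 1/0.0109 ≈ 91.7, rounds to 92

# Vote counts quoted in the post, converted back to shares of n.
for party, count in [("Greens", 52), ("Nationals", 20), ("Science", 6)]:
    print(f"{party}: {count}/{n} = {100 * count / n:.1f}%")
```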

Polling is hard to get right on such a small seat, and is often more expensive than it's worth. Another possible reason there is not much polling on this seat is that the result seems to be a foregone conclusion. Here is the seat's history since federation, based primarily on this AEC data:



Obviously a very strong seat for the Nationals, having been held by them under one name or another since 1920 with two brief interruptions from independents: in 1922 Alexander Hay was ejected from the Country Party and lost his seat to the Country Party in an election months later, and from 2001 to 2013 Tony Windsor held the seat; he will not re-contest this election.

Since Barnaby Joyce began running in the seat he has secured over 50% of the primary vote. Prior to his candidacy, the Nationals easily polled second-best behind Tony Windsor.

Public consensus would appear to be that Joyce will be returned to his seat. In fact, on Q&A this Monday (transcript here) Assistant Minister to the Prime Minister James McGrath apparently misspoke while discussing the possibility of a banking royal commission when he said "Well, let’s see what happens next week, when Barnaby comes back." (emphasis added)

Indeed, New England does seem like a foregone conclusion and I too must predict that Barnaby Joyce will be returned.

Thursday 23 November 2017

Rainbows and Sunshine

The Queensland state election is tomorrow, but first a very brief reflection on the results of the same-sex marriage survey:

A Very Brief Reflection on the Results of the Same-Sex Marriage Survey

Polling had indicated a 64% 'Yes' vote with some 5% undeclared. Assuming this 5% split as per the declared votes, this would be around 67%. Since this was the only data we had, there wasn't much scope for numerical analysis. On gut instinct, though, I suggested around 61% 'Yes'.

The actual result was 61.6% 'Yes'. I feel pretty good about that, but since it wasn't based on anything I can't really apply this going forward.

I told you I'd be very brief.

Queensland State Election

It's the election of the only house of parliament in Queensland tomorrow, and it's reported to be a close race. This time around I'm going to try a few different approaches to how I extrapolate from the pendulum. We have two starting points: the lay of the land after the 2015 election, here, and the updated but theoretical current landscape after electoral redistribution etc. here.

As always, we will ignore the seats that are not held in a Labor/Liberal National contest when applying the state-wide swing. Obviously the relative shift between Labor and LNP popularity plays out unpredictably in seats held by One Nation, Katter and independents.

Swing from 2015 Election:

Yes, I spelled 'incumbent' wrong. I also copy-pasted that error onto most of the tables below. Let's just pretend it's a situationally valid alternate spelling, like 'capitol'.


The values in the 2015 column indicate the lead the ALP held in this seat after the election. This was after a 51.1 : 48.9 split of the public vote. The latest polling suggests the current mood is slightly stronger for Labor at 52 : 48. This is a swing of 0.9 percentage points, which is applied uniformly in the PP column.

This suggests a gain of two seats for Labor, taking them from minority to majority government assuming no change in the non-two-party contests.

To try something new, I have also added a badly-titled '%' swing column that looks at the proportion of voters who might swing. To take an extreme case, a seat with 100% ALP support in 2015 would be expected to return a 100.9% result for Labor under the PP column. The % column instead looks at the available voters that might swing. The calculation is performed thusly:

In 2015, 48.9% of voters, or 489 in 1000, voted LNP in the two-party-preferred count. On latest polling only 480 in 1000 are expected to. This means a net 9 out of every 489 LNP voters have shifted to vote ALP, or 1.8%. The percentage of LNP voters in each seat can be calculated as 50% minus the 2015 margin. In the % column, 1.8% of these are added to the existing Labor base.
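Both swing columns reduce to simple functions of a seat's 2015 ALP margin (its 2PP lead over 50%). A sketch, where the example seat margin is purely hypothetical:

```python
# Two ways of applying the state-wide swing to a seat's 2015 ALP lead.
OLD_LNP = 48.9                  # LNP 2PP share at the 2015 election
NEW_LNP = 48.0                  # LNP 2PP share in the latest polling
SWING_PP = OLD_LNP - NEW_LNP    # 0.9pp uniform swing

def pp_column(margin_2015):
    """Uniform swing: add 0.9pp to the ALP lead in every seat."""
    return margin_2015 + SWING_PP

def pct_column(margin_2015):
    """Proportional swing: a net 9 of every 489 LNP voters (1.8%)
    in the seat shift to the ALP."""
    lnp_share = 50 - margin_2015            # LNP 2PP share in this seat
    switchers = lnp_share * (SWING_PP / OLD_LNP)
    return margin_2015 + switchers

# Example: a hypothetical seat where the ALP led by 2.0pp in 2015.
print(round(pp_column(2.0), 2), round(pct_column(2.0), 2))
```

With a 2.0pp lead the proportional method adds only ~0.88pp instead of the uniform 0.9pp, matching the observation that it favours the LNP wherever the ALP lead exceeds about 1 point.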

This calculation favours the LNP in all seats where the ALP has a lead of over 1 percentage point, but does not actually alter the predicted outcome. (It never alters the prediction in any of the following tables either.)

Swing from 2015 Polling:

There is an argument to be made that a poll is not comparable to an election result. I have therefore also calculated the swing by comparing the latest polling with the last poll before the 2015 election. This pits the latest 52 : 48 against the preceding 48 : 52, for a 4pp swing. The former is a Galaxy poll and the latter by Newspoll, but by happy coincidence the latest Newspoll is also 52 : 48, so this figure also smooths over some differences in polling methodology.


This result is calculated as above, but suggests a stronger lead by Labor.

More Recent Data:

Factoring in seat redistribution is helpful. Additionally, quite a few seats have been removed or altered so dramatically it was appropriate to rename them. This, in my opinion, makes the 2017 pre-election pendulum more accurate than the 2015 post-election one even though it is based on modelling rather than actual results.

The same methodology as above, based on the 0.9pp swing from the 2015 election, gives the following results:


And using the 4pp polling swing, we get:


Comparative Psephology

This, then, is the combined predictive result of the assumed 0.9pp swing for all current seats:


Meanwhile, using the 4pp swing we see:


Within and between these tables there are 13 conflicting predictions discussed separately, along with 9 seats not complying with the ALP/LNP two-party split.

Special Cases: Conflicts

Aspley
Aspley registers as ALP when applying the 4 percentage point swing to the pre-election pendulum from 2017. While I consider this the most reliable metric I have used, every other indication is that the LNP will hold this. The seat has been held by the LNP or its predecessors in every election except between 2001 and 2009. But general discussion of the election suggests the polarising views on the Adani mine put Brisbane seats like Aspley further in the ALP camp and rural seats in the LNP's.

This will be a close race, and certainly one to watch. If I had to pick a winner, it'd be ALP by a nose.

Bonney
Bonney is a new seat with no data from the 2015 pendulum, and divided in the 2017 swings since ALP can win it with a 4pp swing, but not a 0.9pp swing. I genuinely cannot tell how this one will fall, so I'm bringing back the tossup.

Burdekin
Burdekin is an ALP seat by all metrics except by a 0.9pp swing from the 2015 pendulum. On the 2017 pendulum it is already an ALP seat. ALP tipped to win/hold this one.

Caloundra
Caloundra is an LNP seat in all metrics except by application of the 4pp swing to the 2015 pendulum, and has been since creation. The LNP margin increased after the redistribution. LNP tipped to hold.

Chatsworth
A seat with a mixed electoral history, tipped to fall to the ALP in a 4pp swing but not with a mere 0.9pp. Another one worth watching closely. The redistribution was advantageous for the LNP, but not drastically so and as a Brisbane seat I'll tip the LNP to come out ahead.

Everton
Everton is in a similar situation to Chatsworth, but has a long ALP history and the redistribution did not favour the LNP. ALP to win.

Gaven
Another Chatsworth-style seat: a 4pp swing will topple it to the ALP but 0.9pp will not. Very mixed history, with the incumbent before this one changing party three times in as many years, a brief NAT interruption to ALP roots and recent LNP success. I think I will call this for the ALP, but I'm getting nervous that a lot of would-be tossups are falling that way when a 50-50 split might give a better prediction state-wide.

Glass House
With a 0.9% margin after the redistribution, this is on a knife edge under a 0.9pp swing and goes to the ALP with anything higher. ALP to win.

Hinchinbrook
LNP since the 60s, LNP by all metrics except 4pp swing on redistributed figures (my favourite). LNP to hold.

Maiwar
Much like Bonney: new seat with no data from the 2015 pendulum, ALP can win with 4pp but not 0.9pp. Inner western Brisbane, ALP to win.

Redlands
Mixed history, ALP win with 4pp but not with 0.9. Inner Brisbane. ALP win.

Southport

In threat range of the 4pp on the 2015 figures, Southport was fortified for the LNP by the redistribution. LNP to hold.

Toowoomba North
Mixed history and marginal seat, Toowoomba North will fall to a 4pp swing but not 0.9. I'm tipping an LNP hold.

Special Cases: Minor Parties

The polling is patchy as to the fate of minor parties this election. Current distribution has Katter holding Traeger with 16.1%. This margin is pretty solid, and will probably resist the major parties, who are too busy edging each other out in broad strokes to target minor-party electorates specifically.

Katter also nominally holds Hill with 4.9% after redistribution, One Nation holds Buderim and independents hold Pumicestone and Cairns.

Hill may be an anomaly of redistribution. I expect the LNP to do well here and take the new seat.

Buderim was gained by One Nation after an LNP defection. Although the candidate may hold his seat, I expect a LNP return.

Pumicestone is neck-and-neck for the major parties post-redistribution (0.1% ALP over LNP). It is listed as IND after the unendorsement of the previous ALP candidate. ALP to regain.

Cairns is Labor-leaning by history, and the current status of independent is a result of resignation from the ALP. ALP to regain.

Also of note, Gladstone is ALP v independent on 2PP numbers, with a margin of 11.89% boosted by redistribution to a whopping 25.3%. This is the easiest call for ALP on the table.

Lockyer was a narrow LNP victory over One Nation. Polling makes this hard to call, but for the sake of entertainment I'll take a punt on LNP to retain. If ON were to win, they would likely back the LNP to form government anyhow.

Noosa is fascinating for its bi-polar split of LNP v Greens. Redistribution has the margin at 6.8%, which is respectable, and the combined Greens/ALP primary vote did not rival the LNP's in 2015. LNP retain.

Finally, Callide is of note as a former LNP v PUP seat, but the fall from grace of Clive Palmer probably ensures LNP retention here.
 

Conclusion

That rounds out the predictions.


That is ALP 57, LNP 33, KAT 1 and 1 tossup. Even if some of the special cases fall unexpectedly in favour of the LNP, I would tip an ALP majority government contrary to the "neck-and-neck" headlines we are seeing.

Rainbows and Sunshine (Results)

(Backdated from 19/12)

The prediction for this election was based on a swing to Labor of between 0.9 percentage points and 4.0 percentage points. There is no two-party preferred data for the state as a whole, but by averaging the 2PP data for the 65 seats that resulted in an ALP v LNP split, an approximated 51.7 two-party result for Labor can be calculated. This may, of course, shift dramatically when data from strong conservative (LNP v ONP) or progressive (ALP v GRN) contests are added in, but this is a mere 0.6pp swing to the ALP from the 2015 election.

On this data we would expect Labor to increase its hold on government, but somewhat less than predicted. For a more accurate check of our predictions, here is the seat-by-seat comparison of prediction with actual results:

*Apparently I forgot to lodge a prediction for Nicklin, but the seat was Liberal/National since foundation except for Independent Peter Wellington from 1998 to 2017 who did not contest this election. While held by Wellington, this was by a margin against the LNP, so the prediction should have been clear.
This comes to 80 correct predictions (including Nicklin) of the 92 seats predicted (excluding Bonney), or 87.0% correct.

Of the 12 incorrect predictions, five were to minor parties or independents, which are always hard to predict.

Sunday 12 November 2017

For the Love of Elections

The final resolution of the New Zealand Parliament a few weeks back once again bucked my predictions for another term of conservative government, proving that however hard predicting an election may be it’s nothing compared to the uncertainty that follows once politicians come to power.

With any luck we will fare a little better in calling the Queensland election on the 25th of this month. Before then, however, we will have the results of the same-sex marriage poll released on Wednesday, so we should have a quick look at that.

Voter Response Rate

Firstly, here is a graph for the rate of return of the survey papers.

Polling dated from the last day of data collection.

Generally, these types of polls have a margin of error in the realm of 2-3 percentage points. There are two outliers from October 2nd, however, that are well beyond this (marked in red). These results by Newgate Research and ReachTEL suggested 77% and 79% of eligible voters had returned their survey forms, about 30 percentage points higher than the previous day’s result from Essential (47%) and higher than all subsequent polls but the final one, taken one day before submissions closed.

There are many other hints that this data may be in error. The article publishing the Newgate Research figure included scepticism from Australian Christian Lobby’s director, who said the 77% figure “would surprise me”, while the source for the ReachTEL poll reported a suspiciously low 17.5% ‘No’ response. Crucially, these two polls were the only two (excluding the very first and very last) during the survey that did not ask the likelihood of voting from those who had not returned their ballots. The 77% and 79% figures are also in the ballpark of those who intended to vote in the polls taken prior to the survey beginning. I suspect, therefore, that these high numbers capture not only actually returned ballots but those who intended to return them—perhaps people who had filled out the papers and sealed them in the envelope but not yet posted them.

Whatever the reason, we will ignore these two figures going forward.


Conveniently, this line always trends positively now that the outliers have been removed, which makes sense as no one should be able to un-post their papers.

Voter Response

The latest data from Essential (see page 13) suggests that 64% of people who voted ticked ‘Yes’, almost double the combined ‘No’ and ‘Prefer not to say’ votes.

While polling has had some embarrassing moments over the last couple of years, at times wildly diverging from the actual results, this data, being based on historical fact and not subject to change like opinion, should in theory be a reliable indicator of the result.

Nevertheless, for the fun of it, let’s look at some other data collected to predict a result.

Here is a graph of support for the ‘Yes’ and the ‘No’ camps, ignoring the undecided vote (where recorded) as people either likely to not vote, or to split broadly along similar lines to those who answered:

Polling dated from the last day of data collection. Ipsos poll (9/11/2017) omitted for uncertainty. Results for dates with multiple polls are averaged.

Including the undecided/rather not say/other among polls where ‘Yes’+’No’ < 100%, the graph is similar:


This data however, is a mix of polls from people who have already voted, intend to vote or both. We can separate out some of this data accordingly:


Although voting intention seems to fluctuate dramatically, actual results seem to be quite flat and featureless compared to the preceding graphs. The available data for these graphs only begins from the start of October which, according to our very first graph, is after half of the submissions were already cast. This gave a sizable fixed baseline, which fluctuations in slowly accruing votes then had little impact upon.

Overall, there is a slight growth in the ‘No’ vote later in the survey, but not a particularly concerning one for the ‘Yes’ campaign.

The ‘other’ data among those who have already voted, presumably declining to answer rather than ‘don’t know’ as many surveys put it, is reasonably constant. By comparison, the ‘other’ among those yet to vote grows over time. This probably does not reflect voters wavering so much as a proportional reduction in people intending to vote. As more votes were cast, those not intending to vote became a larger proportion of people in this graph.

All of the graphs examined so far would indicate a safe win for the ‘Yes’ camp, with ‘No’ never exceeding 50% and only getting close towards the final weeks of the survey among those yet to vote. By the end of the yet-to-vote graphs (22/10/2017) 75% of eligible votes had already been returned. A further 9% would be sent after this date, limiting the impact of this (comparatively) high ‘No’ support.

Two More Graphs

With these last four graphs, we can calculate an approximate value for ‘Yes’ and ‘No’ support over any given period from the slope of the dividing line(s).

For the sake of producing something more than mere graphs of publicly available data, here is a table of the daily support for ‘Yes’, ‘No’ and ‘Other’ (‘?’) extrapolated from the known polls (in grey):


And here is a table of the incremental increase in surveys returned over time:


The listed averages have been chosen to align with known points in the increase of surveys returned. Prior to October 1, at least 50% of the returned surveys had voted ‘Yes’. 18% of eligible voters replied between then and October 15, with a calculated average ‘Yes’ vote of 48.13%. 2% of forms were posted in the following 24 hours according to the polls (remembering these dates are somewhat artificial as they are collected over several days), at a calculated 50.13% ‘Yes’. The next 8% before October 22nd had 57.75% voting ‘Yes’. Similar information can be determined for the declared ‘No’ votes and the unknown ‘?’s:


A similar graph can be constructed based on reported surveys returned, but the simpler way is to multiply the percentage of returned votes by the percentage that were ‘Yes’, ‘No’ or ‘?’ like so:


Which yields:


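The multiplication just described, and the reverse step used earlier to recover a period's average ‘Yes’ share from two cumulative readings, are both one-liners. A sketch with clearly hypothetical figures (the real series comes from the graphs, which aren't tabulated here):

```python
def share_of_eligible(returned_pct, yes_pct):
    """'Yes' as a share of all eligible voters:
    (% of surveys returned) x (% of returned that voted 'Yes')."""
    return returned_pct * yes_pct / 100

def incremental_yes(ret1, yes1, ret2, yes2):
    """Average 'Yes' share among ballots returned between two polls,
    given cumulative return rates and cumulative 'Yes' shares."""
    return (ret2 * yes2 - ret1 * yes1) / (ret2 - ret1)

# Hypothetical: 79% of forms returned, 64% of them 'Yes'.
print(share_of_eligible(79, 64))        # 'Yes' share of all eligible voters

# Hypothetical: cumulative 'Yes' slips from 60% (at 50% returned)
# to 58% (at 75% returned) -- the period average is lower than both.
print(incremental_yes(50, 60, 75, 58))
```

The second function shows why the period averages in the tables can sit well below the cumulative polling figures: a small dip in the cumulative share implies a much weaker ‘Yes’ among just the newly returned ballots.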
Conclusion

All of the polling throughout the survey has indicated a win for the ‘Yes’ campaign, and polling from the penultimate day of voting reports a 64% ‘Yes’ response—higher if some of the undeclared votes are also ‘Yes’. According to the final graph above, this not only makes ‘Yes’ more than 50% of the returned votes, but more than 50% of the issued votes. In other words the ‘Yes’ result would be greater than the ‘No’ result (including all undeclared votes) and the ‘did not answer’s combined.

Previous experience, particularly the British experience with Brexit, has left me a little wary of polling. It is tempting to suspect a hidden vote as we saw in the US presidential race, the Brexit vote and the UK general election. These hidden votes came from both left and right, but always in favour of the underdog, as though people were ashamed to admit they were voting for the less popular option.

On the other hand, over-cautiousness about this exact issue proved unfounded in New Zealand. In NZ, the people expected to vote voted. In the other elections we saw an atypical and unpredicted surge in voters: the disenfranchised working classes voting for Trump, whipped-up nationalists voting for Brexit and politically engaged youth voting for Corbyn. The question, then, becomes one of whether we’ve been polling the right demographics in the right proportions, something we cannot know until Wednesday’s result.

However, there are two good reasons in my mind to trust the polling in this case. Firstly, most of the polls offered an option to remain uncommitted or not answer. This allowed a pollster-shy voter base to be captured without declaring their position. If there is a hiding ‘No’ vote I would expect it to be mostly contained in the thin grey bar in the above graphs.

The second reason I trust the polling data is that this is not an election held on one day. Unlike all of the bad-polling examples, this vote was conducted over almost two months. As a result we not only have data on how people intend to vote, but on how they claim they actually did vote. This data is free from late-season changes in attitude or people intending to vote not getting around to it.

That said, completely aside from any science or reason, I will slightly hedge towards a stronger-than-expected ‘No’ vote based on nothing but gut feeling. The last polling has ‘No’ at 31%, plus a 5% undisclosed result. If this all went to ‘No’ it’d be a 64:36 (or 16:9) victory for the Yes camp. I’ll go a little further still and predict something in the order of 61:39, but nevertheless by all accounts including my own, we should see a clear ‘Yes’ result on Wednesday.