The widely publicized Lake Research Partners survey on horse slaughter in the United States, conducted on behalf of the ASPCA, shows that voters overwhelmingly oppose the slaughter of American horses for human consumption. The opposition is strong among horse owners and non-owners alike, across every key demographic and geographic group, and across political party lines.
Check the survey itself to see how it is stratified across the various demographics. What does it really mean? To start, it helps to know that surveys conducted by professional research firms are designed and interpreted by statisticians and social scientists, and unless we’re talking about a complete census, they all use some form of random sampling.
1) What makes a survey such as this more valuable than one by Survey Monkey installed on a website?
The fact that it is randomized, rather than placed on a website where only people visiting that site can see it and vote. The ability to vote multiple times is eliminated because only one phone call is made and only one vote can be cast. Survey Monkey is simply a “quick and dirty” way for marketers to ask questions, often in direct mail campaigns, and such polls are only as accurate as their distribution methods. Surveys conducted by professional research firms, on the other hand, are designed so that the results can be reliably projected to the larger population.
2) Don’t research firms keep survey costs to a minimum?
They don’t – they will conduct as elaborate and expensive a survey as their client wishes to pay for. So unless that client is the government, which has very deep pockets, there is going to be a budgeted expenditure. Included in the survey cost will be such things as the labour and material costs of designing and testing the questionnaire, meeting with the client to define needs, printing, and providing pre-stamped return envelopes or a budget for phone charges (in the case of this survey, which was a phone survey of registered voters).
Once the information is returned (or collected, in the case of a phone survey), it must be recorded and analyzed by qualified individuals. A survey of around 1,000 people is both quick and economical; the well-known national polls frequently use samples of about 1,000 persons to get reasonable information about national attitudes and opinions.
3) How can a survey of 1,000 people provide a true representation of an entire country’s view?
No survey short of a complete census (a survey of 100% of the population – EXPENSIVE and TIME CONSUMING) can capture the true population sentiment exactly. Analysts find, however, that a properly randomized survey with a moderate sample size produces estimates that are reliable within a known margin of error. The sample size required for a survey partly depends on the statistical quality needed for the survey findings; this, in turn, relates to how the results will be used.
4) How can I be sure the question isn’t leading people to the answer?
In this case, the question is straightforward: “Would you say you approve or disapprove of ALLOWING American horses to be slaughtered for human consumption? [IF APPROVE/DISAPPROVE]: Do you feel that way strongly or just somewhat strongly? [IF UNDECIDED]: Well, which way do you lean?”
The question doesn’t ask whether you “…approve or disapprove of allowing American horses to be shipped long distances and slaughtered inhumanely…,” nor does it ask “wouldn’t you say it’s about time we ended slaughter?”, which would also be a leading question. Phrasing an opinion question that way pushes the respondent toward a “yes” answer and yields a distorted or biased picture of the public’s views on the issue. Also note that the question is fairly short and uncomplicated. People are more likely to cooperate if the questions are simple, clear, easy to answer, and personally relevant to them. Most surveys are written at a grade-school reading level as well, for simplicity’s sake.
5) How is the sample selected?
In a bona fide survey (which a Survey Monkey offering is not), the sample is not chosen haphazardly or only from persons who volunteer to participate. It is scientifically selected so that each person in the population has a measurable chance of being chosen. This way, the results can be reliably projected from the sample to the larger population. This survey used registered voters as its base, so if many pro- or anti-slaughter advocates are unregistered, they would never be contacted. Quite possibly a random dialer was used to reach the respondents.
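The key property of a probability sample – that every person in the population has a known, measurable chance of selection – can be sketched in a few lines of Python. This is a toy illustration, not the firm’s actual procedure; the population size and labels are invented:

```python
import random

# Hypothetical sampling frame: the real survey used registered voters;
# here we simply label 50,000 fictional people.
population = [f"voter_{i}" for i in range(1, 50_001)]

# Simple random sampling without replacement: every voter has the same
# measurable chance of selection, n / N.
n = 1000
sample = random.sample(population, n)

selection_probability = n / len(population)
print(f"Each voter's chance of selection: {selection_probability:.2%}")  # 2.00%
```

Because the chance of selection is known for everyone, estimates from the sample can be projected back to the whole frame – which is exactly what a self-selected website poll cannot do.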
6) What errors can confound (produce false results in) a survey?
This survey had a 3.09% margin of error (MOE). That means that if you repeated this poll 100 times, then in about 95 of them (95% being the confidence level typically used for surveys) the percentage of people giving a particular answer would be within 3.09 points of the percentage who gave that same answer in this poll – so the headline figure of 80% is likely to vary upwards or downwards by no more than 3.09 percentage points. Nor would you see a complete reversal of the trend if you conducted the same poll 100 times. In other words, there would be relatively little variation no matter how many times you repeated the survey, because it has been effectively randomized to include people from various socio-economic backgrounds, political affiliations, and so on. Why 95 times out of 100? The margin of error is what statisticians refer to as a confidence interval, and the math behind it is much like the math behind the standard deviation: you can think of the margin of error at the 95 percent confidence level as being roughly equal to two standard errors of the sample proportion.
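The arithmetic behind that figure can be checked with the standard formula for the margin of error of a proportion at 95% confidence. This is a sketch that assumes simple random sampling and the worst-case proportion p = 0.5; the firm’s exact sample size and weighting would account for the small difference from 3.09%:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion.

    z = 1.96 is the familiar "roughly two standard errors" multiplier
    for a 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of roughly 1,000 respondents gives the ~3% MOE quoted for this poll.
print(f"{margin_of_error(1000):.2%}")  # 3.10%
```

At n = 1,000 the formula gives about 3.1%, in line with the quoted 3.09%.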
7) What affects the Margin of Error?
Sample size is the main driver of the margin of error. A sample of 100 produces a margin of error of around 10 percent, while a sample of about 1,000 brings it down to roughly 3 percent, which is consistent with the 3.09% MOE we see in the Lake Research sample. This also illustrates the diminishing returns of trying to reduce the margin of error by increasing the sample size: the MOE shrinks only with the square root of the sample size, so cutting it in half requires quadrupling the sample. To reduce the margin of error to 1.5%, the research firm would need a sample of well over 4,000, which of course increases time and cost to the client.
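The diminishing returns show up directly if you invert the standard margin-of-error formula for a proportion to ask how many respondents a target MOE requires (again a sketch assuming simple random sampling and worst-case p = 0.5):

```python
import math

def required_sample_size(moe, p=0.5, z=1.96):
    """Smallest n whose 95% margin of error does not exceed moe."""
    return math.ceil((z / moe) ** 2 * p * (1 - p))

for target in (0.10, 0.031, 0.015):
    print(f"MOE {target:.1%} needs n = {required_sample_size(target)}")
```

Roughly 100 respondents suffice for a 10% MOE and about 1,000 for 3.1%, but halving that to 1.5% pushes the requirement past 4,000 – each halving of the MOE quadruples the sample, and the cost.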
The type of sampling affects the MOE because the survey designer controls the design of the survey. For example, if the phone respondents were not adequately randomized, or if the designer elected to call people only from his or her regional phone directory, the survey would be poorly designed: it wouldn’t be random, and it would exclude people with unpublished or unlisted phone numbers. By randomizing the phone numbers dialed, you can still reach people with unlisted numbers, though not people without phones, of course!
The sample population is simply the total pool of individuals from whom the sample can be drawn. Proper sampling designs involve defining groups, or strata, based on characteristics known for everyone in the population, and then taking independent samples within each stratum.
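Proportionate stratified sampling – independent random draws inside each stratum, sized to match the stratum’s share of the population – can be sketched like this. The regions and population counts are invented for illustration:

```python
import random

# Toy population with a stratum known in advance for every person
# (here, region of residence).
population = (
    [("Northeast", i) for i in range(18_000)]
    + [("South", i) for i in range(38_000)]
    + [("Midwest", i) for i in range(21_000)]
    + [("West", i) for i in range(23_000)]
)

def stratified_sample(pop, n):
    """Draw an independent simple random sample inside each stratum,
    sized in proportion to the stratum's share of the population."""
    strata = {}
    for person in pop:
        strata.setdefault(person[0], []).append(person)
    sample = []
    for members in strata.values():
        k = round(n * len(members) / len(pop))
        sample.extend(random.sample(members, k))
    return sample

sample = stratified_sample(population, 1000)
print(len(sample))  # 1000 respondents, split proportionally across regions
```

Because each stratum is sampled independently, no region can be accidentally over- or under-represented the way it could be in a single unstratified draw.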
Another type of statistical error that can confound a survey is “non-sampling error.” Not everyone will respond to a survey, and sometimes respondents won’t tell the truth. Even if all non-sampling errors could be eliminated, though, the estimate would still differ from the true value to some degree because of ordinary sampling error. Unless a large number of respondents decided to lie, this would not substantially affect the survey.