Scoring System

Our survey has four question types. We wanted to compare all of the questions in the survey in an objective way, taking into consideration not only those who supported an idea but also those who opposed it.

Why a 400-point scale?

We started with the Support/Oppose questions, which make up more than half of our core questions. These questions set our scale. If 100% of the people chose Strong Support, that question would receive 200 points (100 times 2). If 100% of the people selected Strong Oppose, the question would receive -200 points. The range between 200 and -200 is our 400-point scale.

To make our question types comparable, we mapped each of the other question types onto the same 400-point scale. How we did that is detailed in the section for each question type.

This would have been easy if all questions were of the same type, but they were not. If we simply showed the percentage that supported an idea, some question types would float to the top of the lists while others would sink to the bottom. To account for the different nature of each question type, we use the methods detailed below.

The short version – a more detailed explanation follows this summary.

Check all that apply

1 point for each check mark
-1 point for each option that was not selected
0 points for each Neutral or Other

Support/Oppose

2 points — Strong Support
1 point — Support
0 points — Neutral and Other
-1 point — Oppose
-2 points — Strong Oppose

Pick 1

1 point for each option selected

A weighted score, on the theory that a vote for one of three options is worth more than a vote for one of two options.

Ranking

Points were assigned to each option a respondent chose to rank. The number of points is determined by the number of options available in that question. The points are set to be on the same 400-point scale that we used for all other question types. The first rank always receives 2 points, and each subsequent option selected receives a proportional number of points between 0 and 2. Options marked N/A received -2 points.

The detailed explanation of how we scored each question type.

Work notes in the form of Excel worksheets with the formulas for each computation are available upon request.

Different numbers of responses for different questions

Do we take into consideration the number of respondents per question? Yes. We do this by using the percentage for or against something rather than the raw count. Because every question had enough respondents to put us within a narrow margin of +/- 4%, whether 388 people answered one question or 500 answered another, the percentages associated with each are very close to each other.

Re-scaling to a 400 point scale

Each question type resulted in a different score range. For the questions to be comparable, they have to use the same 400-point scoring range that we used for our report.

Check All that Apply

We had eight “Check All that Apply” questions. Some of these questions had an option for a person to check that they didn’t like any of the options; most did not check it. To apply the same logic to these questions as to the others, we wanted to incorporate into the results those who opposed an idea.

We looked to see whether the question had an option for specifically rejecting all of the other options. If it did, we used that as our opposition number. If it didn’t, we assumed that a person opposed the options they left unselected. We also took into consideration everyone who selected “other.” So, we took the number of respondents for the question, subtracted those who selected the option as well as the “don’t care” and/or “other” responses, and treated the resulting number as those opposing the option.

This gave us the percentages of those for or against an option. We multiplied the percentage for an option by 2 and the percentage against it by -2. This gave us a score for each item on the same scale as the other questions in the survey.
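As a rough sketch, the arithmetic for a single option might look like the following (the function and parameter names are illustrative, not taken from the original worksheets):

```python
def score_check_all_option(n_respondents, n_checked, n_other=0, n_dont_care=0):
    """Score one option of a 'Check All that Apply' question on the
    400-point scale (-200 to 200).

    Respondents who checked neither this option nor an 'other' /
    'don't care' choice are treated as opposing it, per the method
    described above.
    """
    n_opposed = n_respondents - n_checked - n_other - n_dont_care
    pct_for = n_checked / n_respondents * 100      # percentage "for"
    pct_against = n_opposed / n_respondents * 100  # percentage "against"
    return pct_for * 2 + pct_against * -2
```

For example, with 100 respondents, 60 of whom checked the option and 10 of whom chose “other,” the remaining 30 count as opposed, giving 60 × 2 + 30 × -2 = 60 points.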

Support/Oppose

Over half of our non-demographic, non-open-ended questions were of the Support/Oppose type. This question type drove our scoring system, and we made all other question types conform to it.

2 points — Strong Support
1 point — Support
0 points — Neutral and Other
-1 point — Oppose
-2 points — Strong Oppose

The points we assigned to each option here naturally result in a 400-point scale. If 100% of the respondents expressed Strong Support for a proposal, then 100% * 2 points = 200. Conversely, if 100% of the respondents expressed Strong Opposition to a proposal, then 100% * -2 points = -200. The difference between 200 and -200 is our 400-point scale.
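The calculation above can be sketched directly from the point table (a minimal illustration; names are ours):

```python
# Point values from the table above.
POINTS = {
    "Strong Support": 2,
    "Support": 1,
    "Neutral": 0,
    "Other": 0,
    "Oppose": -1,
    "Strong Oppose": -2,
}

def score_support_oppose(counts):
    """Score a Support/Oppose question: each response level's percentage
    (0-100) is multiplied by its point value and the results are summed,
    giving a score between -200 and 200."""
    total = sum(counts.values())
    return sum(POINTS[level] * n / total * 100 for level, n in counts.items())
```

A question where every respondent chose Strong Support scores 200; an even split between Strong Support and Strong Oppose scores 0.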


Pick 1 scoring system

As the number of options available to choose from increases, the percentage each is likely to receive is diluted. Questions with a large number of options are therefore likely to sink to the bottom of any comparison list relative to questions with fewer options. For a fair comparison, this bias needs to be compensated for.

To do this, we use the two-option question as the base. As the number of options in a question increases, the probability of any one option being selected is diluted. The formula for this is (n/2) - 1, where n, the number of options in a question, is greater than 2. We then multiply the variable (in this case, the percentage of people supporting an idea) by the result of our formula and convert it from a percentage to a real number by multiplying by 100.
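A sketch of this weighting step follows. It uses the (n/2) - 1 formula as stated; treating two-option questions as the unweighted base (weight 1) and expressing support as a fraction between 0 and 1 are our assumptions about the intent:

```python
def pick1_adjusted(support_fraction, n_options):
    """Weight a Pick 1 result to compensate for dilution across options.

    support_fraction: share of respondents (0-1) who picked the option.
    Questions with more than two options get the (n/2) - 1 weight from
    the text; two-option questions are assumed to be the unweighted base.
    Multiplying by 100 converts the fraction to a real number.
    """
    weight = n_options / 2 - 1 if n_options > 2 else 1.0
    return support_fraction * weight * 100
```

For a five-option question, the weight is 1.5, so 40% support becomes an adjusted score of 60.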

We converted all question types to the same 400-point scale so that they can be objectively compared. We did this using this formula: (((new max - new min) * (variable - old min)) / (old max - old min)) + new min
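This is a standard linear rescaling, and it can be written as a one-line helper (the default new range of -200 to 200 reflects the report's 400-point scale; the function name is ours):

```python
def rescale(value, old_min, old_max, new_min=-200.0, new_max=200.0):
    """Linearly map a value from [old_min, old_max] onto
    [new_min, new_max], per the formula in the text."""
    return ((new_max - new_min) * (value - old_min)) / (old_max - old_min) + new_min
```

For instance, a value halfway along its original range maps to 0, the midpoint of the 400-point scale.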

This causes the adjusted Support % to convert to a 400-point scale, which is the same scale we use for all of the other questions.

The spreadsheet used is available upon request.

This graph shows the results of the weighting we gave to the Pick 1 questions. The most noticeably adjusted score is question 31 A, because it is a question with 5 options and benefits the most from this system. The blue bar is the score the option would have received if we had not weighted the system. The orange bar is the weighted score we used.


Ranking

We used Survey Monkey to do something it was not designed to do. We used Survey Monkey's ranking function not only to tell us which options people ranked higher than others, but also asked respondents to check N/A if they did not like an option. This allowed someone to cast a negative vote for an idea rather than just ranking it lower than something else.

To account for this negative vote, we had to calculate the score manually, since Survey Monkey only scores “supported” ideas. We wanted both to score ranked ideas and to account for those who didn’t like them.

To do this, we multiplied the percentage of respondents for a particular option by a number between 2 and 0, putting us on the same scale as the other questions. Ideas marked N/A were assigned a -2 score. This results in a 400-point range for our scores.
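The per-rank point values can be sketched as follows. Reading “proportional” as evenly spaced steps from 2 down to 0 is our assumption; the function name is illustrative:

```python
def rank_points(rank, n_options):
    """Points for one ranked choice on the 400-point scale.

    The first rank gets 2 points and the last gets 0, with evenly
    spaced steps in between (our reading of 'proportional');
    an N/A mark gets -2, the negative vote described above.
    """
    if rank == "N/A":
        return -2.0
    return 2.0 * (n_options - rank) / (n_options - 1)
```

In a five-option question, ranks 1 through 5 would be worth 2, 1.5, 1, 0.5, and 0 points; each value is then multiplied by the percentage of respondents who gave the option that rank.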

When all of the numbers were totaled, we found that some options had so many N/A selections that the idea had a negative score. For graphing purposes, we then proportionally adjusted the scores so that all numbers were positive; this way, the graphs make more intuitive sense. We did not make the proportional adjustment when comparing the questions using methods other than graphs.

Our adjustments and scaling have no effect on the order in which the stats are presented. To read the ranking graphs, note that, of the ideas presented, the leftmost options are the ones people prefer compared to those further right. Also consider the number of people marking an option N/A: the higher the red bar, the fewer people thought the option was appropriate for the question.