Archive for March, 2010

Brittany Kachingwe


Reading Response 3

Austin & Pinkleton: 243-250, 259-261

Austin and Pinkleton discuss the various features of a questionnaire's design that affect the response rate it will receive.

–       Introductions: Both mail surveys and telephone surveys must have introductions that explain what the study is about, who the sponsor is, how the results will be used, why respondents should be part of the study, and whether their responses will be confidential or anonymous.  Mail surveys use cover letters, while telephone surveys use shorter (two-sentence) introductions.

–       Prenotification Cards or Letters: let respondents know in advance that a survey is coming, making them more prepared. Postcards work better than letters.

–       Follow-up Mailings: reminders can raise response rates from 5-40% to 75% or higher.

–       Incentives: monetary incentives are given before rather than after the survey to encourage the person to respond.

–       Sensitive Questions: placed at the end, so respondents have already filled out most of the survey if they choose to quit.

–       Encouragement: during telephone surveys, regularly thank the respondent and let them know the end is near.

–       Response set: scatter questions about the same concept instead of clumping them, so respondents don't fall into answering every item the same way.

–       Length: mall-intercept surveys should fit on 5×8 cards. Phone surveys should run no longer than 40-50 questions or 5-8 minutes.

–       Interviewer Directions: enthusiastic, polite, confident, and clear to all

Although no perfect survey exists, Austin and Pinkleton believe that attention to these nine categories will help increase both response rates and the validity of the responses.

Gawiser & Witt: Chapter 12

Although sampling error is the main type of error that can affect a poll, Gawiser and Witt discuss several other sources of error that can be problematic for polls.

–       Question order: the order of questions should be planned carefully, since earlier questions can influence answers to later ones.

–       Non-response: when one group of people responds to a questionnaire and another does not, the results are biased.  It is one of the hardest errors to measure because it is outside the researcher's control.  Very little can be done to correct or adjust for it, although you can weight the responses to adjust for demographic imbalances in the sample.

–       Background Noise: avoid reporting results on a topic that is not well known, because it is hard to evaluate how informed respondents are about the subject.

–       Interviewer-caused errors: a lot of training goes into making sure interviewers know all about the survey and how to ask the questions in an unbiased way.  Proper training and supervision should limit this error.

–       Data Processing errors: errors can creep in when answers are recorded on paper or entered into a computer. Computer-Assisted Telephone and Personal Interviewing limits these, but the software can introduce processing errors of its own.
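The weighting adjustment mentioned under non-response can be sketched in a few lines. This is a minimal illustration of post-stratification weighting; the demographic groups and percentages are invented, not figures from the reading:

```python
# Post-stratification sketch: weight = population share / sample share.
# Groups and shares below are hypothetical, for illustration only.
def compute_weights(sample_shares, population_shares):
    """Return a weight per group that corrects demographic imbalance."""
    return {g: population_shares[g] / sample_shares[g] for g in sample_shares}

# Suppose women are 60% of respondents but 51% of the population:
weights = compute_weights({"women": 0.60, "men": 0.40},
                          {"women": 0.51, "men": 0.49})
print(weights)  # each woman's answer counts ~0.85x, each man's ~1.225x
```

Multiplying each response by its group's weight makes the weighted sample mirror the population's demographic mix, which is the adjustment Gawiser and Witt describe.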

Gawiser and Witt: Chapter 5

Polls today:

–       Thousands of private and public polls are done each year to help companies and candidates make decisions

–       Polls are conducted by household companies, polling firms, the news media, academic organizations and the federal government

–       Subject matter ranges from poll to poll.  Some have several topics in one poll while others focus on a single subject.

–       Questions can vary as well from easy to answer to lengthy questions on broader topics.

–       Polls are normally reported by journalists with training in polls, poll analysis and the reporting requirements.

–       Epidemiological surveys are used to analyze the relationships between population characteristics or behaviors and diseases.

–       Audience and subscriber research helps broadcast media see how on-air personalities are rated

–       Pseudo-polls are non-scientific polls (SNL)



Brittany Kachingwe


Reading Response 2

Austin & Pinkleton: Survey Research (ch.10)

In Austin and Pinkleton's chapter on survey research, they state that more and more organizations feel the need to understand the attitudes and motivations of their target audience through the use of surveys.  They say that there are four ways to conduct survey research: mail surveys, telephone surveys, online electronic surveys and personal interviews.

  • Mail Surveys
    • Mail surveys are questionnaires sent to a sample of individuals through the US mail and then returned to the researcher.  It is important to attach a well-written, short, to-the-point, and self-explanatory cover letter with instructions, so as to motivate the participant not only to fill the questionnaire out but to return it as well.  Follow-up letters are sent to encourage responses, and the questions themselves are made as self-explanatory as possible.  Although mail surveys are among the least expensive methods, provide a high degree of anonymity and allow for no interviewer bias, the response rate stays low (5-40%), results can take up to 8 weeks, and there is no questionnaire flexibility.
  • Telephone Surveys
    • Telephone surveys are randomly placed phone calls to household phones in order to retrieve data.  They require carefully trained interviewers who pay attention to the introduction (it must engage), read the questions exactly as they are written, and keep a call record.  Pre-contact increases the response rate, as does the number of call backs placed (3 call backs = 75% contact and 6 call backs = 95% contact).  Random-digit dialing (RDD) is used to provide an equal probability of reaching any household with telephone access.  Telephone surveys used to produce up to 90% response rates, but that figure is declining because of technology (answering machines, caller ID, cell phones).  They are inexpensive, the data can be compiled quickly and the rapport with the respondent increases compliance.  However, interviewer bias can occur, not every household has a telephone and there is limited interview flexibility.
  • Online Electronic Surveys
    • Through email and other survey sites, this inexpensive method allows for no interviewer bias, fast responses and a wide sample. Problems do arise because of limited internet access, low response rates and lack of computer knowledge or special software. I am a member of www.valuedopinions.com; I am emailed different surveys once a week and I'm compensated if I fill out the whole survey.  They ask me several questions before the actual survey to see if I'm a fit for it, and if I am I can continue and earn money.
  • Personal Interviews
    • Face-to-face interviews allow for higher rapport, higher response rates and higher quality data/flexibility. Unstructured interviews ask broad questions that lack high degrees of validity and reliability.  Structured interviews follow a predetermined order of questions, leaving less freedom to deviate from the questionnaire.  They tend to be expensive because they require a lot of interviewer training, and there tends to be interviewer bias as well.  Some variations include group-administered surveys, computer-assisted personal interviewing and mall intercept surveys.
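The callback figures in the telephone-survey bullet (3 call backs ≈ 75% contact, 6 ≈ 95%) are roughly what a constant per-attempt contact probability would predict. A quick sketch, assuming each attempt succeeds independently with the same probability (an assumption for illustration, not a claim from the reading):

```python
# Cumulative contact rate after n attempts, assuming a constant
# per-attempt contact probability p and independent attempts.
def contact_rate(p, attempts):
    return 1 - (1 - p) ** attempts

# Solve for the p that reproduces the 3-callback figure: 1-(1-p)^3 = 0.75
p = 1 - 0.25 ** (1 / 3)                # p is roughly 0.37 per attempt
print(round(contact_rate(p, 3), 2))    # 0.75
print(round(contact_rate(p, 6), 2))    # 0.94 -- close to the ~95% cited
```

The model slightly undershoots the cited 95% at six call backs, which is expected: real callbacks are scheduled at better times, so later attempts can do better than independent chance.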

Gawiser & Witt: Timing is Everything (chp.10)

Gawiser and Witt talk about how timing is everything when it comes to producing a poll.  John Mueller describes the “rally ’round the flag” phenomenon, in which presidential approval ratings increase during times of crisis (G.W. Bush after 9/11).  One can manipulate poll timing to benefit an election by taking a poll after several of a candidate's ads have aired on television.  Instant polls are taken directly after an event, e.g. an election, sports game or crisis.  “Panel-back surveys” interview a respondent before and directly after an event, which allows for a measure of the change in opinion.

Gawiser & Witt: Who Did It? (chp.6)

Gawiser & Witt talk about why it is important to know where a poll comes from and who wrote it.  Sometimes a story or press release will not have all the information needed to decide whether or not the poll should be used.  Knowing where the poll comes from also helps determine whether it is a reliable source.

Gawiser & Witt: Who Sponsored It? (chp.7)

Gawiser & Witt say that it is important to know who sponsored a survey because it allows the journalist to make a judgment about why the poll was conducted in the first place, and about the poll's overall validity.


Tricia Dean

The assigned reading from Austin and Pinkleton’s book discussed how the design of the survey or poll can directly impact the response rate. The first reading section went through the order of a survey mentioning how the introduction should be used to motivate the respondents to cooperate and how the length of the introduction depends on the type of survey that is being conducted. Other ways to increase the response rates would be to provide incentives for completing the survey or sending out prenotification cards. Providing encouragement, having the appropriate amount of questions and saving sensitive questions for the end of the survey are the final additions that Austin and Pinkleton provide to increase response rates.
The second section of their book discussed the different rates that should be used when reporting the responses of the survey. Austin and Pinkleton explain the differences between the total sample size (total number in the study), valid sample size (after removing invalid numbers and addresses), completion rate (number of respondents who complete the survey out of the total sample size), response rate (number of respondents who cooperate out of the valid sample size), refusal rate (declined to answer out of valid sample size), and the noncontact rate (number of people who could not be reached out of the total sample size).
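The six figures described above can be written out as arithmetic. A minimal sketch with made-up counts (the formulas follow the definitions in the paragraph; the numbers are invented):

```python
# Sample-size accounting per the Austin & Pinkleton definitions.
# All counts below are hypothetical.
total_sample = 1000                 # everyone drawn into the study
invalid = 150                       # bad numbers / bad addresses
valid_sample = total_sample - invalid   # 850

completed = 420                     # finished the survey
refused = 130                       # declined to answer
noncontact = 300                    # could never be reached

completion_rate = completed / total_sample   # out of the total sample
response_rate = completed / valid_sample     # cooperators out of valid sample
refusal_rate = refused / valid_sample        # refusals out of valid sample
noncontact_rate = noncontact / total_sample  # unreachable out of total sample

print(completion_rate, round(response_rate, 3))  # 0.42 0.494
```

Note that the completion and noncontact rates use the total sample as the denominator, while the response and refusal rates use the valid sample, so the four rates are not directly comparable to each other.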
The required reading from Gawiser and Witt's book, chapter 12, discussed the different errors that can occur when conducting a study. They mentioned that the majority of errors in a study are subtle and cannot be accounted for in a quantifiable manner. The question order error describes how preceding questions may impact the respondent's answers to the questions that follow. Nonresponse bias occurs when a specific group refuses to participate; the Literary Digest poll was an example where Democrats did not participate, and this error was not taken into account when reporting the findings.
They also mention the impact of background noise and how many people will answer a question that they have no background knowledge about. Finally, similar to some of the information provided by Austin and Pinkleton, Gawiser and Witt discuss the impact of interviewer-caused errors, which may stem from interviewers' reactions to answers, their tone of voice when asking certain questions, or the effects of poor training.
The Understanding and Interpreting Polls course discussed the special issues that journalists may encounter regarding response rates, advances in technology and the ethics behind political polling. When calculating response rates, it is important to consider the length of time the poll is administered over, because a longer field period allows for more callback attempts but also leaves room for important news events that may influence responses. Furthermore, the increase in cell phone use has created a movement away from landline phones, which may hinder a journalist's ability to connect with a respondent. Finally, the course touched upon the concept of “push polls” and how political campaigns often utilize them to spread negative reactions towards their competition or positive reactions towards themselves. These polls do not use random samples, often ask a very small number of questions that are strongly worded in one direction, and hardly ever report their findings.
In my own opinion, it can be frustrating to conduct a survey knowing that there will always be a certain level of error in the responses. Learning about the various ways that surveys can be perceived as biased or produce biased answers is a little intimidating as a future professional within the marketing world. In my own experience, I conducted a focus group in my Marketing Research class, and we worked really hard to ensure that our questions did not mislead the respondents or appear to favor one answer over another. On paper, our survey seemed to be as unbiased as possible. However, when the questions were asked aloud in the focus group setting, there were times when the tone of voice of either the question administrator or fellow respondents seemed to impact the responses of the participants.
To provide a more specific example, we asked a question about the differences between going through a private landlord and going through a rental company (i.e. Apartments Downtown). A slight majority had actually rented from Apartments Downtown, and three of them had a lot of negative feedback regarding the company. I noticed that the other two respondents who had rented through Apartments Downtown were quieter and more reserved and did not provide much input. I am pretty sure this is due to their positive experience with Apartments Downtown; they were too afraid to go against the apparent majority and voice their opinion in the group setting. Of course, due to time constraints, we were unable to also poll each participant individually, so it was impossible to tell whether that was indeed the motivating factor in their silence or whether we had actually come across a polling error.
An example of a real poll story that relates to both question order error and the impact of important news events occurring while a survey is in the field is the survey of American reaction to the U.S. Supreme Court declining to place a limit on political or union campaign spending. The survey opens with questions about approval or disapproval of Obama, the recession, and the health care reform. Later in the survey, respondents are questioned on their support for not capping campaign spending. In my opinion, I would probably be affected by the earlier questions about the recession and the poor economy. To elaborate: because I have a negative view of the economy and understand the need to be frugal in today's market, I would have a far more negative view of allowing political and union leaders to spend as much money as they want on their campaigns. There are far more useful places that the money could be going, like Haiti, rather than to support more unethical “push polls”. That would be an example of the question order error.
With regards to the time period over which the survey was administered, Obama's approval rating continued to decrease over time from when the poll was first sent out in February of 2009 to February of 2010. In what was close to a full year, Obama pushed for the health care reform as well as reneged on his promise to pull troops out of the Middle East. Depending on the views of each respondent, these are substantial events and actions that I am almost positive directly affected his approval rating.
All in all, I have learned that it truly is impossible to conduct a survey with a 100% response rate, let alone one that does not have any errors. However, by being more informed about these biases, errors, and various rates, I will be able to report my findings more accurately because I am aware of these issues.


Kata Welsh

30 Mar. 2010


Reaction Paper 4

Analysis of Poll Data

Conducting a good poll requires the most efficient and effective surveying system. With technology advancing, poll calculating is becoming much easier with the use of sophisticated software systems. Two of these systems are computer-assisted telephone interviewing (CATI) and computer-assisted data collection (CADAC). These data-entry systems make collecting and recording survey data much less confusing and inconsistent; according to the reading, using these systems can reduce input errors by 77%. In addition to easy entry of data, CATI and CADAC make it easier to interpret the survey's results. Instead of offering the same list of questions to those who do and do not apply, these systems can automatically present a new set of questions based on how the previous ones were answered. This electronic step has created a new era of poll conducting; the CATI and CADAC systems are now common among most professional research firms.
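The branching described above, where the next question depends on the previous answer, is the heart of what CATI and CADAC automate. A tiny sketch of such skip logic; the questions and answer values are invented for illustration:

```python
# Minimal skip-logic sketch: the answer to one question decides which
# question the interviewer sees next. Question wording is hypothetical.
def next_question(smokes):
    if smokes == "yes":
        return "How many cigarettes do you smoke per day?"
    # Respondents who don't apply skip straight past the follow-ups.
    return "Have you ever smoked regularly?"

print(next_question("yes"))
print(next_question("no"))
```

In a real CATI system this routing table covers the whole questionnaire, so interviewers never have to hunt for the right skip instruction on paper.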

In some cases the wording and setup of questions within the survey can make a difference in the way a question is interpreted, and in turn, responded to. Directionality, data-entry conventions, and coding help keep the recording of these responses consistent. Directionality is how a numbering system, for example, can symbolize a particular stance on an issue, based on the question asked and how the ranking maps onto the numbers. In these cases the positive response should always be attached to the highest number. Data-entry conventionality is sticking to conventions of how certain questions are routinely asked. Coding is a system for interviewing personnel, typically in phone surveys, that guides the interviewer through the questionnaire, replacing unspoken information from the respondent with abbreviations. The example given was an indicator for “refused to answer,” abbreviated RF. Another important aspect of surveying is having properly trained interviewers to conduct your survey. There is an array of qualities needed and distinct rules to be followed in order to conduct the survey effectively.

Another type of coding that surveyors operate with is edge coding. This information is set in the margins to help the interviewer as well as the survey manager after the collecting process has concluded. The first part of the edge coding is the tracking information. This includes the interviewer, the date of the interview, and any other filing-type information necessary to keep a record. Next comes the question information, which assigns the recorded answers a location in the computer; it is reduced to code and abbreviated into a short sequence of capital letters. Finally, the answer codes are another set of codes used by the computer to easily locate and edit answers in case of an input mistake. Also included as a side note to the survey is the call sheet. The interviewer uses this sheet as a record of previous interactions with a specific caller as well as to take notes for future contact.
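The abbreviation scheme described above can be sketched as a simple lookup. Only the RF example comes from the reading; the other codes here are plausible conventions added for illustration:

```python
# Edge-code sketch: expand interviewer abbreviations back into labels.
# "RF" is the example from the reading; "DK" and "NA" are hypothetical.
ANSWER_CODES = {
    "RF": "refused to answer",
    "DK": "don't know",
    "NA": "not asked",
}

def decode(raw_answers):
    """Replace known codes with readable labels; pass real answers through."""
    return [ANSWER_CODES.get(a, a) for a in raw_answers]

print(decode(["5", "RF", "3", "DK"]))
```

Keeping the codes in one table is also what makes data cleaning practical: a mistyped code simply fails to match and stands out for correction.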

After collecting and analyzing survey data, the issue of reporting can create different interpretations. Depending on whether the data are univariate or involve many variables, the representation of the information must be clear and able to be interpreted unambiguously. Through the process of creating, conducting, and analyzing, polling leaves lots of room for error, and it should be done with much caution and preparation.

I feel the computer software has created much more solid reassurance of the accuracy of polling and surveying. Questioning can be conducted much more smoothly, efficiently, and quickly. The information presented can then be depended on more, knowing it has been routinely checked and conducted with such features as edge coding and extensive interviewer training. There is a lot of effort put into achieving the most reliable and representative information through all the different behind-the-scenes work. Knowing all of these techniques and precautions is reassuring to those who later interpret the polls not only for what they are, but for how they were assessed.

The Washington Post illustrates a prime example of these kinds of methods through the methodology posted on its poll unit website. The article explains The Post's use of computer-assisted telephone interviewing (CATI) to program the survey. It also references the use of the TNS research firm to help with “programming the survey questionnaire, fielding the survey and overseeing interviewer quality, and processing and weighting the data.” It even explains the interviewer training process, consisting of “a two-day orientation/training program on interviewing skills, including practice sessions with live respondents,” as well as intense monitoring by a senior staff member for two weeks. This process closely resembles a lot of what the readings described as appropriate and expected of poll-taking services.


Austin, Erica Weintraub, and Bruce E. Pinkleton. Strategic Public Relations Management: Planning and Managing Effective Communication Programs. Mahwah, N.J.: Lawrence Erlbaum Associates, 2006. Print.

“Washington Post Polls: Detailed Methodology.” The Washington Post. WashingtonPost.Newsweek Interactive, 31 Mar. 2009. Web. 26 Mar. 2010. <http://www.washingtonpost.com/wp-dyn/content/article/2009/03/31/AR2009033101229.html>.


Kata Welsh
25 Mar. 2010
Reaction Paper 3
Response Rate
The design features of a survey can heavily affect the response rate in a variety of ways. There are many different features to consider when creating a survey to ensure the greatest efficiency and the highest response rate. Depending on the type of survey, different considerations apply. The survey's introduction is one feature that can make or break respondent cooperation. A mail survey introduction must be a detailed description of the uses and intentions of the survey, whereas a telephone introduction must be brief and straight to the point to avoid hang-ups. Also to be considered is the timing of the call when conducting telephone surveys. Generally between 6 and 9 p.m. on Sunday evenings is acceptable.
Many methods also exist to encourage responses. Prenotification cards and follow-up mailings before and during the surveying period may remind and influence respondents to get to a survey that they might otherwise never open or remember to fill out. Another influence on responses is the availability of incentives to the respondent. More easily awarded to mail respondents than telephone respondents, incentives usually range from gift certificates to product samples to monetary incentives typically under $10.
To keep respondents lively and interested in continuing and finishing the survey, tricks such as delaying sensitive questions, keeping to a specific and desired length, and consistent encouragement help to ensure positive reactions towards the survey. Specific interviewer directions as well as limiting response sets and accounting for language and cultural diversity can avoid confusion and hopefully end in the most accurate results.
When reporting the information found through a survey, there are six key figures to take into account. The total sample size counts all members requested to participate in the study, while the valid sample size counts only those it was actually possible to reach. From these, surveyors can calculate four different rates. The completion rate tells how many actually completed the survey out of the total sample. The response and refusal rates count who responded and who declined out of the valid sample, and the noncontact rate represents those who could not be reached out of the total sample.
As read in Gawiser and Witt, failing to account for these factors can result in error. Errors arise through missing demographics due to nonresponse, data-processing mistakes, bias in question ordering, or interviewer interference. In response to these errors, the only real option is to adjust for them. Methods such as weighting can help make the sample more closely resemble the population.
I personally can identify with the benefits of these methods to ensure the likelihood of responses. Each tip and trick makes sense to attract and invite positive responses and confirm a smooth questioning process. When responding to phone surveys or mail surveys, it is more likely that I would participate if the previous steps were taken. The only part I would take issue with is the continuous reassurance. Most people would get annoyed by constant praise and compliments for participating in the survey, especially when it does not seem the most honest. That would be the other downfall in this view of increasing response rates. Especially in phone surveys, the emotion from the interviewer does not always seem the most honest and truthful. Most of the time I'm not convinced these interviewers are legitimately concerned for my personal well-being, but only for the information they hope to receive from me. It sometimes seems as if I am simply a means to an end for them. It is good to know, however, that these issues are taken into consideration to help ease the process for both interviewer and respondent.
As a real-life response rate example, The Los Angeles Times illustrates its polling methodology in its poll with the USC College of Letters, Arts, and Sciences. They explain how, as described earlier, to counter polling error the data are weighted to census regional and demographic characteristics. They also indicated their employment of bilingual dialers to accommodate both Spanish and English speakers as an added convenience. Their policy on dialing and redialing in cases of telephone nonresponse is to call each number up to five times, permitting callback rescheduling and redialing of initial hang-ups. The LA Times's policy clearly demonstrates what the two sets of authors explained in today's readings.

Austin, Erica Weintraub, and Bruce E. Pinkleton. Strategic Public Relations Management: Planning and Managing Effective Communication Programs. Mahwah, N.J.: Lawrence Erlbaum Associates, 2006. Print.
Gawiser, Sheldon R., and G. Evans Witt. A Journalist’s Guide to Public Opinion Polls. Westport, Conn.: Praeger, 1994. Print.
“How the Poll Was Conducted.” The Los Angeles Times. Tribune Company, 8 Nov. 2009. Web. 23 Mar. 2010.


Kathleen Graham
30 March 2010
Qingjiang Yao

Reaction Paper #3
In the readings for today, we learned about analyzing poll data. One major thing that was stressed was how research managers need to think ahead to analysis and research strategies. Other things that were introduced included designing surveys for easy data entry, reporting univariate relationships, reporting relationships among variables, and more.
When designing surveys for easy data entry, CATI (computer-assisted telephone interviewing) or computer-assisted data collection (CADAC) can be a wonderful option. As described in the reading, “CATI can eliminate the need for cumbersome data-entry procedures” (Austin, Pinkleton, 252). In this type of survey the computer automatically updates the database as new information is entered. CATI has also become very popular for personal interviews, especially those that cover sensitive topics.
Directionality is also extremely important when designing surveys for easy data entry. As described in the book, “directionality refers to the match between the numbers that represent answers to questions in the computer and the idea they symbolize” (Austin, Pinkleton, 252). An example of this would be always using a high number to indicate a positive response.
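The directionality rule described above also covers items worded in the opposite direction: a negatively worded item has to be reverse-coded so that a high number still means a positive response. A minimal sketch, assuming the usual 1-5 scale (the scale width is an assumption, not a figure from this page):

```python
# Reverse-code a score on a low..high scale so that high numbers
# always indicate a positive response (the directionality rule).
def reverse_code(score, low=1, high=5):
    return low + high - score

print(reverse_code(1))   # 5
print(reverse_code(4))   # 2
print(reverse_code(3))   # 3 -- the midpoint maps to itself
```

Applying this to every negatively worded item before analysis keeps all questions pointing the same way, which is exactly what makes the computer-stored numbers match the ideas they symbolize.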
When preparing surveys, some designers include numbers on the questionnaires, and some use what are known as code books. According to the reading, “the code book is a copy of the questionnaire with annotations that direct data entry personnel and analysts” (Austin, Pinkleton, 252). One advantage of putting the numbers directly on the questionnaire is that personnel are constantly reminded of necessary codes. On the other hand though, they must be careful that the survey is not too cluttered with numbers that it becomes difficult to understand.

In some cases, margins are used as a place to put coding information. These codes include tracking information, question information, and answer codes. An important thing to remember though is that corrections will always be needed. The process of fixing these mistakes in the data set is known as data cleaning.
Another thing we learned about through today's readings was the reporting of univariate relationships. As described in the book, “the minimum information usually required for a research report includes frequencies, percentages, means, and for some clients, standard deviations” (Austin, Pinkleton, 261). Frequencies are helpful because they tell clients how many people answered each question with each response. In addition, frequency tables usually include both the number of people who did and did not answer the question, and they include percentages, making it easy for the reader to make comparisons. Frequencies can also be shown visually by using tables, bar charts, histograms, or pie charts.
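A frequency table of the kind described above takes only a few lines to build. This is a minimal sketch with invented responses; a real report would do this per question:

```python
from collections import Counter

# Invented responses to one question; None marks a non-answer.
responses = ["agree", "agree", "disagree", "neutral", "agree", None]
answered = [r for r in responses if r is not None]

counts = Counter(answered)
for answer, n in counts.most_common():
    print(f"{answer:10s} {n:3d} {100 * n / len(answered):5.1f}%")
print(f"no answer  {responses.count(None):3d}")
```

This produces exactly the minimum univariate report the book describes: each response option with its count and percentage, plus the number who did not answer.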
When reporting relationships among variables, or comparing results for several groups, the crosstab table usually provides the most insightful format. Crosstabs usually present frequencies as column or row percentages, and the interpretation of the data varies depending on which layout is chosen. One must be careful if displaying both rows and columns, though, because it can be very confusing. Another option for presenting relationships is the correlation coefficient, which is useful in examining the strength and direction of the relationship between two interval-level or ratio-level variables.
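The row-percentage layout mentioned above can be sketched directly: each row (group) is scaled to sum to 100%, so groups become comparable regardless of their sizes. The groups and counts here are invented for illustration:

```python
# Crosstab sketch: raw counts per group, reported as row percentages.
# The age groups and counts are hypothetical.
table = {
    "under 30": {"yes": 30, "no": 10},
    "30+":      {"yes": 20, "no": 40},
}

def row_percentages(table):
    """Scale each row so its cells sum to 100%."""
    out = {}
    for group, counts in table.items():
        total = sum(counts.values())
        out[group] = {a: 100 * n / total for a, n in counts.items()}
    return out

pct = row_percentages(table)
print(pct)  # under 30: 75% yes; 30+: ~33% yes -- the contrast a crosstab reveals
```

Had the same table been scaled by columns instead, it would answer a different question (what share of "yes" answers came from each group), which is why the choice of layout changes the interpretation.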
One part of the reading that I definitely agreed with was the fact that CATI or computer-assisted data collection eliminates a lot of confusion. Since telephone or personal interviewers enter data directly into the computer as they talk to you, it is much more efficient. On the other hand though, I don't think that I would be very comfortable reporting on sensitive topics to a complete stranger. In the book, it talks about how CATI is becoming more popular for personal interviews, but I think that I would not feel comfortable reporting my behaviors if the subject were too personal.
When searching for a poll to support my views on CATI, I found something extremely interesting. In an experimental study that assessed smoking behavior among 12–17 year olds in California, two survey methods, telephone audio computer-assisted self-interviewing (T-ACASI) and computer-assisted telephone interviewing (CATI), were used with interviewees. In T-ACASI, the participants listened to prerecorded questions and responded by pressing a number on their telephones. In CATI, interviewers administered the questions and entered the youths' responses into a computer. The results showed that youth may be more likely to report their behaviors in a self-administered survey like T-ACASI than in an interviewer-administered survey like CATI (Public Opinion Quarterly 2004). I found this interesting because apparently there is an even more private way to report sensitive behaviors.


Kristin Callahan
March 30, 2010
Qingjiang Yao

Reaction Paper 2

     Austin & Pinkleton begin Chapter 12 by explaining the simplicity one should aim for when answering, analyzing, and reporting surveys. Given that the majority of the work goes into the actual writing of the questions themselves, designers also need to keep analysis and reporting methods in mind. The speed and reliability of the results can only increase by doing this. Therefore, planning what results will need to be reported can aid analysis and guarantee that data are collected in a way that makes it possible to create the necessary tables and charts. However, in order to do all of this, the first step is to design a survey for easy data entry. While reading, I was introduced to several formats that can be used to create easily analyzed questionnaires. One of the most prominent systems used by several survey research centers is CATI, computer-assisted telephone interviewing. This is a system in which telephone or personal interviewers enter data directly into the computer as they talk with respondents (Austin & Pinkleton). CATI, or a similar system known as CADAC, computer-assisted data collection, can reduce or eliminate the need for unwieldy data-entry procedures through its straightforward workflow. Interviewers using one of these systems typically wear a headset and ask questions presented by the computer. As they type in the answers from each respondent, the computer automatically updates the database to incorporate the new information. These systems prove favorable because computer-aided data entry can reduce data-entry errors by as much as 77% (Austin & Pinkleton). In addition, it can lessen confusion on the more complicated questions that often occur throughout questionnaires.
     Another area designers need to pay attention to when creating surveys is directionality. According to Austin & Pinkleton, directionality refers to the match between the numbers that represent answers to questions in the computer and the ideas they symbolize. Designers may assign any number to any answer. However, many people are familiar, and comfortable, with seeing things correlate a certain way. For example, it is common to assign the number 1 to an answer representing “strongly disagree” and the number 5 to an answer representing “strongly agree.” Whatever the wording, whether “strongly disagree” or “not very important,” lower numbers usually symbolize this type of feeling or reaction. That is why there is an advantage to including numbers on the questionnaire. Numbers can help respondents follow along when questions include a lot of response options. This is beneficial to the respondent and the personnel because it ultimately makes both of their jobs easier. Questionnaire designers must be careful, however, to ensure the questionnaire does not become so cluttered with numbers that it becomes difficult to read or follow. Ultimately, the designer needs to determine which presentation provides the most helpful, appealing information.
     I believe the reading does a great job of not only explaining, but showing, the thought process that happens when compiling a survey. As Austin & Pinkleton pointed out, designing a questionnaire goes beyond the writing of the questions. The entire layout, from the answers to the presentation, plays a part in the questionnaire. You want the respondent to have a smooth experience from start to finish with no uncertainty or discomfort. Therefore, it is essential to pay attention to the things that seem obvious, because that allows for an uncomplicated experience. I found a poll story online about President Obama's recent speech. It was not asking whether or not you agreed with what he was addressing, but whether or not you had a desire to go. I found this questionnaire to be poorly put together because the question does not have just a “yes” or “no” answer. Perhaps some students had class and wanted to go but were not able to, or they did not get approved to buy tickets. There are several ways to answer this question, and I think using a number scale, or even an open-ended question, would have been a better way of going about it.

Read Full Post »
