
Archive for October, 2009

Coleen Dickman
019:169 Public Opinion
29 October 2009
Reporting on Public Opinion Polls
Public opinion polls can be a gift or a curse. If a journalist is investigating a specific theory and finds a poll that bears on that investigation, it can help the story immensely. However, reporting on polls raises a great many concerns. First, it is crucial that the journalist not succumb to the desire for “breaking news” and exaggerate the results. In many public opinion polls, the results aren’t as “shocking” or “unbelievable” as journalists claim. Oftentimes there are “dramatic graphs that grossly overstated the data [on] which they were based” or “the tendency to grab for the biggest headline possible”. Other times journalists can misinterpret the data they receive, mislead their audience by leaving out crucial facts, or fail to include further background on the subject. Although there are a number of concerns surrounding public polls, it’s important to recognize their worth in media today.
In an article titled “Poll: Public losing trust in President Obama,” Andy Barr reports on the data released in a poll taken by Public Strategies Inc./POLITICO. The poll asked respondents their opinion of several important policy issues, their approval rating of the government and Obama, and other current issues including the federal stimulus package and health care reform. This article is a good representation of accurate poll reporting for a number of reasons that were highlighted by Asher, Gawiser, and Witt. Barr starts the article by addressing who conducted the poll, in this case POLITICO, which is crucial to know because a biased poll has a much higher chance of misrepresenting public opinion. Other background information Barr reports includes informing the audience about the poll itself: “The poll is based on 1,000 online surveys of registered voters conducted July 9-12 by Public Strategies Inc., a business advisory firm based in Austin, Texas, in conjunction with POLITICO, and has a 3.1 percentage point margin of error” (Barr). This information alerts the reader to the dates the survey took place, which could explain whether public opinion has changed since then, or whether a specific event could have influenced the approval rating. In POLITICO’s case, there was a great deal of concern and publicity over the economic downturn and federal stimulus packages. This time period would explain why further attention was given to the poll results concerning economic issues. Barr capitalized on the public’s need for economic news and chose to expand on a question concerning the federal stimulus package. By picking what he deemed important, Barr was able to focus his article accurately without overwhelming the audience.
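(As a side note, the 3.1-point figure can be sanity-checked. For a simple random sample, the 95 percent margin of error is largest when opinion splits 50/50, and for 1,000 respondents it works out to roughly 3.1 percentage points. The sketch below assumes that standard textbook formula, which an online panel like this one only approximates; the sample size is the only number taken from the article.)

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from n respondents,
    assuming a simple random sample (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# The POLITICO/Public Strategies poll surveyed 1,000 registered voters.
print(round(100 * margin_of_error(1000), 1))  # about 3.1 percentage points
```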
The article doesn’t flood the reader with irrelevant information, but instead selects pertinent figures to represent the poll as a whole. For example, Barr reports that Obama’s approval rating relating to his “leadership on several key issues has fallen below 50 percent”. Rather than reporting the public’s approval rating on all of the issues polled, Barr chose to support this statement by reporting on the single issue of health care reform. Barr wrote, “Just as Obama intensifies his efforts to fulfill a campaign promise and reach an agreement with Congress on health care reform, the number of Americans who say they trust the president has fallen from 66 percent to 54 percent. At the same time, the percentage of those who say they do not trust the president has jumped from 31 to 42.” Not only did he report the results of a specific issue, but he was also able to use other polls to compare the change in public opinion. This change is often the most interesting part of polling data, and by reporting a change Barr inevitably expands his story to include a drop in Obama’s overall approval rating: “Obama’s personal approval rating has fallen below 60 percent in a number of recent major polls, and according to a Washington Post/ABC News survey out Monday.” Barr did not restrict his reporting to only one poll, which is a mistake many journalists make. By including ABC’s results, Barr’s story is able to report another angle, adding to its degree of ‘newsworthiness’. Another important but often overlooked factor in reporting polling data is using round numbers. Data are often given to the decimal point, but by rounding, Barr avoids implying more precision than the poll’s 3.1-point margin of error supports.
Overall, Barr was able to read the poll data accurately without exaggerating it into breaking news. He was able to present the data in a more “user-friendly” form and increase the readability of his article without losing supporting evidence. The most important conclusions drawn from the article can be attributed to the choices Barr made when selecting which poll results he would use to represent his ideas. Therefore, the most important thing to understand about polling articles is not the specific results, but what the reporter is asking the reader to focus on. In the case of the POLITICO article, Barr chooses to highlight Obama’s drop in approval ratings and attributes it to what he believes is most important, the economy. It’s important for the reader to acknowledge this fact, because there may be multiple other reasons behind a change in ratings, but Barr chose to frame his report around the economic downturn.
(Article link: http://www.politico.com/news/stories/0709/25189.html)


Jordan Underwood
Dr. Yao
MMPO
October 29, 2009
Reporting the Polls
Journalists love numbers. Numbers are what lead to facts, information, and poll results; however, reporting on the polls is a whole different story. Numbers can tell a journalist all kinds of things, but relaying them to the public is a more difficult task. The journalist is aware of the different implications as well as what the numbers mean, but the everyday reader would have no idea unless they were part of compiling those numbers for the story. Therefore, there are several things to remember when reporting the polls: the basic steps for reporting polls, the context of the numbers, and finally the media’s effect on the polls.
Polls always look good when they are first gathered. All the numbers are there and the data has been accurately gathered for the poll. Everything has been checked and rechecked to ensure as much accuracy as possible. Then comes one of the most difficult tasks the journalist faces: turning those numbers into a story that can be reported accurately. The poll can be the best in the world, but if it is interpreted wrongly, the story can turn out poorly and make the entire publication or news staff look bad. A journalist’s job is much different than a researcher’s because the journalist has to look for stories inside the polls rather than just information. They have to return to the original poll or questionnaire, analyze each question, and tabulate the responses. This often takes many hours, yet the journalist is usually on a deadline of one or two days, so they often have less time than a researcher would. It takes dedication and hard work, and the only way to accurately report a poll is to spend time rechecking the numbers. After all this is completed and the journalist feels they have pulled the interesting information out of the data, it is time to write the story. There are eight key things that come with writing the story. The first is deciding what the lead is going to be; this is not only the opening paragraph but the most important piece of data found in the poll results. The second is that the journalist must remember the wording of the questions; keeping the wording in mind allows the questions to be portrayed accurately in the story and therefore leads to an accurate representation of the data. The third point is what goes in the second paragraph; this is where the reader learns why they should care about the story. The fourth is that all the facts should be included in the story so the readers do not question its validity. Fifth is to remember to add technical details such as how people were interviewed, who interviewed them, who sponsored the poll, and what the sampling error may be. The sixth point is not to cram everything into one story; oftentimes poll stories contain an overload of information and facts that turns people off to reading them, so cutting down the story is beneficial. The seventh is never to use decimals in the story, because they imply an exact precision that does not exist. The final idea is that graphic displays are worthwhile because they can explain the poll results quickly, so everyone can understand the story in more than one way.
Reporting the polls takes a great amount of time and effort, but the context of the numbers is what gets the journalist to a final conclusion. All polls are not the same, which means the context of the numbers is always going to be different. In other words, it is a major error to assume polls are alike; even though they may have similar titles or even similar results, they are not the same, and reporting them as if they were would cause a decrease in credibility. Polls are also spread across different periods of time, and many times the time gap is large, so the numbers have to be analyzed with special care. First it has to be decided whether a similar poll has been done before. If it has, then the poll the journalist is currently reporting may have some validity. If the poll results are greatly different, however, that can be the first major warning sign that the poll was done incorrectly or that some error crept into the polling process. Therefore, comparing and contrasting the polls is an excellent idea. Another major technique journalists use with regard to the context of numbers is averaging the polls or grouping them together. This has the benefit of reducing the impact of any single poll and smoothing out short-term swings in the numbers. Problems can arise if this technique is not done carefully. Averaging may lead to problems, first, because not all the polls are done the same way or done correctly. The second major problem is that public opinion can change very quickly; if voters’ views are shifting, an average that still includes older polls will not immediately reflect the change, and journalists relying on it will not immediately report it.
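To make the averaging idea concrete, here is a minimal sketch of a sample-size-weighted poll average. The poll figures are hypothetical and only illustrate how one outlying poll gets pulled toward the rest; real poll averages weight and screen polls in more sophisticated ways.

```python
def weighted_average(polls):
    """Average an approval figure across several polls, weighting each poll
    by its sample size so that larger surveys count for more."""
    total_n = sum(n for _, n in polls)
    return sum(approve * n for approve, n in polls) / total_n

# Hypothetical polls: (approval %, sample size); the 48% poll is an outlier.
polls = [(54, 1000), (57, 800), (48, 600)]
print(round(weighted_average(polls), 1))  # 53.5 -- the outlier is dampened
```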
Newspapers and television are the two main sources from which Americans get their information about polls, so their influence is very large. Therefore, they need to live up to certain standards that give readers or viewers confidence that what they are reading is accurate, or as accurate as it can possibly be. The standards are different for each surveying group; however, it is clear that most companies require a certain set of standards when dealing with the disclosure of polls. These standards are often set in place because media reporting of polls may not always be accurate. For example, some companies require reporting who conducted the survey and who sponsored it. Another example is that companies will sometimes disclose a description of the sample design so the public can clearly see which types of respondents were selected. Overall, these standards are put in place because polling companies want to deflect blame if the poll is reported wrongly. The effectiveness of these standards is less than an average citizen would hope for or notice; however, they have led to an improvement in the reporting of polls. One major reason the standards are not very effective is that the company that sponsored a poll and the company that conducted it may be different. This can lead to major discrepancies in meeting the recommendations, because often the polls are coming from a different source than the one reporting them. Another main reason the standards may be less effective is that technical details often go unreported, and these can affect how the poll data are understood. The analysis that follows from these standards is that news sources often place too much emphasis on the polls; they are even said to have started creating the news rather than reporting it. This is why many major news companies have started to build their own polling capabilities, to show that what they are reporting is accurate. Also, news companies have shifted toward placing emphasis on the stories behind the polls and including the polls in their analysis, so as not to appear to value only the polls themselves.
Dealing with polls is not an easy issue. It is hard for an average reader to completely comprehend everything the pollster is trying to get across, and it is hard to tell whether the poll is accurate or fabricated. In my opinion, most media sources on TV that report polls to the public have certain biases, and those biases are apparent in their polls. They manipulate the polls to show what they want, and instead of reporting a story with a poll attached as evidence to support it, many news sources make the entire story about the poll. Not only does this complicate the comprehension of polls, it can also bring out hidden biases, because in order to report a poll, theories have to be presented, and everyone, regardless of skill, has different theories. In my experience, even though every news station says it has no political affiliation or is “fair and balanced,” most of them are the exact opposite. Everyone knows that Fox News supports the Republican Party while CNN usually supports the Democrats, so it is hard to take their polls seriously. In my opinion, news media sources should work harder at presenting stories with polls attached to them as proof, instead of the other way around. Poll stories are excellent, don’t get me wrong; however, they are overdone and need to be revamped to serve the consumers rather than the news station or a certain political party. This has improved in the last couple of years, but I feel there is still much more to be done to get the public’s trust in media polling back up to where it should be.
A point reiterated throughout these required readings is that a poll reporter’s job is not easy. There are many different ways they can fail, and they always have to be on their A game when it comes to reporting polls. Even when the reporter feels they have reported everything to the best of their knowledge, some people will still find fault with what they are reporting. In a recent poll conducted by ABC News, they found that fewer than 20% of Americans now refer to themselves as Republicans. This did not sit well with some Republicans, especially former House speaker Newt Gingrich. He labeled the poll a travesty to all Republicans because none of the other polling companies found similar statistics. He also noted later that it was a ploy by liberal news media sources to slant the world toward more widely accepting the Democratic viewpoint. ABC took this into immediate consideration, and the poll story that I read addressed Gingrich’s complaints. They clarified that they were not the only ones who found these results, because many other news sources found that a range of 18 to 27 percent of respondents labeled themselves as Republicans, and even though he is entitled to his opinion, he was mistaken. Overall, this shows that poll reporters’ jobs are much more difficult than people think, and even when they report the polls to the best of their ability, someone will find fault with their findings.


Julie Nakis
Reading Reaction #2

Manipulation of the Media

The readings for today dealt mainly with how the media use polls to report stories and potentially create stories around a specific poll. Asher discusses how newspapers, television, and the Internet are the major sources from which Americans learn about recent polls. Because it is the media reporting these polls, most citizens do not learn about polls through the organizations that sponsor them but rather through different media outlets. One challenge that Asher discusses is that the media do not have to comply with the standards for polling set by organizations like the National Council on Public Polls, the American Association for Public Opinion Research, and the Council of American Survey Research Organizations. These organizations created codes of conduct that specify standards for disclosure of how a poll should be conducted. The principles of disclosure are mainly aimed at the organizations that sponsor and conduct the polls, so Asher argues they aren’t as effective as they could be if the media had to abide by these rules. Consumers of public opinion polls should be aware of how the media use these polls to push a certain agenda or just to make their reports newsworthy. News media are criticized for elevating a poll to a position of prominence so that the poll itself becomes the issue or topic of a news report. This relates to Gawiser and Witt’s reading for today, because they believe journalists will grab a poll and make a story out of it if they believe it will make large headlines with a good story. Journalists can take numbers from a poll and turn them into a readable story for the newspaper or on air. Asher also criticizes the media for taking certain items from polls and leaving other items out. For example, he talks about a study on abortion where the three questions asked were worded very differently, which led to conflicting results. These conflicting answers allowed different media outlets to take each question and spin it into a completely different story on abortion, both for and against.
I found today’s reading very interesting because this summer I worked at a local Chicago television station and was able to see firsthand how the media manipulate polls for a good story. In July, a poll was released saying that 55% of people were more likely to buy generic-brand foods rather than name-brand foods to save money in the declining economy. A reporter I worked with decided to take this one poll and turn it into a two-part television special. She conducted taste tests of generic vs. name-brand foods and then took some polls of her own to validate the results. According to Gawiser and Witt, if two polls are done with similar results, there is greater confidence in the poll.
I found an article online at Glamour.com that was reporting on the upcoming shopping season. The journalist’s main point centered on the release of a new poll done by the accounting firm Deloitte. This poll found that 51% of consumers would be spending more money this upcoming holiday season and that they expected a 16% increase in spending over last year. After reading the chapters for today, I realized that this journalist probably took this poll and built the whole report for the magazine’s website around it. This website is a site for shopping and fashion, so it is easily noticeable that they are trying to push an agenda for consumers to go out and spend money by shopping. The journalist most likely hoped the poll would validate that argument to go out and shop. I did some more research on this Deloitte poll to see if it had asked any other questions that the article just didn’t report. I ended up finding the full survey online at CNN.com. The CNN journalist took a different spin on the story and wrote that, according to one item in the poll, the average number of gifts bought this season would be lower than the previous year. I found it interesting that the article written for a fashion website was taking certain items from the poll and putting a positive and capitalistic view on them, while the article for CNN was using the same poll to show that there are still problems in our economy.
In the future, as I read polls, I will pay attention to whether the reporter is sharing all the items from a poll or only using selective items to help benefit their story.


Keely Jarvill
Qingjiang Yao
Introduction to Public Opinion
27 October 2009

One of a journalist’s jobs is to provide the public with vital information about a poll, free of bias and useless information. This may be harder for journalists than society thinks. Gawiser and Witt explain in “Reporting the Polls: The Basics” that there are many components a journalist needs to know about before publishing a poll. Some of these include looking for the news in the poll, writing the story accurately, and providing graphics in the story. Gawiser and Witt state that writing a poll story is difficult, but it can be accomplished with the right techniques.
Gawiser and Witt first explain that the lack of accurate poll stories exists because journalists do not fully understand how to interpret polls accurately, and because they need to develop the story fast with conflict and objectivity, which in turn can exaggerate the poll story. One of the first points that Gawiser and Witt make is not to sensationalize your poll results. Journalists try to grab the biggest headline for a story, and that can lead to a sensationalized story. They also state that a journalist should be careful when writing the story because many polling firms have already analyzed the data for the journalist; that is the data the firms think is important, and it might not be news. Also, adding graphics to a poll story will catch readers’ eyes and help them understand more.
Gawiser and Witt then explain to a journalist the techniques that need to be used when writing a poll story. They explain that every news story needs a lead because it states what is important in the story. Wording of the question is important because you want to keep it clear and not confusing to the reader. The second paragraph must tell the reader why they should care about this poll. Remember to add all the facts and give technical details, because this gives the reader a better understanding of the poll, but don’t include too many details, because that makes the story confusing. Finally, don’t add decimals, because they imply a precision the poll’s margin of error cannot support.
I agree with Gawiser and Witt’s stances on how a poll story should be written. Reading this selection provided vital information to my learning because it taught me the components of writing a poll story, which I had no knowledge of before. I also agree that many journalists do try to pump up a poll story by providing many statistics and making it sound more important than it actually is, because they do not understand the concept.
The Washington Post did a poll story on how many people trust the media and the poll stories released on television. Fifty percent of Democrats trust the news and polls released on television newscasts, while only thirty-one percent of Republicans do. On the Internet, forty-one percent of Democrats trust the media and polls, whereas thirty-four percent of Republicans do. This may have to do with the way journalists write the poll story, given that journalists tend to be more liberal than conservative.
When reading polls now, I have learned that I need to keep in mind that it is journalists writing the poll story, not the polling company or experts. A journalist might not completely understand the full concept of the data or the poll itself and may include errors and bias in the story. Also, many polling companies prepare the data for journalists, so the resulting story might not really be news, or might be sensationalized. Gawiser and Witt explain to journalists how to write a good poll story and how to catch a reader’s eye and help them fully understand it. Even though writing a poll story might be difficult for a journalist, it is achievable.


David McNace

Reading Reaction

October 28, 2009

          In Chapter 6 of Asher’s book, “Polling and the Public: What Every Citizen Should Know,” he discusses the relationship the media have with public opinion polls. The relationship between the two can be difficult and complex because of the various intentions of the media, but it can also be of great benefit to the public if the media recognize the uses and limitations of the poll. Asher discusses a number of requirements for polls that are set by the National Council on Public Polls, the American Association for Public Opinion Research, and the Council of American Survey Research Organizations. These organizations set standards for disclosure of poll data, but Asher shows that these standards provide less protection than is expected. The poll standards apply to the survey organizations, but don’t apply to the news organizations that are covering the results.
In many situations, news organizations will only use the parts of a survey that help advance the angle they are attempting to portray. Asher uses the example of poll data from abortion questions to show that journalists could write completely different stories while still using the same polling data. Asher also notes that sample size and sampling error are two important features to report in surveys. He used a number of political scenarios as examples, showing how approval ratings don’t always tell the true story when the figures still fall within the margin of error. In order for survey data to have more validity, the data need to be presented with the same wording and question order in which they were asked, because those factors can have a much greater effect on the responses than the sampling error does.
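Asher’s margin-of-error point can be illustrated with a small sketch. The figures below are hypothetical, and the test is a common rule of thumb (comparing the gap between two readings to the combined sampling error of the two polls), not a formula taken from Asher:

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error for a proportion p from a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

def meaningful_change(p1, n1, p2, n2):
    """True only if the gap between two poll readings exceeds the combined
    sampling error of the two polls."""
    combined = math.sqrt(moe(p1, n1) ** 2 + moe(p2, n2) ** 2)
    return abs(p2 - p1) > combined

# Hypothetical case: approval slips from 52% to 50%, each poll of 1,000 people.
print(meaningful_change(0.52, 1000, 0.50, 1000))  # False -- within sampling error
```

A two-point “drop” like this one sits well inside the roughly three-point error of each poll, so reporting it as a decline in approval would overstate what the data can support.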
          In the reading for today from Gawiser and Witt, the authors discuss how a poll should be just another source of information, a source with limitations, potential problems, and biases, but a valuable source nonetheless. Many journalists lack the basic understanding of polling needed to interpret the numbers accurately, and the journalistic values of speed, freshness, conflict, and objectivity conspire to push poll stories too far. When journalists look at a poll, they need to make sure they look for the news in the survey results to figure out what is new or what has changed. Correctly reading poll data can prove tricky, because journalists may have a hard time being seen as credible by consumers if the polls they are analyzing and putting into print don’t align with other polls in the same category. As consumers, if we see consistency among polls, the results appear more professional and thus we trust what the poll is telling us.
          After reading Asher’s discussion on how journalists can write different stories depending on what the data show, I found a poll from pollingreport.com that covered people’s thoughts on abortion. In the poll, 41% said that abortions should be generally available, 35% said there should be stricter limits, 20% said that abortions should not be permitted, and 4% said that they were unsure where they stood on the issue. From this data, I can see how the results could produce a wide variety of stories depending on the angle a journalist chose to take. One journalist could say that the survey shows that only 20% of people are against abortion and that the majority of people are not opposed to abortion rights. Meanwhile, another journalist could make the case that a majority of people don’t agree with the availability of abortion today, since 55% favor either stricter limits or no abortions at all. These two opposing framings are both viable, but neither necessarily portrays the true story of what the poll is stating.
          From the reading for today, I learned how important it is to completely understand what a poll is saying when you are covering it for a news organization. The data can be represented in a variety of ways that may not show the entire picture of a particular poll. As a journalist, it is important to be able to analyze a poll and decipher what it is saying in order to report it accurately to the general public. Another thing I realized after reading these chapters is that when journalists don’t know exactly what a poll is saying, they sometimes misrepresent it in the news media. For example, when presidential approval ratings decline by a few percentage points, there is a big news story about it. In reality, however, the decline often still falls within the poll’s margin of error, which makes it much less newsworthy. The ability to read a poll is of the utmost importance for journalists to accurately inform the public.


Abby Sojka

10/29/09

Professional Standard for Reporting Polls

Yao – Media Topics

“From a journalist’s perspective, a poll should be just another source of information, a source with limitations, potential problems and biases, but a valuable source nonetheless” (Gawiser & Witt 103). This is a clear generalization of what a poll should be; a poll should be informative, with room for disagreement. According to Asher, “Newspapers and television are still the major sources of what Americans learn about polls…most of what citizens learn about the polls does not come directly from the reports prepared by the organizations sponsoring the polls.” In order for a poll to be a valuable source, the consumer should be able to form an opinion of the quality and validity of the poll. However, there are numerous ways in which a poll can be distorted or biased from the start. There are carefully designed quantitative polls that have low error, and there are poorly created polls that can have severe error and be highly biased. When journalists analyze polls, they look for the news in the survey results. Journalists look at “what is new, what has changed, what is surprising, what has the most support among the public – or the most opposition” (Gawiser & Witt). In a sense, being a professional journalist who analyzes poll data can become highly tricky. A journalist may have trouble being seen as credible by consumers if the polls they are analyzing and putting into print do not align with other polls in the same category. We as consumers must then consider whether the journalist’s poll was conducted at the same time as the other published polls, the population of the polls, the accuracy of the polls, and the time frame of the polls. If a poll conducted by the media is in sync with other polls published at the time on the same topic, then there is a stronger consensus that the poll has validity. If there is consistency in validity among polls, then the poll results and data seem more professional and are better understood by the public.

In the media, there are several tabloid magazines that each include different opinions in their polls, anything from “who wore it best” to “who is dating whom.” Popular tabloid magazines such as People or Us Weekly are great examples of non-agreeing polls. Each of these popular tabloid magazines has weekly polls that dictate what they will print in next week’s issue. Every week, both magazines may have the same topic or issue printed on the cover, but they will have different stories to tag along with the celebrity who is being taken advantage of. The polls in the magazines may ask readers what they think of a celebrity’s new girlfriend, haircut, or recent drug problem. The consumer who reads one magazine may believe what they are reading and answer the polls according to their biases. The consumer reading the other magazine may find that the results are completely different. One tabloid magazine may be publishing polls that say the celebrity has recently become engaged, while the other may say the celebrity is dating someone completely different. The public then begins to wonder where the magazines are getting their information: is the information biased? Where are the surveys taking place? How long of a time period did the survey run? How accurate are the consumers being surveyed, and how educated are they on the topic? It is possible that the people being surveyed may even work for the magazine and may only be filling out the surveys to create a bias in the magazine’s favor among consumers. It is never easy to decipher where tabloid magazines collect their data in order to publish their surveys and opinions.

I personally read polls and surveys on a daily basis, whether from a university survey, from watching CNN poll results, or simply from asking girls in my sorority their opinions on certain issues. I notice that when I see similar results coming from the polls, I feel comfortable educating myself about that topic; I feel I am getting a good sense of public opinion. When I notice disagreements in the polls being published, I question whether the polls have validity and quality or carry biases. When polls are consistent with one another, even if I am just trying to find which type of running shoe lasts the longest and has the highest quality, I tend to believe the polls and consider them to have high validity. When polls are not consistent, and they deal with a product or something that is important to me, I may feel uneasy about the product and wonder if the polls are simply biased and disorganized.

There are many ways a journalist can go about being professional when analyzing and reporting polls, and there are many ways a journalist can report the polls with full bias and misinterpretation. It all comes down to how the journalist reports, and how the consumer deciphers the reports and analyses. The polls that are consistent tend to create public knowledge and understanding. “Almost every poll can be understood more completely when other polls on the same or similar topics are used to highlight the important results and to give texture to the shape of opinion over time” (Gawiser & Witt 118).


Alyssa Mattero
Mass Media and Public Opinion

Never-ending Criticism for Media Information
This chapter in Polling and the Public brought together many of the concepts we have learned this semester. It focused on how to analyze and interpret a poll’s results. In general terms, this chapter explained how, even at this final stage, you still have to be critical.
Part of this chapter focused on examining trends over time. A poll taken ten years ago may have produced different results than a poll taken five years ago even if the wording and order of the questions were identical. This happens because different aspects of society change over time. Whether the survey asked about foreign policy, healthcare, or the television shows you watch, all of those items are subject to change over time and therefore people’s opinions about them could also change.
Another really important part of this chapter explained how people interpret polls. Two people can look at the identical sets of poll results and interpret them in completely different ways. This happens because of a person’s personal beliefs, values and purpose for analyzing the poll. This part of the chapter reminds me of the metaphor of the half empty or half full glass. One person can look at a glass and say, “That glass is half full.” Another person can look at the exact same glass and say, “No, it’s half empty.” This shows that different people can view results differently and neither one is scientifically wrong. It’s all a matter of opinion and viewpoint. This phenomenon can happen when people view poll results and therefore can lead to confusion.
It’s interesting how we use the media in society. Sometimes the media plays the role of the scapegoat, taking the blame for problems like eating disorders, obesity, and underage sex. On the other hand, the public, and many professionals such as journalists, use it as a tool and a source of information. Whatever the intentions or predispositions, information from the media must always be taken critically. This chapter in Asher seemed conclusive in a way, because it discussed the final steps in poll taking. Still, it focused on the bias found in polls and how to distinguish the credible from the inaccurate.
This chapter also talked about choosing items to analyze and present. Companies and writers have the option of including only certain items from a poll in order to support their argument. This creates bias and misconceptions about the results. For example, a Pew Research Center survey asked, “Do you strongly favor, favor, oppose, or strongly oppose requiring that women under the age of 18 get the consent of at least one parent before they are allowed to have an abortion?” If I were writing an article about abortion rates rising, I might be tempted to report that only 8 percent of people oppose the requirement. But this information is misleading, because although 8 percent of people polled chose “strongly oppose,” another 11 percent chose “oppose,” for a total of 19 percent opposed. So although the information taken from the poll is correct, the writer is purposefully leaving out some information to support her argument rather than objectively delivering the information.
The chapter concludes with the idea that it’s really up to the consumer to figure out which polls are credible and how to interpret them correctly. As a pollster, there are definitely steps you can take to make your poll clear and accurate, but there could always be contradictory information against your findings. I believe the best way to show readers that your poll is the most accurate is to present it the right way. It’s important to include as much information as possible. It is also a good idea to discuss the bias or extraneous factors that could have influenced your poll rather than hide that information.

