For this assignment I chose the article “Channeling Science Information Seekers' Attention? A Content Analysis of Top-Ranked vs. Lower-Ranked Sites in Google”. The study examines whether search engines are biased towards placing certain types of links and information higher up in the list of results. The paper specifically targets results from search queries about nanotechnology.
Using an automated program, they collected data once a week, for 60 weeks, from the American version of Google (www.google.com) by submitting search queries that combined the word nanotechnology with a word representing a category, e.g. “nanotechnology AND environment”. They then selected one week from every month at random and collected the first 32 links. The final database consisted of 9,120 parent links and 224,987 child links. The program then tracked the frequencies of root words, e.g. “security”, “toxin”, “energy”, each of which represented a theme; these frequencies were used to determine what each link was about.
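To make the theme-coding step more concrete, here is a minimal Python sketch of how root-word frequencies could be counted for a page of link text and used to assign a theme. The theme names, word lists and function names are illustrative assumptions on my part; the paper's actual program and coding scheme are not reproduced here.

```python
from collections import Counter
import re

# Hypothetical root-word lists per theme (illustrative only; the paper's
# actual coding scheme is not shown here).
THEMES = {
    "risk":   ["security", "toxin", "hazard"],
    "energy": ["energy", "solar", "fuel"],
    "health": ["medicine", "drug", "cancer"],
}

def code_page(text: str) -> dict:
    """Count how often each theme's root words appear in a page's text.

    A real implementation would likely use stemming or prefix matching so
    that e.g. 'toxins' also counts towards 'toxin'; exact matching is used
    here to keep the sketch short.
    """
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(words)
    return {theme: sum(counts[w] for w in roots)
            for theme, roots in THEMES.items()}

def dominant_theme(text: str) -> str:
    """Assign the page to the theme whose root words occur most often."""
    scores = code_page(text)
    return max(scores, key=scores.get)

# Example with a made-up snippet of result-page text
snippet = "Nanotechnology for solar cells could cut energy costs."
print(code_page(snippet))       # {'risk': 0, 'energy': 2, 'health': 0}
print(dominant_theme(snippet))  # 'energy'
```

The point of the sketch is simply that once the link text is reduced to word counts, assigning a theme is a mechanical comparison of frequencies, which is what makes the method scale to hundreds of thousands of links.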
The benefit of using an automated process to collect links and quantitative data is, of course, that it makes it easier to gather large amounts of data. Large amounts of unbiased data are often more accurate and are therefore preferred. The limitation of this method is the lack of control over what data is collected: it is hard to be certain that the chosen themes and root words are accurate enough to represent reality, although the sheer amount of data can often compensate for these inaccuracies. The method in this paper could always be made more accurate by collecting more data and by using more search queries and root words to divide the themes more finely, but somewhere a line has to be drawn in order for anything to get done.
IEEE VR 2012 - Drumming in Immersive Virtual Reality
I feel that the authors of the article do a good job of complementing the scale data from the questionnaire with the movement data. With this quantitative data they were able to rule out many alternative interpretations and explain their main point in a logical and clear way. Even though they said that they held a semi-structured interview with every participant, I could not find that it was used for anything important in the text. Since the article deals with a lot of stereotypes, I think it was a smart move to lean more heavily on the quantitative data: if quantitative data is collected and interpreted well, it shows results in a very concrete and unbiased way.
Quantitative data is also a good tool for researchers who want to take a step back from their own interpretation of the scene and just look at the numbers. Enough quantitative data therefore often allows generalisations to an entire population. One disadvantage of quantitative data is that it does not tend to explain why something is done or why we perceive things in a certain way. Qualitative data is often much better when we want to understand how and why we feel, react and perceive something in a certain way. Qualitative data is also very useful when we do not know exactly what we are looking for: loose, descriptive answers can potentially lead to a greater understanding of the underlying cause. One disadvantage of qualitative data is, of course, that it is very subjective, and this must be taken into account when conducting studies that rely on this method.
References:
Channeling Science Information Seekers' Attention? A Content Analysis of Top-Ranked vs. Lower-Ranked Sites in Google