The Role of Communal Ratings as Cues in Participation in Political User-generated News Websites


Alcides Velasquez1, Cliff Lampe2

1 Pontificia Universidad Javeriana, Bogotá, Colombia.

2 School of Information, University of Michigan, USA.

Received: 2013-06-05
Sent to peer review: 2013-06-24
Approved by peers: 2013-09-12
Accepted: 2013-09-21

To reference this article

Velasquez, A. & Lampe, C. (December 2013). The Role of Communal Ratings as Cues in Participation in Political User-generated News Websites. Palabra Clave, 16(3), 701-728.


Abstract

Citizen journalists can participate in political user-generated news websites in different ways, including providing content, discussing with other users, and rating the content posted on the site. Users of these types of sites also form impressions of other users based on the information provided by different sources. The information supplied by rating systems, for example, constitutes a source of information that cues certain characteristics about others. However, the rating systems of user-generated news websites usually evaluate users' participation without distinguishing between types of participation. Taking into account the origin of the information carried by the cues and the communal dimension evaluated in the process of impression formation, this study examines how a set of different rating system design options might influence users' impressions of the credibility of discussants, discussion informativeness, and willingness to contribute to discussions on political user-generated news websites. The results of this study partially support the idea that communal third-party information has more importance when impressions are formed in these online settings, although further research is needed to establish the connection between rating systems and the communal characteristics of users.

Keywords

Politics, journalism, Internet, news agency, audience measurement, cultural participation. (Source: UNESCO Thesaurus).



The Role of Communal Ratings as Cues in Participation in Political User-generated News Websites

The transformation in media technology has brought a diverse set of changes to the practice and routines of journalism (Carpenter, 2008). Among these changes is the appearance of online citizen journalism. Citizen participation in the production of news in digital media offers the possibility of setting alternative agendas (Domingo et al., 2008) and diversifying news topics and sources (Carpenter, 2010). Online citizen journalism is defined as including participatory features (e.g., news production), participation in discussions through comment posting, and the use of social features such as content rating (Goode, 2009). This study examines how information provided by systems that rate different forms of citizen participation in user-generated news websites influences the perceptions of online discussions and the users in those discussions. It is limited to political user-generated news websites that are designed to provide information on political issues and to discuss them online.

Political user-generated news sites (e.g., Newsvine or Guerrilla News Network), although focused on news, share many of the characteristics of the broader set of online communities. They allow for interaction among individuals through activities such as posting content, providing feedback, commenting on the contributions of others, and building social relationships. Online communities are characterized as groups of people who assemble with a shared purpose or interest and are guided by a set of policies that construct a set of conventions and norms for behavior, all facilitated and supported by an online application (Preece & Maloney-Krichmar, 2005).

As a genre of online communities, user-generated news websites have a set of rules and incentives for audience participation, such as reputation or leveling systems, which can motivate news production or discussion through comment posting (Domingo et al., 2008). These systems allow users to assign scores or to rate other users' participation, and also are helpful in guiding users on what content might be worthwhile to consider and how to overcome information overload (Lampe & Resnick, 2004).

Although these systems have proved to be useful for such purposes, feedback and rating mechanisms have another effect in online communities. Users of these sites employ several sources of information to form impressions about other users (Walther, 1992). Social information processing (SIP) theory predicts that, given the limitations of computer-mediated environments in communicating non-verbal cues, it takes longer to form impressions of individuals than in face-to-face (FtF) communication. However, while the limitations on the amount of social information may slow the process of impression formation, these impressions are formed anyway, even in the most cue-sparse online environments. The information provided by rating systems constitutes a source of information that cues certain characteristics about other users, and may facilitate impression formation by adding social cues to the online environment.

Research into online impression formation shows the source of information used to form impressions plays an important role in how those impressions are constructed (Walther & Parks, 2002). Furthermore, findings by Utz (2010) suggest the effect of the source of the information varies, depending on which dimension for judging others (i.e., agentic or communal) is more salient when the impression is in the process of being formed. When communal characteristics are judged, third-party information has a stronger effect.

As mentioned previously, users of user-generated news websites can contribute to these sites in different ways and with different types of content. However, rating systems usually provide information about users' contributing behavior without distinguishing between the types of contribution. Using experimental methods, this study examines how ratings of participation through discussion comments, participation through posting news articles, and participation in general differ in their effects on users' impressions of the credibility of other users, perceptions of the informativeness of the discussion taking place, and intentions to participate in the discussions.

Online Cues and Impression Formation

Research on impression formation in computer-mediated environments can be described as following two different paths. On one hand, there is the cues-filtered-out approach based on theories such as those on social presence (Short, Williams, & Christie, 1976) and information richness (Daft & Lengel, 1984), which argue that online communication is socially impoverished because online environments hinder the formation of impressions (Tanis & Postmes, 2003).

In contrast, theories such as those on social information processing (Walther, 1992) state the process of impression formation still takes place based on the cues available to online users (Tanis & Postmes, 2003). Evidence suggests the type of cues communicated online do influence the development of impressions and, furthermore, such information also has an impact on individual intentions to develop further interaction with others (Tanis & Postmes, 2003). Accordingly, impression formation in online environments, such as discussion forums and online communities, is based on the cues that users make available voluntarily and involuntarily.

However, the source of these cues also influences the process of impression formation. Warranting theory states that third-party generated information about a target can be more reliable for individuals, since it is less prone to being manipulated (Walther & Parks, 2002). For example, if a user claims to be a political expert, this will carry less impression weight than if another person says the user is a political expert. Two dimensions for judging come into play when individuals are forming impressions about others. One is the agentic dimension, which is related to the self-interest of the possessor of that quality; the other is the communal dimension, which has to do with the interests of people with whom the possessor of that quality interacts (Abele, Cuddy, Judd, & Yzerbyt, 2008).

Research by Utz (2010) shows the effect of the information source varies, depending on the dimension that comes into play when judging others. There are findings that suggest other-generated information has an effect when communal characteristics are assessed. When individuals are evaluating someone's communal characteristics, such as reliability or unselfishness, self-generated information does not have a major effect, while other-generated information does.

In this sense, when assessing the effect of cues on the process of impression formation, the origin of the information carried by the cue and the dimension play a role in the process. User-generated news websites are not exempt from this. Users employ the cues they are given to form their impressions of others, and shape their behavior based on these impressions.

Rating systems, which are a common feature of online communities, offer explicit mechanisms for users to provide impression cues of one another. In some cases, a rating system that provides general information about user participation might not report sufficiently on the communal characteristics of a user, while a rating system that distinguishes between different forms of participation might provide more solid information about such characteristics, thereby having a greater effect on the perceptions of individuals.

Reputation Systems

Online reputation systems have several purposes. On one hand, they help users to recognize what information might be valuable to them. They also are useful in providing information about users who have been of greater benefit to others or have been considered as providing better quality contributions to the community (Lampe, 2012, p. 81). These systems are a way to collect information and to communicate how users have judged the behavior and contributions of other users. However, they serve other purposes besides supporting the consumption of information or guiding decisions on social interactions. Since these systems indicate the value certain types of content or behavior have for the community, they also help to shape the norms users share about what types of content or behavior are appropriate in that context. Moreover, the findings suggest these systems are a way in which users build their status online, and can become a tool for assessing users' own contributions (Velasquez, Wash, Lampe & Bjornrud, 2013). In this sense, reputation systems encourage user participation, given the meaning that highly rated content entails for them. Another stream of research has found evidence indicating reputation systems also shape the way in which users learn and are socialized in online communities (Lampe & Johnston, 2005).

However, very few studies have examined the benefits of systems that rate different aspects of content or provide information about different types of participation. One of these is the study by Lampe and Garrett (2007). They examined the advantages of a system that allowed participants to rate news content according to different dimensions of quality. The findings suggest individuals perceived a system that allowed them to evaluate a larger array of news attributes as being more accurate and satisfactory.

Yet, little is known about the possible effects a rating system that distinguishes between types of participation might have on users' perceptions and intentions to participate in the site. Such a study can contribute to current research on the effects of rating systems on participation in user-generated news websites and the process of impression formation in online political settings.

Discussion Comments on Political User-generated News Websites

A paucity of research has examined participation through discussion comments on user-generated news websites, including political news websites. Most studies have focused on the transformation that citizen journalism has brought to traditional journalism (Carpenter, 2008; Domingo et al., 2008; Mitchelstein & Boczkowski, 2009), the characteristics of the news produced by users (Carpenter, 2010), and the effects on users (Kaufhold, Valenzuela & Gil de Zúñiga, 2010) and citizen journalists (Robinson & Deshano, 2011).

However, participation in online political discussions has been researched more extensively. Some studies have found participation in online political discussion contributes to political participation (Nah, Veenstra, & Shah, 2006). Others have focused on how agreement and disagreement take place, and the different factors that influence opinion expression (Kwak, Williams, Wang & Lee, 2005; Wojcieszak & Mutz, 2009), while there is research to suggest online settings can enrich the diversity of viewpoints (Kelly, Fisher & Smith, 2005; Stromer-Galley, 2003).

Some studies have examined how the features of online media might influence participation. For example, Ng & Detenber (2005) conducted an experimental study to examine the impact of synchronicity and civility on users' perceptions and intentions to participate. Their findings suggest synchronous discussions are perceived as more informative and persuasive than asynchronous discussions, although this feature did not seem to have an impact on users' intentions to participate in the discussion.

Ho and McLeod (2008) hypothesized, from a social psychological perspective, that features such as a reduced amount of cues and the anonymity of an online chat room discussion group would increase the likelihood of stating an opinion in an online context compared to discussion in a FtF context. Their findings suggest fear of isolation had a negative effect on willingness to express an opinion, but this effect was tempered by the type of communication to be used. Online interaction reduced the effect of fear of isolation.

Tan, Swee, Lim, Detenber and Alsagoff (2007) examined the role of cues (i.e., language style and expertise) in users' perceptions of discussants and intentions to participate in an online political discussion. The results did not indicate expertise cues had a significant effect on intentions to participate in the discussion. The effect on discussion informativeness and user credibility also was very limited. In general, the results suggest the status cues of participants did not have a significant effect on users' impression and on their intention to participate in the discussion. However, the same study had a limitation in that the level of expertise was manipulated with four types of information, without distinguishing the effect of each one. Specifically, expertise cues were manipulated using the time the discussant has been a community member, the number of posts contributed, the user's level in the community, and rating of the user's comments in the online community. The study did not assess the differences in effect for each of the sources of the cues, ignoring that each of them might provide agentic or communal characteristics.

For that reason, the present study examines how a reputation system that allows for rating different types of users' contributions affects individuals' perceptions of users' credibility, perceptions of discussion informativeness, and individuals' intentions to participate in discussion on a user-generated news website. It assumes a rating system that does not distinguish between types of contributions might not cue communal characteristics, while a system that distinguishes between different types of participation does cue communal characteristics, thereby having more of an effect on users' impressions and perceptions.

According to previous research, when individuals are forming impressions of others, they focus more on information that cues communal characteristics. The communal dimension is more valued and is more important for personal and group perceptions than the agentic dimension, and is related to function in social relations (Wojciszke & Abele, 2008). Furthermore, the findings suggest communal judgments are inferred from cues that provide information about whether an individual's behavior might benefit the perceiver, which will increase attitudes toward that person (Cislak & Wojciszke, 2008).

Therefore, the following research questions are proposed:

  • Research question 1: What is the difference in users' perceptions of the credibility of users who have received ratings on their general participation, their participation in discussion comments, and their participation posting news articles?

  • Research question 2: What is the difference in users' perceptions of the informativeness of discussion in which users who have participated therein have been rated on their participation in posting discussion comments, or their participation in posting news articles, or their participation in general?

  • Research question 3: What is the difference in individuals' intentions to participate in discussions where the discussants have received ratings on their comment posting participation, or their news article participation, or their participation in general?



Method

Participants

A total of 98 undergraduate students (33 women, 65 men) from a major Midwestern university in the United States participated in this study (age: M = 21.16, SD = 2.089). The students were recruited from courses related to the information society, telecommunications policy, and media effects, and each received course credit for taking part in the experiment.

Experimental Design

The stimuli consisted of an article accompanied by a discussion thread. Both were written specifically for this study. The topic was the threats to "net neutrality" posed by the Google and Verizon agreement. Content was presented to participants as part of a user-generated news website. The discussion thread had four posts. The usernames of users who posted comments in the discussion were situated alongside each discussion comment. The rating for each user was expressed through a star rating system (see Appendix A for details on the three different conditions).

The "general ratings" condition had a rating indicating how each of the users was graded on their participation in general, without any information specifying the type of contribution. With the other two conditions, the rating for each of the discussants provided information on how they were graded by other users in terms of their discussion comments or their news articles, respectively. In the "news articles" condition, users had a rating score that indicated how other users judged the quality of the articles they had posted in the past, although they had no stars rating their comments. The "discussion comments" condition had the opposite information. The rating system showed a score for how other users rated the discussion comments posted by the user in question, even though they had zero stars rating their posted articles.

The information that varied across the conditions was the information on rating specificity regarding the type of contribution. Also, the description of the website changed, depending on the condition. For the "general rating" condition, it indicated the users of this website could rate others' contributions through a star rating system, but the system did not provide specific information about the type of contribution. With the "news articles" and "discussion comments" conditions, participants could read a description indicating that users received two separate ratings: one for news articles and another for discussion comments. Comments, usernames, the number of stars for each user, and the article that initiated the discussion were kept consistent (See Appendix A).
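The three display configurations described above can be summarized in a small sketch. The condition names come from the study; the dictionary fields and star values are illustrative (the actual star counts shown to participants appear only in Appendix A):

```python
# Hypothetical encoding of what each experimental condition displayed.
# None = that rating type was not shown at all; an integer = stars shown.
# The star value 4 is illustrative, not taken from the study's stimuli.
RATING_DISPLAY = {
    # One undifferentiated score for participation in general.
    "general ratings": {"general": 4, "articles": None, "comments": None},
    # Separate scores shown; only the news-article rating carries stars.
    "news articles": {"general": None, "articles": 4, "comments": 0},
    # Separate scores shown; only the discussion-comment rating carries stars.
    "discussion comments": {"general": None, "articles": 0, "comments": 4},
}
```

Note how the two specific conditions mirror each other: each shows both rating types, but only one carries a non-zero score.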


Procedure

The study took place in a computer laboratory on campus. Upon arriving at the room, the subjects signed a consent form. Then, they were told, both verbally and in writing, that they would see information on the characteristics of a political user-generated news website, after which they would be able to read some of the content posted by its users. The subjects were assigned randomly to one of the three versions of the site through JavaScript code. No contact between the participants was allowed. They were reminded they could take as much time as they needed to read all of the material carefully.
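The random-assignment step can be sketched as follows. The condition labels are the study's; the function itself is an illustrative stand-in for the client-side JavaScript routine actually used:

```python
import random

CONDITION_NAMES = ["general ratings", "news articles", "discussion comments"]

def assign_condition(rng=random):
    """Assign one participant to a condition with equal probability
    (simple random assignment, as described in the procedure)."""
    return rng.choice(CONDITION_NAMES)

# Example: assign all 98 participants in the sample.
assignments = [assign_condition() for _ in range(98)]
```

Simple random assignment like this yields roughly, but not exactly, equal cell sizes across the three conditions.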

After reading the description of the website, the news article, and the discussion thread, they were taken directly to the questionnaire, which included measurements of the dependent variables, the items that assessed the manipulation, the covariates and the demographics.


Measurements

The perceived credibility scale was adapted from McCroskey & Teven (1999). It comprised three dimensions (i.e., competence, goodwill and trustworthiness), each measured using six 7-point Likert-type scale items. The perceived informativeness measurements were adapted from Ng & Detenber (2005). This scale comprised eight 7-point Likert-type scale items that asked respondents to indicate their level of agreement with statements such as "I learned something new from the discussion" and "The discussion provided explanations of policies/issues." Intention to participate in the discussion used items from Ng & Detenber (2005). The items in this scale asked respondents to indicate their level of agreement with statements such as "At times, while reading, I wanted to participate in the discussion" and "I would like to reply to one or more of the participants in the discussion." This scale included ten 7-point Likert-type scale items.

The covariates included internal and external political efficacy measurements, constructed using four 7-point Likert-type scale items for each dimension of political efficacy. The respondents also were asked about the frequency of their participation in online political discussions (1 = Never to 7 = Very often).

Table 1 shows the means, standard deviations and reliability scores for each of the three dimensions of credibility, discussion informativeness, intention to participate in the discussion, as well as internal and external political efficacy. The reliability scores for all the variables were above α = .70, except for external political efficacy. When the reliability tests were performed for this covariate, the results suggested that one of the items was affecting the reliability score. The item in question was dropped from the index of external political efficacy, somewhat improving its reliability.
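The reliability scores reported in Table 1 are Cronbach's alpha values. A minimal pure-Python sketch of the computation (the toy data in the test are illustrative, not the study's responses):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale, given one list of scores per item
    (all lists covering the same n respondents)."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        # Population variance; the n factor cancels in the alpha ratio,
        # so using sample variance consistently would give the same result.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / var(totals))
```

An item uncorrelated with the rest of the scale drags alpha down, which is why dropping such an item (as done here for external political efficacy) improves the reliability score.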

Manipulation Check

To assess whether participants had detected the experimental manipulation, they were asked a set of questions concerning the characteristics of the site and a user in the discussion. Specifically, the respondents were asked to select the appropriate answer to the following questions: "In The Constituents, users can rate contributions by other users." "In The Constituents, the more stars users have, the better they have been rated by other users." "What is RisingStar2000's rating of discussion comments only?" "Users in The Constituents get one reputation score for posting discussion comments and another for posting opinion articles."

Then, an additive index was constructed based on the answers to those questions. The results of an ANOVA comparing the means of the "general ratings" condition (M = 3.6, SD = 1.1), the "news article" condition (M = 4.7, SD = 0.59) and the "discussion comments" condition (M = 5.4, SD = 0.89) revealed a significant difference (F(2, 95) = 32.17, p = .005). These results indicate the participants detected the manipulation.
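The manipulation check above is a standard one-way ANOVA over the additive index. A short pure-Python function reproduces the F statistic and degrees of freedom (the group scores in the test are illustrative, not the study's data):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic over a list of groups of scores.
    Returns (F, df_between, df_within)."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total

    # Between-groups sum of squares: group sizes times squared deviations
    # of group means from the grand mean.
    ss_between = sum(
        len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups
    )
    # Within-groups sum of squares: squared deviations from each group mean.
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )

    df_between, df_within = k - 1, n_total - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within
```

With the study's 98 participants split into three groups, the degrees of freedom come out to (2, 95), matching the reported F(2, 95) values.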


Results

To determine which covariates were significantly related to the dependent variables and could potentially be used in the analyses, correlation analyses were run between the three covariates and the three dimensions of credibility, discussion informativeness and intention to participate (Table 2). Where a significant correlation existed, a stepwise regression test was performed between the suggested covariate and the dependent variable. In this way, the probability of any overestimation of the ANCOVA model was reduced.


Credibility

The research question that asked about a difference in the effect on credibility among the three conditions was tested using an ANOVA for each of the dimensions of credibility (see all the ANOVA tables in Appendix B).

For competence, the results indicated there was no significant difference among the three conditions (F(2, 95) = 1.18, p = .309) (Table 3). The difference in goodwill, another dimension of credibility, was not significant either (F(2, 95) = 0.512, p = .601) (Table 4); nor was the difference in trustworthiness (F(2, 95) = 1.13, p = .327) (Table 5).

In summary, the results showed users were not perceived as more credible in any of the dimensions of this concept when they received a rating for their participation in discussion comments, or a rating for their participation with news articles, or a general rating for their overall participation. Therefore, no difference in the effect of the rating system was found regarding levels of user credibility.


Perceived Informativeness

The results of the ANOVA that tested the question about the effect of the different rating system conditions on perceived discussion informativeness showed a significant difference (F(2, 95) = 6.48, p = .002) (Table 6). An a priori simple contrast analysis (Table 7) was performed comparing the "discussion comments" rating system condition to the other two conditions. The results showed the "general ratings" condition (M = 4.14, SE = .14) did not differ significantly from the "discussion comments" condition (M = 4.17, SE = .14) with regard to participants' impressions of discussion informativeness.

The difference between the "news article" rating system condition (M = 4.77, SE = .13) and the "discussion comments" rating system condition (M = 4.17, SE = .14) was found to be significant. This difference showed the perceptions of discussion informativeness were significantly higher in the "news article" rating system condition.
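The a priori simple contrasts used here compare one condition's mean against another while borrowing the pooled error term from the omnibus ANOVA, rather than running isolated t-tests. A sketch under illustrative data (not the study's scores):

```python
import math

def ms_within(groups):
    """Pooled within-groups mean square (the omnibus ANOVA error term)."""
    ss = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df = sum(len(g) for g in groups) - len(groups)
    return ss / df

def simple_contrast_t(g1, g2, groups):
    """t statistic for a simple contrast between two condition means,
    using the pooled error term computed from all conditions."""
    se = math.sqrt(ms_within(groups) * (1 / len(g1) + 1 / len(g2)))
    return (sum(g1) / len(g1) - sum(g2) / len(g2)) / se
```

Using the pooled error term gives the contrast more degrees of freedom (and thus more power) than a two-group t-test that ignores the third condition.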

Intention to Participate

In the analysis of the research question regarding differences in intentions to participate in the discussion among the three conditions, the correlation analysis suggested internal political efficacy and frequency of participation in online political discussions should be used in the ANCOVA model for this dependent variable, but the results of a stepwise regression indicated only internal political efficacy was appropriate. An ANCOVA was run with internal political efficacy as the covariate. The results revealed a significant difference in intentions to participate (F(2, 95) = 5.52, p = .002), when correcting for internal political efficacy (Table 8).

An a priori simple contrast comparing the "discussion comments" rating system condition to the other two conditions assessed these differences. The results (Table 9) showed intentions to participate were significantly lower in the "general ratings" condition (M = 3.19, SD = .811) than in the "discussion comments" condition (M = 3.86, SD = 1.11) when correcting for internal political efficacy. Although intentions were slightly higher in the "discussion comments" condition (M = 3.86, SD = 1.11) than in the "news article" condition (M = 3.79, SD = 1.09), this difference was not significant.


Discussion

Based on the notion that rating systems act as sources of third-party information used to develop impressions about users, this study examined the way in which rating systems that distinguish between types of participation influence users' perceptions and intentions to participate in discussions on political user-generated news websites. Although the findings suggest ratings of different types of participation have an effect on users' perceptions, more research is needed to identify whether participation in the form of posting news articles or in the form of posting discussion comments cues more communal characteristics. However, the results might suggest that, when forming impressions about others, rating systems can cue communal characteristics, depending on the type of participation the systems rate. The outcome of this study supports the idea that communal third-party information carries more weight in online impression formation processes.

Although the results suggest no significant difference in the credibility perceptions facilitated by any of the rating systems evaluated, these findings do not contradict the predictions of social information processing (SIP) theory: the absence of a difference between the versions of the rating system can be explained by the rate at which cues can be communicated through computer-mediated channels.

Following SIP theory, the lack of an effect on credibility perceptions across the three conditions might stem from the fact that the difference between forms of participation was not perceptible to participants during such a limited exposure. However, evidence already exists to suggest cues influence participation over time (Velasquez, 2012). In this sense, the findings of the present study might be limited by the specific conditions of the experiment and by how much social information computer-mediated settings convey in a short exposure.

The results also revealed a difference between the "news article" rating system condition and the other two conditions, although the "discussion comments" rating system did not differ from the "general rating" system condition. One possible explanation is that the manipulation primed individuals to attach more importance to participation through posting news articles. How the news website was described, and the ways users could participate in it, might have suggested to participants that posting articles was the main and most important contribution to the site: without articles, there would be no other kind of participation.

On this point, in the condition where users received ratings for participation via posting news articles, those users were perceived as more informed, because they were expected to provide more and better support for the arguments they made with respect to the articles. When assessing the informativeness of a discussion, participants then transferred (Metzger, Flanagin & Medders, 2010) their perceptions of individuals' informativeness onto the discussions in which those individuals participated, perceiving those discussions as more informative than the discussions in the other two conditions. Another possibility is that posting articles cues more communal characteristics than the forms of participation rated in the other two conditions.

The results support the idea that information on the type of participation played a role in users' intentions to participate in the discussion. However, they did not show a difference between the two specific participation rating system conditions, which might indicate the need to better identify the type of participation that characterizes communal users. Although the findings suggest that specifying the type of participation made a difference, there were not enough elements to conclude which of the two types communicated more communal characteristics about users who participated in the discussions.

Also, the evidence found in this study is limited to the conditions of the experiment, and other methodologies could complement these findings. Experiments that take the temporal dimension of these interactions into account can provide more evidence on how different versions of a rating system affect the perceptions and behavior of those who use sites of this type.

Future studies should do more to refine and improve the operationalization of communal characteristics cued by rating systems. Interviews and focus groups might provide more useful data for constructing a rating system of this nature.

Finally, this study might have given prominence to participation through posting articles. More control over the characteristics of the news website could help verify how participation dynamics and the different types of participation play a role in users' impressions and behavior.


References
Abele, A. E., Cuddy, A. J. C., Judd, C. M., & Yzerbyt, V. Y. (2008). Fundamental dimensions of social judgment. European Journal of Social Psychology, 38, 1063-1065. doi: 10.1002/ejsp.574

Carpenter, S. (2008). How online citizen journalism publications and online newspapers utilize the objectivity standard and rely on external sources. Journalism & Mass Communication Quarterly, 85(3), 531-548. doi: 10.1177/107769900808500304

Carpenter, S. (2010). A study of content diversity in online citizen journalism and online newspaper articles. New Media & Society, 12(7), 1064-1084. doi: 10.1177/1461444809348772

Cislak, A., & Wojciszke, B. (2008). Agency and communion are inferred from actions serving interests of self or others. European Journal of Social Psychology, 38(7), 1103-1110. doi: 10.1002/ejsp.554

Daft, R. L. & Lengel, R. H. (1984). Information richness: A new approach to managerial behaviour and organizational design. Research in Organizational Behaviour, 6, 191-233.

Domingo, D., Quandt, T., Heinonen, A., Paulussen, S., Singer, J. B., & Vujnovic, M. (2008). Participatory journalism practices in the media and beyond. Journalism Practice, 2(3), 326-342. doi:10.1080/17512780802281065

Goode, L. (2009). Social news, citizen journalism and democracy. New Media & Society, 11(8), 1287-1305. doi:10.1177/1461444809341393

Ho, S. S., & McLeod, D. M. (2008). Social-psychological influences on opinion expression in face-to-face and computer-mediated communication. Communication Research, 35(2), 190-207. doi: 10.1177/0093650207313159

Kaufhold, K., Valenzuela, S., & Gil de Zúñiga, H. (2010). Citizen journalism and democracy: How user-generated news use relates to political knowledge and participation. Journalism & Mass Communication Quarterly, 87(3-4), 515-529. doi: 10.1177/107769901008700305

Kelly, J. W., Fisher, D., & Smith, M. (2006). Friends, foes, and fringe: Norms and structure in political discussion networks. Proceedings of the 2006 International Conference on Digital Government Research, dg.o '06 (pp. 412-417). New York, NY: ACM.

Kwak, N., Williams, A. E., Wang, X., & Lee, H. (2005). Talking politics and engaging politics: An examination of the interactive relationships between structural features of political talk and discussion engagement. Communication Research, 32(1), 87-111. doi: 10.1177/0093650204271400

Lampe, C. (2012). The role of reputation systems in managing online communities. In H. Masum & M. Tovey (Eds.), The reputation society: How online opinions are reshaping the offline world. Cambridge, MA: The MIT Press.

Lampe, C. & Garrett, R. K. (2006). It's all news to me: The effect of instruments on ratings provision. Presented at the 40th Annual Hawaii International Conference on System Sciences (HICSS 2007). doi: 10.1109/HICSS.2007.308

Lampe, C., & Resnick, P. (2004). Slash(dot) and burn: Distributed moderation in a large online conversation space. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '04 (pp. 543-550). New York, NY: ACM.

Lampe, C. & Johnston, E. (2005). Follow the (slash) dot: Effects of feedback on new members in an online community. Proceedings of the 2005 International ACM SIGGROUP Conference on Supporting Group Work, GROUP '05 (pp. 11-20). New York, NY: ACM.

Mitchelstein, E., & Boczkowski, P. J. (2009). Between tradition and change: A review of recent research on online news production. Journalism, 10(5), 562-586. doi: 10.1177/1464884909106533

Metzger, M. J., Flanagin, A. J., & Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60(3), 413-439. doi: 10.1111/j.1460-2466.2010.01488.x

McCroskey, J. C. & Teven, J. J. (1999). Goodwill: A reexamination of the construct and its measurement. Communication Monographs, 66(1), 90-103. doi: 10.1080/03637759909376464

Nah, S., Veenstra, A. S., & Shah, D. V. (2006). The Internet and anti-war activism: A case study of information, expression, and action. Journal of Computer-Mediated Communication, 12(1). doi: 10.1111/j.1083-6101.2006.00323.x

Ng, E., & Detenber, B. (2005). The impact of synchronicity and civility in online political discussions on perceptions and intentions to participate. Journal of Computer-Mediated Communication, 10(3). doi: 10.1111/j.1083-6101.2005.tb00252.x

Preece, J., & Maloney-Krichmar, D. (2005). Online communities: Design, theory, and practice. Journal of Computer-Mediated Communication, 10(4). doi: 10.1111/j.1083-6101.2005.tb00264.x

Robinson, S., & Deshano, C. (2011). Citizen journalists and their third places. Journalism Studies, 12(5), 642-657. doi: 10.1080/1461670X.2011.557559

Short, J., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. London, England: John Wiley.

Stromer-Galley, J. (2003). Diversity of political conversation on the Internet: Users' perspectives. Journal of Computer-Mediated Communication, 8(3). doi: 10.1111/j.1083-6101.2003.tb00215.x

Tan, K. W. P., Swee, D., Lim, C., Detenber, B. H. & Alsagoff, L. (2007). The impact of language variety and expertise on perceptions of online political discussions. Journal of Computer-Mediated Communication, 13(1), 76-99. doi: 10.1111/j.1083-6101.2007.00387.x

Tanis, M., & Postmes, T. (2003). Social cues and impression formation in CMC. The Journal of Communication, 53(4), 676-693. doi: 10.1111/j.1460-2466.2003.tb02917.x

Utz, S. (2010). Show me your friends and I will tell you what type of person you are: How one's profile, number of friends, and type of friends influence impression formation on social network sites. Journal of Computer-Mediated Communication, 15(2), 314-335. doi: 10.1111/j.1083-6101.2010.01522.x

Velasquez, A. (2012). Social media and online political discussion: The effect of cues and informational cascades on participation in online political communities. New Media & Society, 14(8), 1286-1303. doi: 10.1177/1461444812445877

Velasquez, A., Wash, R., Lampe, C., & Bjornrud, T. (n.d.). Latent users in an online user-generated content community. Computer Supported Cooperative Work (CSCW), 1-30. doi: 10.1007/s10606-013-9188-4

Walther, J. B. (1992). Interpersonal effects in computer-mediated interaction: A relational perspective. Communication Research, 19(1), 52-90.

Walther, J. B., & Parks, M. R. (2002). Cues filtered out, cues filtered in: Computer-mediated communication and relationships. In M. L. Knapp & J. A. Daly (Eds.), Handbook of interpersonal communication (3rd ed., pp. 529-563). Thousand Oaks, CA: Sage.

Wojcieszak, M. E., & Mutz, D. C. (2009). Online groups and political discourse: Do online discussion spaces facilitate exposure to political disagreement? Journal of Communication, 59(1), 40-56.

Wojciszke, B., & Abele, A. E. (2008). The primacy of communion over agency and its reversals in evaluations. European Journal of Social Psychology, 38(7), 1139-1147. doi: 10.1002/ejsp.549

Appendix A

Appendix B

