| 
Ch6_pt9

Developing Evaluation Tools for Participation

 

Evaluation is a very important part of any project. I think this part would benefit from a deeper approach, both theoretical and practical. (CR) (I agree. SB)

 

    If you want to develop an evaluative tool for measuring participation at your institution, you need to be able to articulate your goals (either broadly or specifically) and identify which mechanics of the visitor (and staff SB) experience can be measured against those goals. Here are some things to consider that make participatory projects distinctive when you develop evaluation tools.

    Many participatory projects introduce opportunities for audiences and participants to learn skills and behaviors that are not within the traditional set of institutionally-evaluated metrics. If your standard summative evaluation of an exhibition is about the extent to which visitors have learned specific content elements, switching to an evaluative tool that allows you to assess the extent to which visitors have exercised creative, dialogic, or collaborative functions can be quite a leap. There are several documents out there, including Confronting the Challenges of Participatory Culture: Media Education for the 21st Century and Museums, Libraries, and 21st Century Skills, that list participatory skills and explain how each has relevance to 21st century learning environments. These documents may help you articulate the skills you hope participants will learn, as well as help you identify how these skills relate to institutional goals overall.

    Many participatory projects have been evaluated solely from the perspective of participants' experiences and learning, ignoring the needs of the broader public and institutional audiences. For each project, you should be able to articulate goals not only for participants who actively collaborate with the institution, but also for the staff who manage the process, and, most importantly, for the audience that consumes the participatory product (and also for the institution itself. SB). (great formulation, I'd put it in bold. CR). In some cases, the audience evaluation may be identical to a standard program evaluation (if, for example, a participatory process has created a typical exhibit product). But in cases where the audience and participants are the same people (for example, in on-the-floor comment boards and participatory opportunities), the audience evaluation will differ. In addition to evaluating the experience for each group, you may want to evaluate the extent to which certain individuals are inclined or disinclined to participate in different ways. Are there any similarities between the visitors who elect to write a comment and those who choose to read silently? Can you identify your creators, critics, and consumers, and does altering the design strategy alter the profiles of these groups?

    These kinds of questions imply iterative evaluation processes. Incremental and adaptive research is particularly important if your project is itself process-based. (It is useful to make the distinction between process-based and content-based evaluation, as you indicate. Two very different worlds, both of which could be applicable in the evaluation of participatory tools. Might be useful to create a little matrix comparing evaluation for process vs content. SB) If you are going to work with community members for three years to design a new program, it's not useful to wait until the end of the three years to evaluate the overall project. You need to be able to evaluate as you go and, ideally, to find ways to involve participants in the evaluation. This can be messy and can involve changing design techniques or experimental strategies along the way, which, as noted in the stories of The Tech Virtual and Wikipedia Loves Art, can lead to participant frustration. But these projects are not static; they evolve over time. And continual evaluation and redirection can help complex or new projects stay aligned to their ultimate goals while making the project work for everyone involved.

    Continual evaluation can also give you a useful feedback loop that provides new design insights that can ultimately lead to better visitor experiences. Consider the humble comment board. If you have one in your institution, consider changing the material provided for visitors to write on or with and see how it changes the content and volume of comments produced. Change the prompt questions, add a mechanism by which visitors can easily write "response cards" to each other, or experiment with different strategies for staff curation/comment management. For example, in the Advice exhibition, we learned (through evaluation) that participants wrote very different pieces of advice with markers on a fake bathroom stall than on post-its on gallery walls. We learned that using multi-colored post-its of different sizes, instead of just one type, created a more visually appealing experience for the audience. We learned that people were more likely to respond to each other's questions if the questions were marked in some way (in our case, written on bigger post-its). If we had only offered one of these options or hadn't done the research, we might not have learned how different contributory frameworks affect the extent to which visitors comment on each other's work, ask questions, and make declarative statements. (Useful observations. SB)
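
    For those inclined to tally such comparisons systematically, the analysis can be as simple as counting coded contributions by design condition. The sketch below (in Python, with made-up conditions, coding categories, and data rather than the actual Advice records) shows the basic shape of that kind of tally.

        from collections import Counter

        # Each record pairs a board design condition with the coded type of
        # contribution. Conditions and categories are illustrative placeholders.
        coded_comments = [
            ("single-color post-its", "declarative statement"),
            ("single-color post-its", "declarative statement"),
            ("multi-size post-its", "question"),
            ("multi-size post-its", "response to another visitor"),
            ("multi-size post-its", "declarative statement"),
            ("fake bathroom stall", "declarative statement"),
        ]

        volume_by_condition = Counter(condition for condition, _ in coded_comments)
        mix_by_condition = Counter(coded_comments)

        for (condition, kind), count in sorted(mix_by_condition.items()):
            share = count / volume_by_condition[condition]
            print(f"{condition:22s} {kind:28s} {count:2d} ({share:.0%} of that condition)")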

    Finally, and most importantly, find ways to measure not just the outputs of the participatory project but also its outcomes. We are very used to measuring quantifiable outputs--how many people visited, how many comments were logged, how many sculptures were made. But how does the number of comments on a comment board relate to the impact of the comment board, both on those who write comments and on those who read them? While it can be time-consuming, adopting evaluative techniques that focus on the specific content of participatory contributions or actions can yield more useful information about the impact of projects. For example, if you embark on a personalization project with the goal of creating more long-term, deep relationships with visitors, you must be able to measure not just how many people use the personalization mechanism, but also how frequently those visitors return, what they do when they return, and how well staff members are able to identify and be responsive to frequent visitors. If true relationships are being formed, staff and participants will know each other's names and express an interest in getting to know each other better in the context of the institution.
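
    As a concrete illustration of the gap between output and outcome in the personalization example, here is a minimal sketch that takes a hypothetical visit log and computes a repeat-visit rate for personalization users. The data and field names are invented; a real study would also look at what returning visitors do and how staff respond to them.

        from collections import defaultdict
        from datetime import date

        # Hypothetical visit log: (visitor id, visit date) for people who used the
        # personalization mechanism. Real data might come from card swipes or logins.
        visits = [
            ("v001", date(2009, 3, 1)),
            ("v001", date(2009, 4, 12)),
            ("v001", date(2009, 6, 3)),
            ("v002", date(2009, 3, 15)),
            ("v003", date(2009, 5, 2)),
            ("v003", date(2009, 5, 30)),
        ]

        visits_per_person = defaultdict(list)
        for visitor, day in visits:
            visits_per_person[visitor].append(day)

        total_users = len(visits_per_person)                              # the output
        returners = [v for v, days in visits_per_person.items() if len(days) > 1]
        return_rate = len(returners) / total_users                        # one outcome

        print(f"{total_users} personalization users; {len(returners)} returned at least once ({return_rate:.0%})")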

    Science Buzz, an online social network managed by the Science Museum of Minnesota, was the subject of an extensive evaluation study by researchers at Michigan State University as part of the IMLS Take Two project.  A National Science Foundation-funded project, Science Buzz is a multi-author blog community website and exhibit that invites museum staff and outside participants to author, share, and comment on articles related to contemporary science news and issues. Science Buzz also includes physical museum kiosks in several science centers throughout the US, but for the purposes of the Take Two research, researchers focused solely on the discourse on the Science Buzz website.

    Science Buzz is a complicated beast. From 2006 to 2008, staff and visitors posted and commented on over 1,500 topics, and the blog enjoys high traffic from an international audience. But the Take Two project wanted to go beyond basic outputs (how many posts, comments, registered users, and views) to tackle a few more fundamental and profound questions. They focused their study on four efforts: describing the online community that uses Science Buzz, describing the nature of discourse on the site, evaluating the extent to which Science Buzz supports "knowledge building" activities for users, and assessing how the practice of managing Science Buzz affects institutional culture and practice. The first two of these are descriptive and are focused on better understanding the user profile and the dialogic ways that people engage with each other on the website. The last two are about impact outcomes both for participants and for staff. Unfortunately, there was no research question about impact on non-participating (non-actively participating, as they are still consumers, a different level of engagement. Might benefit from a scale of engagement, derived from those used in public participation in public infrastructure. One model, from IAP2, includes the range from inform, consult, involve, collaborate, empower: http://www.iap2.org/associations/4748/files/IAP2%20Spectrum_vertical.pdf. SB) audience members, those who consume but do not contribute content to Science Buzz.

    To evaluate the "knowledge building" impact of Science Buzz, the Take Two researchers coded individual statements in blog posts and comments for 20% of posts with fifteen comments or more, grouping them under "building an argument," "exploring new ideas," "building a writer's identity," and "building a community identity." By coding individual statements, researchers were able to spot patterns in the rhetoric used on the site and to identify shifts over the course of a given conversation that might represent individual and/or interpersonal knowledge-building. They were also able to explore the extent to which Science Buzz had fulfilled individuals' personal learning and argumentative goals versus the institutional interest in building a community around science learning.
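
    For readers curious about the mechanics, the sampling and tallying side of such a coding study can be sketched in a few lines. The example below uses hypothetical post records and assumes the statements have already been coded by a human reader (the hard part, not modeled here); it simply selects posts with fifteen or more comments, samples 20% of them, and counts coded statements per category.

        import random
        from collections import Counter

        # Hypothetical post records: comment counts plus statements already coded
        # by a human reader into the four categories named above.
        posts = {
            "post-a": {"comments": 22, "codes": ["building an argument", "exploring new ideas"]},
            "post-b": {"comments": 9,  "codes": ["building a writer's identity"]},
            "post-c": {"comments": 41, "codes": ["building a community identity", "building an argument"]},
            "post-d": {"comments": 17, "codes": ["exploring new ideas", "exploring new ideas"]},
        }

        # Select posts with fifteen or more comments, then sample 20% of them.
        eligible = [post_id for post_id, post in posts.items() if post["comments"] >= 15]
        sample = random.sample(eligible, max(1, round(0.2 * len(eligible))))

        # Tally coded statements by category across the sampled posts.
        tally = Counter(code for post_id in sample for code in posts[post_id]["codes"])
        for category, count in tally.most_common():
            print(f"{category}: {count} coded statements")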

    It's no accident that the Take Two team worked with researchers of rhetoric to design an evaluative tool for Science Buzz. The evaluation of participatory projects may have more in common with research from outside the world of cultural institutions than with research from within it. Whether your project requires support from the field of motivational psychology, community development, or conflict resolution, there are researchers in these related fields whose work can inform participatory projects in museums. Particularly when it comes to studying online participation, there is a growing volume of industry and academic research available on everything from who participates to how and why. By partnering with researchers from other fields, museum evaluators can join participatory, collaborative learning communities to the mutual benefit of all parties.

    Because participation is diverse, no single evaluative technique is best suited to its study. In the case of Take Two, the rhetoric-based approach was by no means perfect, nor the only way to study Science Buzz. It treated users across the site as anonymous participants, ignoring the heavy and unique role of staff voices on the blog as well as the frequency with which users returned to make subsequent arguments or posts across different topics on the site. The Take Two research also did not track the profile of the non-registered consumer versus the registered consumer versus the registered commenter versus the registered author, all of which could be seen as rungs on the ladder of participation.
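
    Tracking those rungs does not require elaborate tools; a rough sketch of sorting users into the four tiers named above might look like the following. The user records and field names here are hypothetical assumptions, not drawn from the actual Science Buzz data model.

        from collections import Counter

        # Hypothetical user records; the field names are assumptions.
        users = [
            {"registered": False, "comments": 0, "posts": 0},
            {"registered": True,  "comments": 0, "posts": 0},
            {"registered": True,  "comments": 5, "posts": 0},
            {"registered": True,  "comments": 2, "posts": 3},
        ]

        def rung(user):
            """Place a user on one rung of the participation ladder described above."""
            if not user["registered"]:
                return "non-registered consumer"
            if user["posts"] > 0:
                return "registered author"
            if user["comments"] > 0:
                return "registered commenter"
            return "registered consumer"

        profile = Counter(rung(user) for user in users)
        for tier, count in profile.items():
            print(f"{tier}: {count} users")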

    Fortunately, informal research activities like research staff blogs and Beck Tench's regular public evaluation presentations have supplemented the formal Take Two research to examine some of these questions. But the published Take Two research focuses explicitly on building science knowledge through online discourse, and the researchers picked the evaluative tools they believed would help them address these questions in the Science Buzz context. Part of the challenge of the project was simply developing the analytic tools to study a familiar question (science knowledge-building) in a new research environment (an online social network). Unfortunately and somewhat ironically, the Take Two research team was not able to be as flexible and transparent with their work as the projects they studied, but the research was a first step in the right direction. The team focused on mission-driven questions, found reasonable tools to answer those questions, rigorously applied those tools, and published the results. I am hopeful that many future teams will approach the question of evaluation with a comparable level of rigor when considering what and how to measure in participatory projects. (Another interesting parallel model was work for Minerals Management in Alaska, evaluating engagement by local communities in offshore drilling policy: RESEARCHING TECHNICAL DIALOGUE WITH ALASKAN COASTAL COMMUNITIES: ANALYSIS OF THE SOCIAL, CULTURAL, LINGUISTIC, AND INSTITUTIONAL PARAMETERS OF PUBLIC/AGENCY COMMUNICATION PATTERNS. See: http://www.mms.gov/alaska/reports/2009rpts/2009_MMS_tech_dialogue/2009_MMS_tech_dialogue.pdf SB)

(After reading this section, wondering if you might consider a design for the book that highlights all case studies and specific institutional examples in a separate font, in boxes, or with some other technique. It would help the reader move through all the text and hold onto the key ideas through the details of the examples. SB)

 

Continue to the next section, or return to the outline.

 
