Chapter 6, Part 2: Evaluating Participatory Projects


THIS IS A SECTION OF Chapter 6: Pitching, Managing, and Evaluating Participatory Projects.  PLEASE FEEL FREE TO EDIT THIS PAGE WITH YOUR COMMENTS AND QUESTIONS.

 

Evaluating Participatory Projects

 

How can you say with confidence that a participatory project will enable you to carry out some part of your mission or accomplish a program goal?  When it comes to traditional institutional programs and activities, we're comfortable making statements about their utility for two reasons.  First, we are familiar with these activities--what they cost, what they do, and what kind of impact we perceive them to have.  Second, traditional activities have (hopefully) been evaluated at some point, so there is some hard data behind our presumed outcomes.  

 

When it comes to experimental or new activities, familiarity isn't an option.  In fact, many of these new initiatives are so unfamiliar that even reasoned arguments about their potential cannot overcome fear of the unknown.  In these cases, people look to evaluation as a way to "prove" projects' value and outcomes.  But evaluation is not a silver bullet.  There are too many different ideas of what "success" looks like for institutions, participants, and audiences to conclusively evaluate a project's value.  For some people, the evaluation question is "can you demonstrate that this participatory project will bring in more paying visitors?"  For others, it's more important that a project foster a particular kind of engagement or learning.  Here's the ugly truth: if your institution's leadership is opposed to visitor participation, no amount of evaluation will change their minds.  In some art museums, participatory spaces that attract huge crowds of energized and dedicated art-makers are denigrated as "petting zoos" by those with a more controlling curatorial mindset.  There will always be history museum leaders who see multi-vocal exhibitions as overly relativistic, or science museum directors who don't believe that engaging visitors in the process of doing science is as valuable as showing them the accomplishments of scientific greats throughout history.

 

This doesn't mean that evaluation is not important or useful.  On the contrary, evaluation is an essential (and often neglected) practice that allows us to assess and improve projects.  Lack of good evaluation of participatory projects is probably the greatest contributing factor to their slow acceptance and use in the museum field.  Currently, many participatory projects are framed as experiments and are not integrated into standard project cycles that involve full formative and summative evaluation.  These projects need to be evaluated, but they also require new evaluative techniques specific to the unique nature of participation.  They introduce new evaluative questions.  What does impact look like when visitors are not only consuming content but helping to create it?  How do we evaluate both the products of participation and the skills that participants learn in the process?  If a project is co-designed by an institution and a community group for a general audience, whose vision of success should the project be evaluated against?

 

To answer these questions, we need to start with the goals of the institution and the projects rather than adapting pre-existing evaluative techniques.  In theory, every evaluation tool is developed to measure the extent to which a project has achieved its goals, but in the case of familiar project types like exhibitions, goals are often written interchangeably from project to project, and evaluators end up measuring new content in familiar contexts.  Participatory projects present different contexts.  Recall the wide-ranging metrics the Wing Luke Asian Museum uses to evaluate the extent to which it is achieving its community mission.  When it comes to their participants, they assess the extent to which "people contribute artifacts and stories to our exhibits."  With regard to audience members, they consider whether "constituents are comfortable providing both positive and negative feedback" and whether "community members return time and time again."  Institutionally, they evaluate staff "relationship skills" and the extent to which "young people rise to leadership."  And with regard to broader institutional impact, they even look at the extent to which "community responsive exhibits become more widespread in museums."  Some of these may sound entirely qualitative, but quantitative measures could be gleaned from each one.  For example, the metric around both positive and negative visitor comments reflects the museum's specific interest in supporting dialogue, not just receiving compliments.  Many museums solicit comments from visitors, but I suspect that few, if any, code those comments for diversity of opinion. 
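
To make this concrete, here is a minimal sketch of how coded visitor comments could be tallied for diversity of opinion.  The categories and comments are hypothetical, not the Wing Luke Asian Museum's actual data or coding scheme:

from collections import Counter

# Hypothetical hand-coded visitor comments: each comment card is assigned
# an opinion code by a staff member who reads it.
coded_comments = ["positive", "positive", "negative", "mixed", "positive", "negative"]

counts = Counter(coded_comments)
total = sum(counts.values())

# Share of each opinion category; a board dominated by one category
# suggests compliments (or complaints) rather than dialogue.
for code, n in counts.items():
    print(f"{code}: {n} ({n / total:.0%})")

# A simple diversity signal: the share of the most common category.
# Lower values indicate a wider spread of opinion.
dominance = max(counts.values()) / total
print(f"dominant-category share: {dominance:.0%}")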

 

If one of your goals is to, for example, become a "safe space for difficult conversations," how would you evaluate your ability to do so?  The starting point for most institutions is to host exhibits or programs on provocative topics likely to stir up "difficult conversations."  But offering an exhibition about AIDS or racism ensures neither dialogue nor the feeling of a safe space.  Programs like the Science Museum of Minnesota and Levine Museum of the New South's talking circles, or the Living Library project, provide explicit, facilitated venues for these conversations, which allows the institutions to more fully evaluate the emotional impact of dialogue relative to the content at hand.  Museums interested in this goal might also evaluate their projects by coding the frequency, length, content, and tone of visitors' comments on talk-back boards, or they might follow up with visitors by phone to ask specifically about what kind of dialogue (if any) the museum experience sparked. 

 

If you have clear goals for your project, you can likely derive effective evaluative tools from those goals.  But many participatory projects, especially small experiments, don't start with goals; they start with a hunch or an idea.  Many people don't know what the possible goals are for participation, because they see the techniques as new and largely untested in the museum environment.  While I hope the frameworks and case studies in this book have helped you better articulate the participatory goals that might drive your next project, I don't want to discourage you from trying something because you think it might be fun or valuable, even if you can't articulate exactly how at the outset.  This is how experiments get started: with broad goals, hazy guesses, and a lot of trying things out.

 

If you want to freely experiment with small participatory endeavors, it's useful to develop a framework in which the experiments can be evaluated relative to each other over time.  At the Museum of Life and Science (MLS), Beck Tench developed a honeycomb diagram to display the seven core goals that she felt the museum was trying to achieve with its forays into social participation.  For each experiment, she and other staff members shade the cells of the honeycomb that represent that experiment's intended goals.  While this is not a robust evaluative technique for assessing the extent to which a project has accomplished its goals, it gives the staff at MLS a shared language for contextualizing the goals they might apply to participatory projects, and those goals can then be used to develop more exhaustive evaluative tools to measure the extent to which they were met.  Like many participatory projects, Tench's honeycomb tool is adaptive.  It is a simple framework that can be applied to an evolving project, and it is accessible and usable by team members at all levels of evaluation experience and expertise, including visitors and community participants. 
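
A honeycomb-style framework can be kept as something as lightweight as a grid of experiments against shared goals.  The sketch below illustrates the idea only; the goal names and experiments are invented, not Tench's actual honeycomb:

# Hypothetical shared goals for social participation experiments.
GOALS = ["dialogue", "skill-building", "community trust", "repeat visitation",
         "staff learning", "content creation", "outreach"]

# Each experiment "shades" the goals it intends to address.
experiments = {
    "flickr photo hunt": {"content creation", "outreach"},
    "gallery comment wall": {"dialogue", "repeat visitation"},
    "teen co-design program": {"skill-building", "community trust", "staff learning"},
}

# Print a simple coverage grid so experiments can be compared over time.
header = "experiment".ljust(24) + "".join(g[:4].ljust(5) for g in GOALS)
print(header)
for name, shaded in experiments.items():
    row = "".join(("X" if g in shaded else ".").ljust(5) for g in GOALS)
    print(name.ljust(24) + row)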

 

Developing Evaluation Tools for Participation

 

If you want to develop an evaluative tool for measuring participation at your institution, you need to be able to articulate your goals (either broadly or specifically) and identify which mechanics of the visitor experience can be measured against those goals.  Here are some things to think about that make participatory projects unique when you develop evaluation tools.

 

Make sure you are measuring for the unique behaviors and outcomes that participatory projects present.  For example, many participatory projects introduce opportunities for audience members and participants to learn skills and behaviors that are not within the traditional set of institutionally-evaluated metrics.  If your standard summative evaluation of an exhibition is about the extent to which visitors have learned specific content elements, switching to an evaluative tool that allows you to assess the extent to which visitors have exercised creative, dialogic, or collaborative skills is quite a leap.  There are several documents available, including Confronting the Challenges of Participatory Culture: Media Education for the 21st Century and Museums, Libraries, and 21st Century Skills, that list participatory skills and explain how each is relevant to 21st century learning environments.  These documents may help you articulate the skills you hope participants will learn, as well as help you identify how those skills relate to institutional goals overall.

 

Set evaluative goals not just for participants, but for all members of the participatory experience.  Many participatory projects are currently evaluated solely from the perspective of participants' experiences and learning.  However, the institutional and audience experiences of these projects are equally important.  For each project, you should be able to articulate goals not only for the participants who actively collaborate with the institution, but also for the staff who manage the process and, most importantly, for the audience that consumes the participatory product.  In some cases, the audience evaluation may be identical to a standard program evaluation (if, for example, a participatory process has created a fairly standard exhibit product), but in cases where the audience and participants are the same people (for example, at on-the-floor comment boards and participatory opportunities), the audience evaluation will differ.  In addition to evaluating the experience of each group, you may want to evaluate the extent to which certain individuals are inclined or disinclined to participate in different ways.  Are there any similarities between the visitors who elect to write a comment and those who choose to read silently?  Can you identify your creators, critics, and consumers, and does altering the design strategy alter the profiles of these groups?

 

If you are engaging in a process-based participatory project, plan for incremental and adaptive evaluation.  If you are going to work with community members for three years to design a new program, it's not useful to wait until the end of the three years to evaluate the overall project.  You need to be able to evaluate as you go, and ideally, to find ways to involve participants in the evaluation.  This can be messy and can involve changing design techniques or experimental strategies along the way, which, as noted in the stories of The Tech Virtual and Wikipedia Loves Art, can lead to participant frustration.  But these projects are not static; they evolve over time.  And continual evaluation and redirection can help complex or new projects stay aligned to their ultimate goals while making the project work for everyone involved.

 

Continual evaluation can also create a useful feedback loop, providing new design insights that ultimately lead to better visitor experiences.  Consider the humble comment board.  If you have one in your institution, try changing the material provided for visitors to write on or with, and see how it changes the content and volume of comments produced.  Change the prompt questions, add a mechanism by which visitors can easily write "response cards" to each other, or experiment with different strategies for staff curation and comment management.  For example, in the Advice exhibition, we learned through evaluation that participants wrote very different pieces of advice with markers on a fake bathroom stall than on post-its on gallery walls.  We learned that using multi-colored post-its of different sizes, instead of just one type, created a more visually appealing experience for the audience.  We learned that people were more likely to respond to each other's questions if the questions were marked in some way (in our case, written on bigger post-its).  If we had only offered one of these options, we might not have learned how different contributory frameworks affect the extent to which visitors comment on each other's work, ask questions, and make declarative statements. 
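
For instance, a quick way to compare comment-board variations is to tally volume, comment length, and how often visitors respond to one another under each design.  The data and condition names in this sketch are hypothetical:

from statistics import mean

# Hypothetical comment logs for two prompt designs on the same board.
boards = {
    "open prompt": [
        {"text": "Loved the exhibit!", "replies_to_other": False},
        {"text": "Why is the lighting so dim?", "replies_to_other": False},
    ],
    "question prompt": [
        {"text": "I think it depends on who you ask.", "replies_to_other": True},
        {"text": "My advice: listen more than you talk.", "replies_to_other": False},
        {"text": "Agreed with the card above.", "replies_to_other": True},
    ],
}

for condition, comments in boards.items():
    volume = len(comments)
    avg_len = mean(len(c["text"].split()) for c in comments)
    reply_rate = sum(c["replies_to_other"] for c in comments) / volume
    print(f"{condition}: {volume} comments, avg {avg_len:.1f} words, "
          f"{reply_rate:.0%} respond to another visitor")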

 

Finally, and most importantly, you need to find ways to measure not just the outputs of the participatory project but also its outcomes.  We are very used to measuring quantifiable outputs--how many people visited, how many comments logged, how many sculptures made.  But how does the number of comments on a comment board relate to the impact of the comment board, both on those who write comments and those who read them?  While it can be time-consuming, adopting evaluative techniques that focus on the specific content of participatory contributions or actions can yield more useful information about the impact of projects.  For example, if you embark on a personalization project with the goal of creating more long-term, deep relationships with visitors, you must be able to measure not just how many people use the personalization mechanism but how frequently those visitors return and what they do when they return.  You should also measure how well staff members are able to identify and be responsive to frequent visitors.  If there are true relationships being formed, staff and participants should know each other's names and express an interest in getting to know each other better in the context of the institution. 
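
As an illustration of the output/outcome distinction, the following sketch uses an invented visit log for a hypothetical personalization feature: the output is how many people signed up, while the outcome signal is whether and how often they came back:

from collections import defaultdict
from datetime import date
from statistics import mean

# Hypothetical log of visits by personalization-account holders.
visits = [
    ("member_01", date(2010, 1, 5)),
    ("member_01", date(2010, 2, 12)),
    ("member_02", date(2010, 1, 9)),
    ("member_03", date(2010, 1, 20)),
    ("member_03", date(2010, 3, 2)),
    ("member_03", date(2010, 4, 15)),
]

by_member = defaultdict(list)
for member, day in visits:
    by_member[member].append(day)

signups = len(by_member)                # output: how many people used the feature
returners = [m for m, days in by_member.items() if len(days) > 1]
return_rate = len(returners) / signups  # outcome signal: did they come back?

print(f"{signups} members, {len(returners)} returned ({return_rate:.0%})")
for member in returners:
    days = sorted(by_member[member])
    gaps = [(later - earlier).days for earlier, later in zip(days, days[1:])]
    print(f"{member}: {len(days)} visits, avg {mean(gaps):.0f} days between visits")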

 

Science Buzz, an online social network managed by the Science Museum of Minnesota, is the subject of an extensive evaluation study by researchers at Michigan State University as part of the IMLS Take Two project.  Science Buzz, a National Science Foundation-funded project, is a multi-author blog community website that invites both museum staff and online participants to author, share, and comment on articles related to contemporary science news and issues.  Science Buzz also includes physical museum kiosks in several science centers throughout the US, but for the purposes of the Take Two research, researchers focused solely on the discourse on the Science Buzz website.

 

Science Buzz is a complicated beast.  Since 2006, staff and visitors have posted and commented on over 1,500 topics, and the blog enjoys high traffic from an international audience.  But the Take Two project wanted to go beyond basic numbers (how many posts, comments, registered users, and views) to tackle a few more fundamental and profound questions.  The researchers are focusing their study on four big efforts: describing the online community that uses Science Buzz, describing the nature of discourse on the site, evaluating the extent to which Science Buzz supports "knowledge building" activities for users, and assessing how the practice of managing Science Buzz affects institutional culture and practice.  The first two of these are descriptive and are focused on better understanding the user profile and the dialogic ways that people engage with each other on the website.  The last two are about impact outcomes--both for participants and for staff.  Note that there is no research question about impact on non-participating audience members, those who consume but do not contribute content to Science Buzz. 

 

To evaluate the "knowledge building" impact of Science Buzz, the Take Two researchers coded individual statements in blog posts and comments for 20% of posts with fifteen comments or more, grouping them under "building an argument," "exploring new ideas," "building a writer's identity," and "building a community identity."  By coding individual statements, the researchers were able to spot patterns in the rhetoric used on the site and to identify shifts over the course of a given conversation that might represent individual and/or interpersonal knowledge-building.  They were also able to explore the extent to which Science Buzz fulfills individuals' personal goals for scientific learning and argumentation versus their interest in building a community around science learning. 
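
As a rough illustration of that sampling-and-coding approach, the sketch below uses invented post data and randomly assigned codes; in the actual study, human researchers read and coded each statement:

import random
from collections import Counter

random.seed(0)

# The four Take Two coding categories described above.
CATEGORIES = ["building an argument", "exploring new ideas",
              "building a writer's identity", "building a community identity"]

# Hypothetical corpus of posts with their comment counts.
posts = [{"id": i, "comments": random.randint(0, 40)} for i in range(1500)]

# Select posts with fifteen or more comments, then sample 20% of them.
eligible = [p for p in posts if p["comments"] >= 15]
sample = random.sample(eligible, k=max(1, len(eligible) // 5))

# Simulated codes stand in for hand-coded statements, just to show how
# the tallies would roll up once coding is complete.
tally = Counter()
for post in sample:
    for _ in range(post["comments"]):
        tally[random.choice(CATEGORIES)] += 1

for category, count in tally.most_common():
    print(f"{category}: {count} coded statements")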

 

It's no accident that the Take Two team worked with researchers of rhetoric to design an evaluative tool for Science Buzz.  The evaluation of participatory projects may have more in common with research from outside the world of cultural institutions than with research from within it.  Whether your project requires support from the fields of motivational psychology, community development, or conflict resolution, there are researchers in these related fields whose work can inform participatory projects in museums.  In particular, when it comes to studying online participation, there is a growing volume of industry and academic research available on everything from who participates to how and why they do so.  By partnering with researchers from other fields, museum evaluators can have their own participatory, collaborative learning experiences, to the mutual benefit of all parties.

 

Because participation is diverse, no single technique is best suited to its study.  In the case of Take Two, the rhetoric-based approach is neither perfect nor the only way to study Science Buzz.  It treats users across the site as anonymous participants, ignoring the heavy and unique role of staff voices on the blog as well as the frequency with which users return to make subsequent arguments or posts across different topics on the site.  It also does not track the profile of the non-registered consumer versus the registered consumer versus the registered commenter versus the registered author, all of which could be seen as rungs on the ladder of participation.  Fortunately, informal research activities, like research staff blogs and Beck Tench's regular public evaluation presentations, have supplemented the formal research to examine some of these questions.  But the published Take Two research focuses explicitly on building science knowledge through online discourse, and the researchers picked the evaluative tools they believed would help them address these questions in the Science Buzz context.  Part of the challenge of the project was simply developing the analytic tools to study a familiar question (science knowledge-building) in a new research environment (an online social network).  Unfortunately and somewhat ironically, the Take Two research team was not able to be as flexible and transparent with their work as the projects they studied, but the research is a first step in the right direction.  The team focused on mission-driven questions, found reasonable tools to answer those questions, rigorously applied those tools, and is publishing the results.  I am hopeful that many future teams will approach the question of evaluation with a comparable level of rigor when considering what and how we might measure participation and how it might relate to the overall goals of a project or institution. 

 

Activity: From Goals to Outcomes to Measurements

Write down your goals for your participatory project for the institution, participants, and audience.  For each goal, brainstorm five outcomes that you would expect to see if that goal were achieved.  For each outcome, determine how you might measure its incidence.  Then consider the overall blend of measurements and work in the opposite direction, making sure that, taken together, the proposed measurements will allow you to comprehensively assess your ability to reach your goals.  Over the course of your project, your understanding of appropriate outcomes, indicators, and measures may change, but your goals should stay constant. 
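
One lightweight way to organize this exercise is a nested outline of goals, outcomes, and proposed measurements that can be checked for gaps.  The entries below are placeholders, not prescribed goals:

# Hypothetical worksheet: goals -> expected outcomes -> proposed measurements.
plan = {
    "be a safe space for difficult conversations": {
        "visitors voice disagreement respectfully": ["code talk-back cards for tone and stance"],
        "facilitated dialogues fill up": ["track registration and attendance per session"],
    },
    "build long-term relationships with participants": {
        "participants return repeatedly": ["count repeat visits per participant per quarter"],
        "staff know frequent participants by name": [],  # measurement still to be defined
    },
}

# Work backward: flag any outcome that has no proposed measurement yet.
for goal, outcomes in plan.items():
    for outcome, measures in outcomes.items():
        status = ", ".join(measures) if measures else "NO MEASUREMENT YET"
        print(f"[{goal}] {outcome} -> {status}")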

 
