Ch6_pt8

Page history last edited by Nina Simon 14 years, 4 months ago

Evaluating Participatory Projects

 

    How can you confidently assert that a participatory project will enable you to carry out some part of your mission or accomplish a program goal? When it comes to traditional institutional programs and activities, we're comfortable making statements about their utility for two reasons. First, we are familiar with these activities--what they cost, what they do, and what kind of impact we perceive them to have. Second, traditional activities have (hopefully) been evaluated at some point, so there is some hard data behind their presumed outcomes. 

    When it comes to experimental or new activities, familiarity isn't a workable justification. Many new initiatives are so unfamiliar that even reasoned arguments about their potential cannot overcome fear of the unknown. In these cases, people look to evaluation as a way to "prove" projects' value and outcomes, but evaluation is not a silver bullet. There are too many different ideas of what "success" looks like for institutions, participants, and audiences to conclusively evaluate a project's value. For some people, the evaluation question is "can you demonstrate that this participatory project will bring in more paying visitors?" For others, it's more important that a project foster a particular kind of engagement or learning. 

    Here's the ugly truth: if your institution's leadership is opposed to visitor participation, no amount of evaluation will change their minds. (This is in the demonizing tone again, seemingly for shock value? Is it really true? Does it advance the cause to speak to it? I could see mention in an informal conversation, but in a book that is intended to influence the future of museums and their role in society, is it helpful? SB) In some art museums, participatory spaces that attract huge crowds of energized and dedicated art-makers are denigrated as "petting zoos" by those with a more controlling curatorial mindset. There will always be history museum leaders who see multi-vocal exhibitions as overly relativistic, or science museum directors who don't believe that engaging visitors in the process of doing science is as valuable as showing them the accomplishments of scientific greats throughout history.

    This doesn't mean that evaluation is unimportant--but it's only useful to those who are ready to learn from it. Evaluation is an essential (and often neglected) practice that allows us to assess and improve projects. Lack of good evaluation of participatory projects is probably the greatest contributing factor to their slow acceptance and use in the museum field. Currently, many participatory projects are framed as experiments and are not integrated into standard project cycles that involve full formative and summative evaluation. (And yet, grants fund many of these experimental participatory programs, and usually do include requirements for evaluation. An experiment includes evaluation as part of the process, by definition. SB)

(Part of the issue is that evaluation is not practiced in many museum settings, not just participatory contexts. Reflection is limited. Evaluation is frequently only activities-based, such as: we got x number of people in the door, or y number of inches of free press--rather than outcomes-based. There is a significant body of literature working to reframe evaluation in government, education and business to focus on outcomes. Worth drawing on here to advance the field in museums.SB)

(Great point and I am adding a paragraph in here about outcomes-based eval, which I'm hearing a lot about in museums these days - NS)

    These projects need to be evaluated, but they also require new evaluative techniques specific to the unique nature of participation. They introduce new questions: What does impact look like when visitors are not only consuming content but helping to create it? How do we evaluate both the products of participation and the skills learned in participating in the process? If a project is co-designed by an institution and a community group for a general audience, whose vision of success should the project be evaluated against?

    To answer these questions, start with the goals of the institution or the project rather than adapting pre-existing evaluative techniques. In theory, every evaluation tool is developed to measure the extent to which a project has achieved its goals, but in the case of familiar project types like exhibitions, project goals are often written interchangeably from one project to the next, so evaluators end up measuring new content in familiar contexts with familiar instruments.

    Participatory projects are different contexts. (It would be useful to characterize the 'difference' of the context. SB) Recall the various wide-ranging metrics the Wing Luke Asian Museum uses to evaluate the extent to which it is achieving its community mission. When it comes to their participants, they assess the extent to which "people contribute artifacts and stories to our exhibits." With regard to audience members, they consider whether "constituents are comfortable providing both positive and negative feedback" and "community members return time and time again." Institutionally, they evaluate staff "relationship skills" and the extent to which "young people rise to leadership." And with regard to broader institutional impact, they even look at the extent to which "community responsive exhibits become more widespread in museums." Some of these may sound entirely qualitative, but there are quantitative measures that could be gleaned from each one. For example, the metric around both positive and negative visitor comments reflects their specific interest in supporting dialogue, not just receiving compliments. Many museums solicit comments from visitors, but I suspect that few, if any, code the comments for their diversity of opinion.
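To make "coding comments for diversity of opinion" concrete, here is a minimal sketch of what a quantitative version of that metric could look like. Everything in it is hypothetical: the keyword lists, the sample comments, and the function names are placeholders, and in practice the coding would be done by trained human reviewers rather than keyword matching.

```python
# Hypothetical sketch: code visitor comments as positive/negative/mixed/neutral
# and compute a rough "diversity of opinion" score. Keyword lists are
# invented placeholders, not a real sentiment method.
from collections import Counter

def code_comment(text):
    """Crudely classify a single comment by keyword overlap."""
    positive = {"love", "great", "wonderful", "beautiful"}
    negative = {"boring", "confusing", "disliked", "offensive"}
    words = set(text.lower().split())
    if words & positive and words & negative:
        return "mixed"
    if words & positive:
        return "positive"
    if words & negative:
        return "negative"
    return "neutral"

def opinion_diversity(comments):
    """Return the code counts and the share of comments that are NOT
    simply positive -- a rough proxy for whether visitors feel free
    to offer criticism, not just compliments."""
    codes = Counter(code_comment(c) for c in comments)
    total = sum(codes.values())
    diversity = (total - codes["positive"]) / total if total else 0.0
    return codes, diversity

# Invented example comments for illustration only.
comments = [
    "I love this exhibit",
    "Honestly, I found the layout confusing",
    "great stories, but the labels were boring",
]
codes, diversity = opinion_diversity(comments)
```

A museum that only ever scores near zero on such a measure is probably collecting compliments, not dialogue, which is exactly the distinction the Wing Luke metric is after.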

    If one of your goals is, for example, to become a "safe space for difficult conversations," how would you evaluate your ability to do so? The starting point for most institutions is to host exhibits or programs on provocative topics likely to stir up "difficult conversations." But offering an exhibition about AIDS or racism ensures neither dialogue nor the feeling of a safe space. Programs like the Science Museum of Minnesota's and the Levine Museum of the New South's talking circles, or the Living Library project, provide explicit, facilitated venues for difficult conversations, which allow these institutions to more fully evaluate the emotional impact of dialogue relative to the content at hand. Museums interested in this goal might also evaluate their projects by coding the frequency, length, content, and tone of visitors' comments on talk-back boards, or they might follow up with visitors post-visit to ask specifically about what kind of dialogue (if any) the museum experience sparked. (Good examples of the difference between activity and outcome. SB)

    If you have clear goals for your project, you can likely derive effective evaluative tools from those goals. But many participatory projects, especially small experiments, don't start with goals; they start with a hunch or an idea. (Suggest that 'how' something is to be done can be a hunch or idea, but that the underlying 'what' is needed, is not that unknown or experimental. The issue is experimenting with the participatory tool to achieve the institutional goals or resolve the institutional problems that are already evident. SB) Many people don't know what the possible goals are for participation because they see the techniques themselves as new and potentially untested in the museum environment. While I hope the frameworks and case studies in this book have helped you better articulate the participatory goals that might drive your next project, I don't want to discourage anyone from trying something because you think it might be fun or valuable, even if you can't articulate exactly how at the starting point. This is how experiments get started, with broad goals, hazy guesses, and a lot of trying things out.

    If you want to freely experiment with small participatory endeavors, it's useful to develop a framework in which the experiments will be evaluated relative to each other over time. At the Museum of Life and Science, Beck Tench developed a honeycomb diagram to display the seven core goals MLS was trying to achieve with their forays into social participation. For each experiment, she and other staff members shaded the cells of the honeycomb that represented that experiment's intended goals. While this is not a robust technique for assessing the extent to which a project has accomplished its goals, it gave the staff at MLS a shared language for the goals they might apply to participatory projects, and that language could then be used to develop more exhaustive tools to measure whether the goals were met. Like many participatory projects, Tench's honeycomb tool is adaptive. It is a simple framework that can be applied against an evolving project, and it is accessible and usable by team members of all levels of evaluation experience and expertise, including visitors and community participants.
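The honeycomb idea boils down to a simple goal-by-experiment grid, which can be sketched in a few lines of code. This is only an illustration of the structure, assuming invented goal names and experiment names; MLS's actual seven goals and projects may differ.

```python
# Minimal sketch of a Tench-style goal grid: each experiment is tagged
# with the subset of core goals it targets. Goal and experiment names
# here are placeholders, not MLS's actual categories.
GOALS = ["learning", "dialogue", "contribution", "community",
         "play", "visibility", "sustainability"]

# Hypothetical experiments, each "shading" the goals it aims at.
experiments = {
    "tweetup": {"dialogue", "visibility"},
    "visitor photo wall": {"contribution", "play"},
    "staff blog": {"learning", "visibility", "sustainability"},
}

def coverage(experiments):
    """Count how many experiments address each goal, so the staff can
    see at a glance which goals the portfolio is neglecting."""
    return {g: sum(g in tags for tags in experiments.values())
            for g in GOALS}

def untested_goals(experiments):
    """Goals no current experiment addresses -- candidates for the
    next round of small experiments."""
    return [g for g, n in coverage(experiments).items() if n == 0]
```

The point of such a grid is comparative, not summative: it doesn't say whether any experiment succeeded, but it keeps a growing pile of small experiments legible against a stable set of goals.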

(Would like to see link to the Tench honeycomb, or to have it included right in this section. SB)

(Yes, will be included - NS)

 

 

Continue to the next section, or return to the outline.

 

Comments (1)

Louise Govier said

at 4:05 am on Nov 24, 2009

This is so great - both empowering in terms of suggesting how you can develop your own evaluation criteria, while also saying don't sweat it if it's not immediately obvious.
