Platforms and Values
Every networked platform uses some kind of algorithm to determine which content, experiences, or elements will be promoted and which will be harder to access. Designers are not always attentive to the dynamics that these algorithms generate, which can affect the overall user experience of the platform and the presentation of its values. When planning participatory experiences, it’s important to consider not only the mechanics of the activities and outputs offered but also the related values the platform will support.
Consider platforms in which visitors contribute comments, creative products, or multimedia to an institutionally-directed exhibit or program. Museums tend to use one of two types of platforms to display visitor-generated content: those that value recency and those that value quality (or a mixture of both). Platforms that value recency put the newest visitor comments or videos front and center, and previous comments are either archived or accessible on secondary layers. Platforms that value quality use some curation system (almost always staff-led) to select featured content for presentation to visitors.
Each of these simple platforms has a value system baked into its functionality. The recency approach reflects a value on visitor creation. If you show the most recent content to visitors, that communicates, "if you make a video or a comment at this station, it will be available for everyone to see, right now." This incentivizes comment-creation among the visitors for whom momentary fame (or fame in the eyes of their current companions) is appealing. However, recency is deficient as a model on its own because it makes no claim for the relative value of the comments made. If I walk into a comment space after a group of teenage boys has been inside, I may see a lot of silly contributions and be turned off by the experience. Worse, my exposure only to the most recent contributions may distort my understanding of the intent of the exhibit or the potential for how I might use it. Recency drowns out the gems.
The quality approach reflects a value on comment content, not timing. This solves the problem of displaying poor submissions by only featuring good ones, but it introduces its own challenges. Strangely, most museums are reluctant to allow visitors to rate the quality of others' contributions and instead use an entirely staff-based model for moderation and selection of featured content. This can lead to exhibits piled with visitor contributions languishing for weeks until the beleaguered staff member in charge can sort through them and select the best for display. Featured content may be updated rarely or not at all, and as such, loses the ability to be responsive to current visitor interests and issues. Quality-focused platforms may also disincentivize sharing by setting a standard for visitor-generated content that is too high for the average visitor to feel she can achieve. In many story-sharing platforms, museums solicit experts or celebrities to create the featured content (often using production techniques that are higher quality than those available to visitors), in which case there are obvious differences between the featured content and visitor-generated content. Featured content promotes quality, but it can also promote the kind of elitist black box approach that many museums are trying to avoid in participatory projects.
There are successful ways to blend recency and featured content that can offer spectating visitors dynamic and interesting content to consume. One of the most obvious is to invite visitors themselves to rate and sort the content. As noted in the first chapter, there are many more people who enjoy spectating and critiquing content than people who enjoy creating it. If visitors can sort and rate visitor-generated content, it takes the load off staff (who rarely have the time to do it). It also provides "critical" visitors with an activity that channels their frustration with poor contributions and delight in quality ones into useful output. This activity is not only about expressing likes and dislikes; it's a useful cognitive activity that promotes learning how to make judgments and connections among content sources. There are many historians, curators, and scientists who spend more time evaluating and analyzing content than generating it. Why not promote a participatory activity that reflects these important learning skills?
By incorporating the networked preferences of visitors over time, a visitor-generated exhibit could surface higher-quality, fresher offerings to spectators. But with this potential comes a justifiable worry that visitors will just select the funniest items, or the ones made by their friends, or will generally use criteria that are not in line with museum values. For this reason, it's essential for museums to think bigger than just recency and featured content. There are many other values upon which we can design platforms for content-sharing. Let's take a look at how the same system could evoke two different values: diversity and reflective discourse.
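To make the blend of recency and visitor ratings concrete, here is a minimal sketch of how such a ranking might be computed. All names, weights, and the half-life value are illustrative assumptions, not a description of any museum's actual system: each contribution's score combines its average visitor rating with a recency bonus that decays exponentially, so well-rated older pieces can outrank brand-new ones without the newest content disappearing entirely.

```python
import math
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Contribution:
    """A hypothetical visitor contribution (video, comment, etc.)."""
    title: str
    created: datetime
    ratings: list = field(default_factory=list)  # visitor ratings on a 1-5 scale

def blended_score(item, now, half_life_hours=24.0, rating_weight=0.7):
    """Blend normalized average rating with a decaying recency bonus.

    Unrated items get a neutral 0.5 so new contributions aren't buried.
    The recency bonus halves every `half_life_hours`.
    """
    avg = (sum(item.ratings) / len(item.ratings) / 5.0) if item.ratings else 0.5
    age_hours = (now - item.created).total_seconds() / 3600.0
    freshness = math.exp(-age_hours * math.log(2) / half_life_hours)
    return rating_weight * avg + (1 - rating_weight) * freshness

now = datetime(2010, 1, 2, 12, 0)
items = [
    Contribution("brand-new, unrated", now - timedelta(minutes=5)),
    Contribution("old but well-loved", now - timedelta(days=7), ratings=[5, 5, 4]),
    Contribution("recent and silly", now - timedelta(hours=1), ratings=[1, 2]),
]
ranked = sorted(items, key=lambda c: blended_score(c, now), reverse=True)
for c in ranked:
    print(c.title, round(blended_score(c, now), 3))
```

With these weights, the well-rated older piece edges out the unrated newcomer, and the poorly rated recent one falls to the bottom. Tuning `rating_weight` and `half_life_hours` is exactly where a platform's values get encoded: a high rating weight favors curation, a short half-life favors the live, "right now" feel of recency.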
Imagine a video kiosk in a history museum intended to invite visitors to "share your story" related to a historic event on display. A platform that values diverse sharing might be designed with multiple non-identical kiosks that use different questions and theming to solicit different perspectives on the same experience. Visitors who act as critics might be asked not to rate videos or pick their favorites but instead to sort them into different perspective categories. At another station, critics might be able to then select favorites within each category. In this scenario, spectators would not just see "the best" videos overall, but the best videos reflecting a diversity of perspectives.
Now imagine the same exhibit with a different platform that values reflective discourse. This exhibit might use heavier consistent theming across the video creation stations. Visitors might be prompted to select another visitor's video as a starting point and make a video in response to other visitors rather than reacting to an institutionally-provided query. For critics, the system would focus on commenting rather than rating or sorting. Videos might be featured not based on the diversity of perspectives represented, but on the chain of response videos generated. In this scenario, spectators would see long multi-vocal dialogues played out across videos and text comments.
Two platforms, two designs, two different goals and desired visitor experiences. Let's leave the world of the theoretical video kiosk and take a look at two real platforms--ScratchR and Signtific--that are successfully designed to reflect distinct values in their networked experiences.