In 2011, we embarked on a quest to work out how best to factor conversations about quality into decisions about value. We asked ourselves: “Can culture be measured?”
This question grew out of mutually frustrating exchanges with decision-makers who were stuck making choices about value based on box office or bums on seats, rather than on “quality”.
The answer to our question was that culture is already being measured. However, instead of relying on feedback from a combination of public, peer, and self-assessors, a handful of individuals often make judgments about quality on behalf of the public, behind closed doors. Decisions about public value are not publicly accountable.
These opaque quality judgments ultimately hurt the sector, because they do nothing to change the status quo. Measurement of quality is inconsistent and unclear, so decision-makers fall back on other metrics to allocate resources. In an environment where multiple sectors have equally valid claims on those resources, those who can more clearly articulate their value win a larger slice of the pie, while those who cannot see their slice taken away. In this environment, anecdote loses out to data.
At Culture Counts, we know “bums on seats” does not accurately represent the value of culture. We’ve all had experiences that move or inspire us, make us reflect on a work’s relevance to the world, or leave us wanting to know more about the work’s creator. We know anecdotally that these experiences are valuable. How can we describe these experiences clearly and consistently, and collect feedback about them to produce insight for the sector?
We asked cultural organisations around the world the same question. Clear themes of quality emerged from these conversations – things at the heart of what cultural organisations do. What if we were to capture people’s perceptions framed by these standard themes of quality? Working with those organisations, we refined the recurring themes into the quality metrics we now use across the world – metrics that are specific, transparent, and generated by the sector.
Having specific outcomes of quality does not make measurement of quality simplistic, however. Simplistic measurement is measurement that fails to give adequate weight to the elements that deserve it – we would argue that the old system is simplistic because it does not put enough emphasis on cultural quality or public accounts of value. Specific measurement means we have clear and consistent ways to articulate quality. It means we can all share in the feedback process and easily talk about the particular things that contributed to a work, like its ability to move someone emotionally or the risk an organisation took in creating and performing it. It empowers the sector to communicate its value in terms that best align with its activity, and to back up claims about quality with evidence rather than anecdote.
At the same time, specific measurement does not mean that all organisations should be (or are) measured on the same terms, even if they are aiming for the same or similar outcomes – context is important. Many smart decision-makers already know that cultural quality does not occur in a vacuum. Factors like artform, location, and budget should always be considered to build a holistic picture of an organisation’s or a work’s value.
Until the cultural sector has clear and consistent evidence about quality, financial data and attendance will continue to be used as proxies for its value. Let’s stop relying on simplistic measures like “bums on seats” and start using specific measures like captivation, risk, and excellence.