Hard Truths about Arts Data

I’ve been happily occupied in concert land of late, but a couple of weeks ago I had a chance to try on the arts researcher hat for a spell. Below is the resulting essay, which relates the problems the hard sciences have encountered in finding the truth to the nascent world of arts research:

Hard science seems to be getting flabby. From the social sciences to pharmacology to development economics, truths that had been proven and reproven through scientific studies are, upon later examination, turning out to be false.

Among other factors, the problem appears to stem from a phenomenon known as publication bias, in which journals tend to only publish studies that support certain theories. As Holden Karnofsky explains in a post on GiveWell.com:

publication bias can come both from “data mining” (an author interprets the data in many different ways and publishes/highlights the ways that point to the “right” conclusions) and the “file drawer problem” (studies that do not find the “right” conclusions have more difficulty getting published).

In a New Yorker article, Jonah Lehrer explains that after enough studies drawing the “right” conclusion, “the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.” This is how yesterday’s truths become today’s discredited theories, a phenomenon scientists call the “decline effect.” This shifting ground has cast doubt on established drug treatment protocols, and on best practices in other fields that seek data-driven solutions to complicated problems, such as humanitarian aid and ecology.

Cries of publication bias have not yet roiled the arts research community. Of course, arts research is in its infancy compared to the social and physical sciences, and arts researchers do not need to confirm accepted paradigms to win grants and fuel their careers. But as anyone who has written a report on the use of a grant will confess, data mining is rampant.

Non-profit arts organizations, with funding on the line, feel compelled to report on their successes and overlook their shortcomings. If an organization fails to meet its goal of, say, serving 1,000 audience members, it can report on another metric, such as units of service provided, or describe qualitative features that make its work unique. In addition, because non-profits usually do not have dedicated resources for program evaluation, the data they do collect is often unreliable. An artist-led chamber ensemble that does not operate its own box office may only be able to eyeball its audience to estimate how many people it serves, let alone find out who they are or whether they are repeat customers. In the arts, poor data collection has an effect similar to data mining, since the people collecting the data will be more inclined to find positive results to report.

These factors have two consequences: funders interested in measuring their own effectiveness do not receive compatible data sets they can compare, and non-profits have a disincentive to accurately measure (and therefore be able to improve) their work. Faulty data at the point of service informs policy, which can lead to funder and grantee initiatives whose effectiveness in supporting broader goals in the field is uncertain. Before we can address the myriad qualitative effects of the arts (e.g., what is the impact of a lifetime of playing an instrument?), or seek solutions to perceived problems (the lively arts are losing ground to technology), we must accurately collect large amounts of quantitative data that establish basic attributes of our cultural activities.

As other research communities have grappled with the problems posed by publication bias, they have also posited solutions. Writing in his blog, Aid on the Edge of Chaos, Ben Ramalingam cites Daniel Sarewitz’s recent article in Nature, which examines the challenges posed by publication bias. Sarewitz, Ramalingam notes, suggests that one recourse for such bias is that society can push back against the medical community when it produces “useless results.” Ramalingam hopes that humanitarian aid recipients can begin to leverage “new technologies and feedback processes” to push back against policies that cause more harm than good. But he does acknowledge that aid recipients have “severely limited” opportunities to push back. The same is true for non-profits that similarly depend on institutional funding.

Karnofsky offers another solution: “large-scale, general-purpose open access data sets.” He explains that “an effort that aimed specifically to create a public good might be better suited to maximizing the general usefulness of the collected data, and may be able to do so at greater scale than would be realistic for a data set aiming to answer a particular question.”

The general-purpose data relevant to the arts community might involve identifying how many people are served by an organization, and how many become repeat audience members. If funders required accurate reporting of how many people were served by an arts organization – with consequences for inaccurate information – non-profits would be forced to track their efforts more carefully. Funders could also make clear that they are more interested in accuracy than in high numbers, and that the number of people served would not affect future funding decisions.
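For illustration, here is a minimal sketch of how even a small organization might count the people it serves and its repeat attendees. The event names and email addresses are made up for the example; any stable identifier from a mailing-list sign-up or ticketing export would work the same way:

```python
from collections import Counter

# Hypothetical attendance records: one entry per sign-up or ticket,
# keyed by email address (any stable identifier would do).
events = {
    "spring_concert": ["ana@example.org", "ben@example.org", "cho@example.org"],
    "fall_concert": ["ana@example.org", "dee@example.org"],
}

# Count how many events each person attended.
attendance = Counter(email for attendees in events.values() for email in attendees)

unique_people_served = len(attendance)                                   # distinct individuals
repeat_attendees = sum(1 for visits in attendance.values() if visits > 1)  # attended more than once

print(f"People served: {unique_people_served}, repeat attendees: {repeat_attendees}")
```

Even a spreadsheet export run through a simple tally like this would give a funder figures that are comparable across grantees, which is the point of a general-purpose data set.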

In the recent Grantmakers in the Arts Reader, Amy Kitchener and Ann Markusen acknowledge that small arts organizations have limited means to accurately gauge their efforts. They suggest that rather than requiring uniform data from all grantees, funders sit down with arts non-profits for “interactive interviews” and “ask what they think their accomplishments have been in their own terms.” This, however, would only be business as usual, not the development of more rigorous evaluation standards.

It may seem unwise to place the burden of data collection on arts administrators already busy carrying out their programs. But with some training – and an imperative from funders – staff can learn to incorporate more accurate quantitative data collection into their program design. Savvy development departments collect minute details of their donors’ lives. Organizations can learn about their audiences in the same way.
