Beyond “warm impulses”

I’ve been catching up on the Museopunks podcast series, and a section of March’s installment, The Economics of Free, particularly caught my attention. In an interview, Maxwell L. Anderson, director of the Dallas Museum of Art, compares the data that shopping malls collect about their customers with the relative paucity of data collected about visitors to the typical art museum. I think it’s worth repeating (from about 18 minutes into the podcast):

[Malls] know all this basic information about their visitors. Then you go to an art museum. What do we know? How many warm impulses cross a threshold? That’s what we count! And then we’re done! And we have no idea what people are doing, once they come inside, what they’re experiencing, what they’re learning, what they’re leaving with, who they are, where they live, what interests and motivates them . . . so apart from that we’re doing great, you know. We’re like that mall that has no idea of sales per square foot, sales per customer. . . so we’re really not doing anything in respect to knowing our visitors. And learning about our visitors seems to me the most basic thing we can do after hanging the art. You know, you hang the art, and then you open the doors and all we have been doing is “hey look there are more people in the doors”.  And the Art Newspaper dedicates an annual ‘statistical porn’ edition of how many bodies crossed thresholds. Nobody’s asking how important the shows were, or what scholarly advances were realised as a function of them, or what people learned, how they affected grades in school. Nobody knows any of that. Nobody knows who the visitors were. So I consider it a baseline. We’re just at the primordial ooze of starting to understand what museums should be doing with this other part of our mission which is not the collection but the public.

I’d argue that we’re a little bit beyond the ‘primordial ooze’ stage of understanding*, although Anderson’s right that many museums don’t go much beyond counting ‘warm impulses’ (those infra-red people counters). He goes on to describe how the DMA’s Friends program is giving the museum more data about what their visitors do while inside the museum, and how this can inform their engagement strategies (22:45):

This is just another form of research, you know . . . we do research on our collections without blinking an eye, we think nothing of it. We spend copious amounts of time sending curators overseas to look at archives to study works of art but we’ve never studied our visitors. The only time museums typically study their visitors is when they have a big show, and they’re outperforming their last three years, everybody’s excited, and there’s a fever, and you measure that moment, which is measuring a fever. The fever subsides, the data’s no longer relevant but that’s what you hold on to and point to as economic impact. And largely, it’s an illusion.

I find it interesting that Anderson puts visitor research on a par with collection-based research. Often, I get the sense that collection research is seen as ‘core’ museological business, while visitor research is merely a ‘nice to have’ if the budget allows. But perhaps this is a sign of shifting priorities?

*Historically, most visitor experience research has taken place in science centres, children’s museums, zoos and aquariums rather than museums of fine art. Although there are of course exceptions.

Young Adults and Museums

It’s always exciting when your research data throws up something counter-intuitive. Or at least something that’s at odds with “conventional wisdom” on the subject.

One such piece of wisdom about museum visitors is that young adults (particularly those aged under 25) tend not to visit museums. Population-level statistical data tends to back this up, with a characteristic dip in the 18-24 age bracket (see this graphic from a previous post):

Heritage visitation in Australia by age: percentage of respondents who visited a heritage site in the previous 12 months (Source: ABS, Table 1.4)

Now, here is the age breakdown of the respondents to my visitor survey conducted at the SA Museum as part of my PhD research:

Age range of survey respondents, SA Museum (chart)

Not only are visitors aged under 30 not under-represented, they form the biggest age group I surveyed by a considerable margin! This is a surprising (albeit incidental) finding from my research which makes me wonder what’s going on here. Based on what I observed at the Museum during my fieldwork I have come up with the following hypotheses:

  • Proximity to university campuses. The SA Museum is right next door to Adelaide University and not very far from one of the main campuses of the University of South Australia. I got into conversation with a couple of groups of young adults who indicated they were visiting the museum to kill time between lectures.
  • The backpacker factor: The SA Museum is a popular destination for both interstate and international visitors (more than half of my sample indicated they were visiting the Museum for the first time, and I would wager that the majority of these were tourists). Based on my fieldwork observations, the survey sample appeared to include a considerable number of young “backpacker” tourists. Anecdotally, younger international tourists seemed less likely than older tourists to face the language barriers that would have prevented them from participating in the study (about 7% of the visitors I approached to complete a survey had limited or no English).
  • Free and centrally located: a few people indicated they were in the museum because it was free to enter and a way of escaping the heat or rain. A couple of people were waiting for someone with a hospital appointment (the Royal Adelaide Hospital is just down the road). Of course, they could also have spent this time in the shopping malls just across the road – but for some reason chose not to. So there are clearly other characteristics of the museum that attracted them, but these were beyond the scope of this survey. Others appear to have been ‘doing’ the precinct, visiting the Art Gallery of South Australia (next door) as well as the museum.
  • Young parents: A fair proportion of those in the 18-29 age group were accompanying young(ish) children. I don’t know if it’s just me, but I sense there has been a demographic shift between Generations X and Y. Most people of my (Gen X) vintage seemed to be well into their thirties before they settled down and started families. I suspect Gen Ys are having children younger, for a whole range of complex reasons which are beyond the scope of this post. This is just a gut feeling though – I haven’t cracked open the data.
  • Young couples: There was a surprising proportion of young (and highly demonstrative!) couples around. The museum as a date venue?
  • Patterns in the smoke: There is of course the possibility that this cluster is just a random quirk of my particular data set. However, the surveys were conducted across weekdays, weekends and public holidays (but not school holidays) to help control for variation in visiting patterns. My fieldwork observations show nothing to indicate that 18-29 year olds were more likely to agree to complete a survey than other age groups.

In retrospect, it would have been good if I’d been able to distinguish between the under and over 25s by splitting the age ranges the way the ABS does (I had a reason for not doing so, but in any case it’s no big deal). However, I went back to a pilot sample from late last year and found that the age spread, using different categories, was broadly similar:

Age range of pilot survey respondents (chart)

So what does all this mean? I’m not sure yet. Age is not expected to be a significant variable in my own research, and I only collected very basic demographic information so that I had a general sense of the survey population. I’d be interested in how this tallies with other museums, though, particularly those with free rather than ticketed entry. Ticketed venues tend to collect more comprehensive visitor data, and we tend to extrapolate from that. But perhaps they are not fully representative of museums as a whole?
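One practical postscript on the age-bracket mismatch: collect age in categories at least as fine-grained as any external scheme (such as the ABS brackets) you may later want to compare against, because brackets can always be merged afterwards but never split. Here’s a minimal sketch of the idea in Python – the ages below are invented for illustration, not my actual survey data:

```python
import pandas as pd

# Invented respondent ages -- purely illustrative
ages = pd.Series([19, 22, 24, 27, 31, 38, 45, 62, 71])

# Fine-grained ages can always be rolled up into ABS-style brackets...
abs_brackets = pd.cut(
    ages,
    bins=[17, 24, 34, 44, 54, 64, 120],
    labels=["18-24", "25-34", "35-44", "45-54", "55-64", "65+"],
)
print(abs_brackets.value_counts().sort_index())

# ...but a survey that only recorded "18-29" can never be split into
# "18-24" and "25-29" after the fact.
```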

Survey Responses – Benchmarks and Tips

I’ve now collected a grand total of 444 questionnaires for my PhD research (not including pilot samples) – which is not far off my target sample of 450-500. Just a few more to go! Based on my experiences, I thought I’d share some of the lessons I’ve learned along the way . . .

Paper or Tablet?

My survey was a self-complete questionnaire (as opposed to an interviewer-led survey) that visitors filled out while on the exhibition floor. During piloting I tried both paper surveys and an electronic version on an iPad, but ended up opting for the paper version, as I think the pros outweighed the cons for my purposes.

The big upside of tablet-based surveys is that there is no need for manual data entry as a separate step – survey programs like Qualtrics can export directly into an SPSS file for analysis. And yes, manually entering data from paper surveys into a statistics program is time-consuming, tedious and a potential source of error. The other advantage of a tablet-based survey (or any electronic survey, for that matter) is that you can set up rules that prompt people to answer questions they may have inadvertently skipped, automatically randomise the order of questions to control for ordering effects, and so on. So why did I go the other way?

First of all, time is a trade-off: with paper surveys, I could recruit multiple people to complete the survey simultaneously – all I needed was a few more clipboards and pencils and plenty of comfortable seating nearby. With only one tablet, only one person could be completing my survey at a time. Once you account for being able to collect far more paper surveys in a given period than with a single tablet, I think I still come out ahead even after doing the manual data entry. Plus, doing the data entry manually is a useful first pass of analysis, particularly during the piloting stages when you’re looking for the survey’s design flaws.
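To see why the clipboards come out ahead, here’s a back-of-envelope toy model. Every number in it is a hypothetical assumption for the sake of illustration, not a measurement from my fieldwork:

```python
# Toy model: paper vs single-tablet survey throughput.
# All figures below are invented assumptions, not measurements.

RECRUITS_PER_HOUR = 10   # visitors who agree to participate each hour
SURVEY_MINUTES = 10      # time each respondent spends on the questionnaire
ENTRY_MINUTES = 3        # manual data entry per paper survey
SESSION_HOURS = 5        # one fieldwork session

# Paper: with enough clipboards, respondents complete surveys concurrently,
# so throughput is limited only by the recruitment rate.
paper_surveys = RECRUITS_PER_HOUR * SESSION_HOURS

# Tablet: one respondent at a time, so at most 60/SURVEY_MINUTES per hour,
# however fast recruitment goes.
tablet_surveys = min(RECRUITS_PER_HOUR, 60 // SURVEY_MINUTES) * SESSION_HOURS

# Paper pays a data-entry penalty afterwards; the tablet doesn't.
paper_staff_hours = SESSION_HOURS + paper_surveys * ENTRY_MINUTES / 60

print(f"Paper:  {paper_surveys} surveys in {paper_staff_hours:.1f} staff-hours")
print(f"Tablet: {tablet_surveys} surveys in {SESSION_HOURS:.1f} staff-hours")
# Paper:  50 surveys in 7.5 staff-hours (about 6.7 surveys per staff-hour)
# Tablet: 30 surveys in 5.0 staff-hours (6.0 surveys per staff-hour)
```

Under these made-up numbers, paper still edges ahead per staff-hour once data entry is counted, and the gap widens with every extra clipboard (and willing visitor).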

Secondly, I think many visitors were more comfortable using the old-fashioned paper surveys. They could see at a glance how long the survey was and how much further they had to go, whereas this was less transparent on the iPad (even though I had a progress bar).

This doesn’t mean I would never use a tablet – I think they’d be particularly useful for interviewer-led surveys, where you can only survey one participant at a time anyway, or for large-scale surveys with multiple interviewers and tablets in use.

Refining the recruitment “spiel”

People are understandably wary of enthusiastic-looking clipboard-bearers – after all, they’re usually trying to sell you something or sign you up to something. In my early piloting, I think my initial approach may have come across as too “sales-y”, so I refined it so that the first thing I said was that I was a student. My gut feeling is that this immediately made people less defensive and more willing to listen to the rest of my “spiel” explaining the study and inviting participation. Saying I was a student doing research made it clear up front that I was interested in what they had to say, not in sales or spamming.

Response, Refusal and Attrition Rates

As any good researcher should, I kept a fieldwork journal while I was out doing my surveys. In it I documented everyone I approached, approximately what time I did so, whether they took a participant information sheet or refused, and if they refused, what reason (if any) they gave. During busy periods, recording all this got a bit chaotic, so some pages of notes are more intelligible than others, but over time I evolved a shorthand for noting the most important things. The journal was also a place to document general facts about the day (what the weather was like, whether there was a cruise ship in town that day, times when large numbers of school groups dominated the exhibition floor, etc.). Using this journal, I’ve been able to look at what I call my response, refusal and attrition rates:

  • Response rate: the proportion of visitors (%) I approached who eventually returned a survey
  • Refusal rate: the proportion of visitors (%) approached who refused my invitation to participate when I approached them
  • Attrition rate: this one is a little specific to my particular survey method and wouldn’t always be relevant. I wanted people to complete the survey after they had finished looking around the exhibition, but for practical reasons could not do a traditional “exit survey” method (since there’s only one of me, I couldn’t simultaneously cover all the exhibition exits). So, as an alternative, I approached visitors on the exhibition floor, invited them to participate and gave them a participant information sheet if they accepted my invitation. As part of the briefing I asked them to return to a designated point once they had finished looking around the exhibition, at which point I gave them the questionnaire to fill out [1]. Not everyone who initially accepted a participant information sheet came back to complete the survey. These people I class as the attrition rate.

So my results were as follows: I approached a total of 912 visitors, of whom 339 refused to participate, giving an average refusal rate of 37.2%. This leaves 573 who accepted a participant information sheet. Of these, 444 (77.5%) came back and completed a questionnaire, giving me an overall average response rate of 48.7% (444/912). The attrition rate as a percentage of those who initially agreed to participate is therefore 22.5% – or, if you’d rather, 14.1% of the 912 people initially approached.
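For anyone wanting to compute the same metrics from their own fieldwork tallies, the arithmetic is simple enough to script. A minimal Python sketch (the function is my own, not from any survey package), using the counts above:

```python
def survey_rates(approached, refused, completed):
    """Derive response, refusal and attrition rates from raw fieldwork counts."""
    accepted = approached - refused                # took an information sheet
    response = completed / approached              # returned a questionnaire
    refusal = refused / approached                 # declined at first approach
    attrition = (accepted - completed) / accepted  # accepted but never returned
    return response, refusal, attrition

# My totals: 912 approached, 339 refused, 444 questionnaires returned
response, refusal, attrition = survey_rates(912, 339, 444)
print(f"Response rate:  {response:.1%}")   # 48.7%
print(f"Refusal rate:   {refusal:.1%}")    # 37.2%
print(f"Attrition rate: {attrition:.1%}")  # 22.5% of those who accepted
```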

So is this good, bad or otherwise? Based on some data helpfully provided by Carolyn Meehan at Museum Victoria, I can say it’s probably at least average. Their average refusal rate is a bit under 50% – although it varies by type of survey, venue (Museum Victoria has three sites) and interviewer (some interviewers have a higher success rate than others).

Reasons for Refusal

While not everyone gave a reason for declining (and they were under no obligation to do so), many did, often apologetically. Across my sample as a whole, the reasons for refusal were as follows:

  • Not enough time: 24%
  • Poor / no English: 19%
  • Child related: 17%
  • Others / No reason given: 39%

Again, these refusal reasons are broadly comparable to those experienced by Museum Victoria, with the possible exception that my refusals included a considerably higher proportion of non-English speakers. It would appear that the South Australian Museum attracts a lot of international tourists or other non-English speakers, at least during the period I was doing surveys.

Improving the Response Rate

As noted above, subtly adjusting the way you approach and invite visitors to participate can have an impact on response rates. But there are some other approaches as well:

  • Keep the kids occupied: while parents with hyperactive toddlers are unlikely to participate under any circumstances, those with slightly older children can be encouraged if you can offer something to keep the kids occupied for 10 minutes or so. I had some storybooks and some crayons and paper, which worked well – in some cases the children were still happily drawing after the parents had completed the survey, and it was the parents dragging the kids away!
  • Offer a large print version: it appears that plenty of people leave their reading glasses at home (or in the bag they’ve checked into the cloakroom). Offering a large print version gives these people the option to participate if they wish. Interestingly, however, some people claimed they couldn’t read even the large print version without their glasses. I wonder how they can see anything at all sans spectacles if this is the case . . . then again, perhaps this is a socially acceptable alibi used by people with poor literacy levels?
  • Comfortable seating: an obvious one. Offer somewhere comfortable to sit down and complete the questionnaire. I think some visitors appreciated the excuse to have a sit-down and a break! Depending on your venue, you could also lay out some sweets or glasses of water.
  • Participant incentives: because I was doing questionnaires on the exhibition floor, putting out food or drink was not an option for me. But I did give everyone who returned a survey a voucher for a free hot drink at the Museum cafe. While I don’t think many (or any) did the survey just for the free coffee, it does send a signal that you value and appreciate your participants’ time.

[1] A potential issue with this approach is cuing bias – people may conceivably behave differently if they know they are going to fill out a questionnaire afterwards. I tried to mitigate this with my briefing, in which I asked visitors to “please continue to look around this exhibition as much or as little as you were going to anyway”, so that visitors did not feel pressure to visit the exhibition more diligently than they may have otherwise. Also, visitors did not actually see the questionnaire before they finished visiting the exhibition – if they asked what it was about, I said it was asking them “how you’d describe this exhibition environment and your experience in it”. In some cases I reassured visitors that it was definitely “not a quiz!”. This is not a perfect approach of course, and I can’t completely dismiss cuing bias as a factor, but any cuing bias would be a constant between exhibition spaces as I used comparable methods in each.

(More) Museum and Gallery Visits in England

Back in late 2011 I posted a summary of the latest Taking Part survey of participation in Arts, Sport and Heritage in England. Late last year, figures for the period spanning October 2011 to September 2012 were released by the Department for Culture, Media and Sport (DCMS).

Since 2005, when the survey began, these figures have shown an upward trend in the proportion of people who had visited a museum or gallery in the previous year. For the first time, that proportion has gone above 50%: 51.6% in the most recent survey period. It is as high as 57.5% in London, with the West Midlands trailing at 48.5%. Despite these variations, all English regions are seeing an increase in visitation.

Online participation is also growing, but still has a long way to go before it catches up to physical visitation – 28.7% of respondents had visited a museum website in the previous year (up from a mere 15.8% back in 2005/6).

Participation rates remain higher among white and higher socioeconomic demographic groups. However, the rise in participation by non-white and non-Christian people continues, with participation rates of 48.4% and 46.0% in the Black and Minority Ethnic and non-Christian religious communities respectively (compared with 35.4% and 35.3% in 2005/6). Participation rates are also rising across the socioeconomic spectrum.

Participation rates are the lowest among the over 75s (27%) and those living in socially rented housing (27.9%). However, in 2005/6 the participation rate among social renters was 24.9% – this represents a statistically significant increase. (Participation rates among the 75+ demographic remain steady).

Want to explore further? Summary reports as well as the raw figures (in Excel format) are available from the DCMS website.

Exhibition Costs – what’s in *your* budget?

One of the most popular posts on this blog is one from way back in mid-2011: Exhibition Costs – Constants and Variables. Working out what a museum exhibition should cost to develop is *the* question exhibition planners are asked most often. And it’s a subject featured in a recent article by Sarah Bartlett and Christopher Lee [1].

Compared to visitor numbers, data on exhibition costs are hard to come by. Many exhibition projects include, at least in part, contracts with outside design companies and exhibit fabricators, who understandably play their costing cards pretty close to their chest. Most of the available data is therefore derived from informal, self-selected survey responses. Also, whereas it’s pretty easy to agree on what constitutes a ‘visitor’, deciding what counts as part of an ‘exhibition budget’ is not so straightforward. Not everyone factors the same costs into their exhibit budget equation. So even when we do have numbers, it’s hard to tell if we’re comparing apples with apples (a point I have made previously).

Bartlett and Lee cite data from their own informal survey (the full report is apparently available on the Split Rock Studios website, but at the time of writing I couldn’t find it) based on 71 responses across the US. Responses were mostly from history, natural history and art museums, with relatively few science centres and children’s museums (in contrast to the survey by Mark Walhimer that I reported in my earlier post, where more hands-on style museums were represented). In the Bartlett and Lee sample, the vast majority of exhibitions came in under US$100/sq.ft., although some cost more than ten times that amount – reflected in the average cost for science/technology museums and visitor centres being over US$500/sq.ft.

But that’s far from the whole story, because survey respondents didn’t spell out what costs were included in their $/sq.ft. figure – and what was below the line. Bartlett and Lee identify the following categories that may or may not be included in an exhibit budget, to which I’ve added a few additional comments:

  • Differing base fit-out costs of a new versus an existing building (to which I’ll add the complexities of working in heritage buildings or in spaces with limited access, all of which increase on-site costs).
  • Basic finishes such as painting, track lighting and flooring (it’s amazing how much it can cost just to get a space up to an ‘inhabitable’ standard!)
  • Changes to building infrastructure like new walls or electric/data cabling
  • Preparatory costs, such as research, planning, design and management fees (to which I’ll add formative evaluation costs)
  • Audiovisual and electronic interactives (I’d say it’s worth adding that this could also include software and devices that are not strictly part of the fit-out, such as apps or audioguides)
  • Staff costs, both in-house and contract personnel (I’m aware that it’s often hard to quantify the amount of in-house time spent on an exhibition, particularly if staff are not required to quantify hours spent. Anecdotally I’ve heard there is often internal resistance to the idea of doing such quantifying as staff feel they’re being ‘checked up on’)
  • Maintenance costs (to which I’ll add other post-opening costs such as snagging, consumables, staffing costs and of course summative / remedial evaluation!)
  • Object-related costs such as conservation, packing and loan negotiations.

With this many variables in what people routinely count as part of the exhibition budget, it’s easy to see how the figures can vary by an order of magnitude or more. The most basic of exhibitions may draw upon the existing collection and not involve any changes to gallery infrastructure such as lighting, painting or display cases, and staff costs may be absorbed into day-to-day operating budgets rather than costed out. At the other extreme are highly media-rich and interactive exhibitions that involve considerable research and the hiring of a multitude of outside experts. Bartlett and Lee also identify geography as a factor in costs, at least across different parts of the US.
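To make the apples-and-oranges problem concrete, here’s a toy calculation. Every figure in it is invented for illustration – none comes from Bartlett and Lee’s survey. Two museums mounting the same notional 5,000 sq.ft. exhibition can report very different unit costs purely because of what they choose to count:

```python
AREA_SQFT = 5_000  # notional exhibition footprint

# Hypothetical cost categories (US$) -- invented for illustration
costs = {
    "design_and_fabrication": 350_000,
    "base_fitout": 150_000,       # painting, track lighting, flooring
    "infrastructure": 100_000,    # new walls, electrical/data cabling
    "av_and_interactives": 200_000,
    "in_house_staff_time": 250_000,
    "evaluation_and_maintenance": 50_000,
}

# Museum A reports only its external design/fabrication contract...
museum_a = costs["design_and_fabrication"] / AREA_SQFT

# ...while Museum B counts every category.
museum_b = sum(costs.values()) / AREA_SQFT

print(f"Museum A: ${museum_a:.0f}/sq.ft")  # $70/sq.ft
print(f"Museum B: ${museum_b:.0f}/sq.ft")  # $220/sq.ft
```

Same notional exhibition, a threefold difference in the headline number – and that’s before geography, building condition or media-richness enter the picture.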

1. Bartlett, S., and Lee, C. (2012). Measuring the Rule of Thumb: How Much Do Exhibitions Cost? Exhibitionist, 31(2), pp. 34-38.

Children in Museums and Galleries

The latest version of the Australian Bureau of Statistics’ report Children’s Participation in Cultural and Leisure Activities has recently been released. This report shows participation rates in a range of cultural, recreational and sporting activities by children aged 5-14 inclusive. I’ll focus on museum and gallery visitation here, although for comparison I’ve included public library and performing arts attendance in the following table:

Children’s attendance at cultural venues and events in preceding 12 months (Source: ABS)

This shows that 70% of children aged 5-14 attended at least one library, museum, gallery or performing arts event in the preceding 12 months. Note the figures pertain to activities undertaken outside of school hours, so this does not take into account school visits. The increases in performing arts and museum and gallery attendance are statistically significant, at least when comparing 2012 to 2006.

The following table breaks down the frequency of attendance among participants:

Frequency of attendance (in past 12 months) among participating children (Source: ABS)

Frequency of participation in museums and galleries is comparable with that of performing arts among all age groups, whereas visitation to libraries is more frequent (which makes sense given the nature of library use).

Museum and gallery attendance is not uniform throughout the community, however. Children from non-English-speaking countries and those living outside capital cities are less likely to attend. There are also slight variations according to gender and age bracket, as well as differences by state of residence. The state differences may be at least partly explained by the fact that some states have more of their population concentrated in capital cities than others.

Source: Children’s Participation in Cultural and Leisure Activities, Australia, Apr 2012 (ABS)

It’s hard to compare these figures directly to the ABS figures for adults, as museums and galleries are reported separately for adults. (Recent ABS statistics for museum and gallery visiting are reported here.)

The shifting sands of state heritage funding

(N.B. This is another of my blog posts comparing the 2010 and 2011 versions of the ABS report Arts and Culture in Australia: A Statistical Overview. The first one is here.)

In this post, I’m looking at government funding of arts and heritage and comparing it to 2010’s effort. Again it’s a complex and slightly confusing picture, not least because some of the figures reported for the 2008-2009 year do not always tally between the respective reports. For the purpose of this post, I’ve used the figures cited in the 2011 report wherever possible.

First the overall picture of Federal, State and Local government funding:

Comparison of government funding over the 2008-2009 and 2009-2010 years (Source: ABS)

As with my 2010 post, I’ve made funding increases over 6% green, and decreases over 6% red.

Cultural heritage and ‘other’ museums had a significant funding boost in 2009-2010, particularly at the Federal and Local levels. This appears to have been at the expense of large cuts to Federal environmental heritage funding (local government funding of environmental heritage is not provided in the report). Meanwhile, there has been an increase in state funding of art museums, partially offset by a cut to local government funding in this area.

As with last year, there are significant (and inconsistent) year-on-year changes in funding at state level:

State by state breakdown of heritage funding: 2008-2009 & 2009-2010 (Source ABS)

The big increase in Art Museum funding in NSW appears to be a return to ‘usual’ funding levels, since the 2008-2009 amount was a 32% decrease from 2007-2008 (see the table in my previous blog post). The drop in Art Museum funding in Qld is also in the context of a far larger increase the previous year. The ACT has had large funding increases across the board (again balancing cuts from previous years in some instances).

It’s possible that state funding cycles are highly variable when looked at on a year-by-year basis like this, hence the erratic numbers – perhaps comparing three-year averages would give a clearer picture of what’s going on.
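As a quick sketch of that smoothing idea, here’s what a three-year rolling average looks like in pandas. The funding figures below are invented for illustration – they are not from the ABS report:

```python
import pandas as pd

# Invented annual art museum funding for one state ($m) -- illustrative only
funding = pd.Series(
    [42.0, 28.0, 45.0, 30.0, 44.0],
    index=["2005-06", "2006-07", "2007-08", "2008-09", "2009-10"],
)

# Year-on-year percentage changes look erratic...
print(funding.pct_change().round(2))

# ...but a three-year rolling average irons out single-year spikes and cuts.
print(funding.rolling(window=3).mean().round(1))
```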

Another point to note is that while state funding of environmental heritage is relatively static in the aggregate, the individual state breakdowns show some clear winners and losers. I should point out that no states saw cuts to environmental heritage last year, and a couple had reasonably large increases, so the funding picture for environmental heritage may not be as bad as it first appears. However, once Federal funding is taken into consideration as well, it does look like environmental heritage has taken a pretty severe funding blow.

Who’s visiting (now)?

Late in 2010, I wrote a series of posts based on the Australian Bureau of Statistics report: Arts and Culture in Australia: A Statistical Overview, 2010.

It looks like the 2011 version of this report was issued just before Christmas, although I only found out about its release a few days ago. So I thought I’d look at the 2011 report and compare it to the 2010 figures I blogged about previously, to see if there are any interesting changes (or conversely, evidence of stable patterns).

The first post I’m revisiting is Who’s visiting?, which looked at participation rates by age. (‘Participation rate’ is defined as having attended that kind of venue at least once in the previous 12-month period.) It appears the participation rates shown in the 2010 report were based on data from 2005-2006, whereas the 2011 report has more recent figures (2009-2010). So what has happened to participation rates over the past five years?

Firstly, let’s look at the overall participation rates from each year (NB: the ABS report also includes libraries, archives and performing arts, but these are not included in this analysis):

Attendance rates at Australian cultural venues (people aged 15 or over), as a total figure and as a percentage of the population (Source: ABS).

So it appears that participation rates are increasing across the board, albeit modestly (and the report does not say whether this increase is statistically significant or not). This increase appears to be spread across the age ranges:

Attendance rates at cultural venues by age group, comparing 2005-2006 and 2009-2010 (Source: ABS)

So there is no radical change in any particular age group, and participation follows broadly the same pattern in both 2005-2006 and 2009-2010. As in previous years, the report also showed that women are still slightly more likely to be visitors than men. So there is nothing earth-shattering, but perhaps there is something to be quietly optimistic about if the increased participation rates are evidence of a slow and steady trend.

Exhibition Attendance Figures: The Art Newspaper

If you’re wondering just how big those blockbuster crowds are, then the annual summaries published by the Art Newspaper are a great place to start.

These comprehensive lists show both overall and daily attendance figures for (predominantly) art exhibitions around the world, ranging from mega-blockbusters drawing 10,000+ daily visitors to far more modest ventures. There are also top ten lists by category, such as “Decorative Arts”, “Impressionists” or “Antiquities”.

There are several years of data available online, going back to 2003 (links posted below):

That represents a fair data set for spotting longer-term trends or changes over time. Looking at this data, I was interested to see how often exhibitions in Japan top the list, with exhibitions in Tokyo and elsewhere often dominating the top ten. This is in contrast to the most-visited museums overall, which are predominantly the big institutions in London, Paris, New York and Washington.

Looking at the Australian scene, the top exhibition for 2010 was the 6th Asia Pacific Triennial at the Queensland Art Gallery (34th overall). Both the Queensland Art Gallery and its sister institution, GoMA, managed more than one entry in the top 100, as well as additional entries under specific categories. Other Australian exhibitions to make the list were Masterpieces from Paris at the National Gallery of Australia and the Tim Burton exhibition at the Australian Centre for the Moving Image.

The 6th Asia Pacific Triennial attracted some 4,400 people a day. That sounds like quite a crowd, until you compare it to the top exhibition overall – Hasegawa Tohaku at the Tokyo National Museum – which attracted over 12,000 people per day!

Benchmarking Museums: Online & Onsite

If you’re interested in which museum is doing what in social media, then you must check out Museum Analytics.

It describes itself as “an online platform for sharing and discussing information about museums and their audiences”. So far there is data for over 3000 museums, including some of the world’s most famous such as MoMA, the Louvre, Tate, and the Smithsonian. (But it’s not just the big global museum brands – I counted at least 20 Australian museums, ranging from the major institutions to a wide range of small and regional ones).

The site lists the most-visited museum websites (the Metropolitan Museum of Art by a fair margin in 2010, it appears) and the top Facebook likes and Twitter follows. Museums are also individually listed, and you can see what’s happening closer to home – for instance, these are the summaries for Australia and Adelaide respectively. But it’s not just websites and social media – the site also has numbers for onsite visitors (although it is the data about online activity that makes this site stand out).

On the topic of museum statistics, there has recently been quite a lively discussion on the ICOM Group on LinkedIn (list members only but it’s easy to sign up) about the information and statistics collected by governments and other bodies around the world. If you’re interested in comparing and contrasting museum statistics from around the world (or even comparing which data are collected, by whom, and why), then I suggest you sign up.

One resource I was directed to from the ICOM discussion was Culture 24’s project on how to evaluate museum success online. You can download a detailed report on the research project, as well as tools and metrics for evaluating online and social media presence. It’s a must if you’re getting to grips with tools like Google Analytics, or if you’re just wondering how best to track your online presence.