Announcing the 42 artist pairs in November was hugely exciting. We relinquished more control than ever in this process and are genuinely delighted with the collaborations it has led us to, and the potential of their proposals. Seeing it through has been both joyful and nerve-wracking. Below we share our first insights into what we’ve learnt.
In trying something different we hoped to spark conversations and critique that would enrich our learning and open out the issues raised. We’re grateful to everybody who has taken the time to engage with this experiment and share their feedback, and to the writers of the four unique perspectives we are publishing today.
In this blog, we’re sharing our first reflections and the questions we’re still answering. It includes responses from 178 applicants (out of 856 applicant pairs, 1,712 individuals in total) via our feedback survey, which they received a week after the application deadline. While this is a self-selecting sample, it represents around 10% of the individuals who engaged with the fund and provides us with a clear applicants’ voice and insight into their experience to support our evaluation of ‘what worked’.
When we launched the 1:1FUND in September we shared the motivations behind it in a Jerwood in Practice blog, which you can read in the connected Reflections and Resources below. With these in mind, we wanted to explore the role of randomness and collaborative applications, and the possibility of a process which asked for far less information from applicants. As a pilot, we never intended it to replace our standard grant-making approaches – including using external selection panels, providing feedback to all unsuccessful applicants who want it, and employing positive action in our selection processes – but rather to test our beliefs about these, and potentially provide us with some new tools and ways of working.
We were mindful in doing so that each application form would have less human engagement than usual, and so we hoped to reduce the labour of applicants as far as possible to reflect this. The fund was designed with collaboration in mind to counter the isolating impact of Covid-19 on artist practices, but we were also interested to understand how it would feel to apply as a new collaboration, and whether the application process itself might generate conversations about new or untested creative partnerships.
Too many changes at once?
A key learning from this process was that, in our desire to innovate, we may have changed too many things in one go. For the 1:1FUND we:
- Introduced a random number generator as the main tool in the selection process
- Designed a significantly shorter application form, with an emphasis on eligibility questions and Yes/No answers
- Introduced collaborative applications for the first time
- Refined the criteria for the length of practice we can support at Jerwood Arts
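Of these changes, the random draw itself is the simplest to illustrate. The sketch below is purely illustrative and is not the tool we actually used: a uniform lottery over eligible applications, seeded so the draw can be reproduced and audited, can be very small.

```python
import random

def run_lottery(eligible_ids, n_awards, seed=None):
    """Draw n_awards applicant IDs uniformly at random, without replacement.

    A fixed seed makes the draw reproducible, which helps if the
    result ever needs to be re-run or independently checked.
    """
    pool = sorted(eligible_ids)
    if n_awards > len(pool):
        raise ValueError("cannot make more awards than there are eligible applicants")
    rng = random.Random(seed)
    return rng.sample(pool, n_awards)

# Hypothetical figures matching the fund: 856 eligible pairs, 42 awards.
winners = run_lottery(range(1, 857), 42, seed=2021)
print(len(winners))  # 42 distinct pair IDs
```

Because every eligible application has the same probability of selection, the only levers left are the eligibility criteria and who applies in the first place – which is exactly the trade-off discussed below.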
With so many new elements to pilot, and a desire to simplify and innovate simultaneously, the overall process became unhelpfully complicated at times, both for applicants and for us, and made it harder to apply the level of care we would have liked. It has also hampered our evaluation: changing so many core elements at once has made comparisons with our other opportunities largely meaningless. For example, we cannot easily disentangle the factors that may have influenced who applied when we changed how to apply, who could apply, the size of the fund, and the selection process all at once. Nonetheless, the feedback we have received and our internal reflections do allow us to draw some insights and to focus our questions for future opportunities.
What’s the right balance of information required and unpaid labour of applicants in the application process?
Our usual application processes develop more of a relationship with those applying, including offering advice on how artistic concepts and ideas are framed in relation to the funding opportunity. This time our engagement was limited to a small number of applications and was based entirely on eligibility, expressed mainly in applicants’ ‘Yes/No’ answers. In collecting so little information, we had to take what we were given at face value, which left little room for nuance in our decision-making. While we were glad to try something different, it was clear that in some cases we needed more information to make a fair assessment.
There were benefits to a shorter form, such as less unpaid time spent on applying: 76% of applicants spent less than a day, with 35% spending 1-3 hours. This is significantly shorter than any other application form we’ve designed. Our learning here has inspired us to think about two-stage processes, where the first stage requires very little information. It has also reinforced the importance of fully interrogating what questions we ask and why, to balance applicant labour and our ability to make fair and rigorous decisions.
Equality, Access and Inclusion: Can randomness remove some of the barriers to applying for funding for groups currently under-represented in who applies to us? How might random selection affect representation, access and diversity of who is selected?
The 1:1FUND had a higher number of applicants (856 pairs, 1712 individuals) and selected individuals (42 pairs, 84 individuals) than any other opportunity we offered in 2020/21. First, some statistics from the feedback survey: 81% of respondents were applying to Jerwood Arts for the first time, a higher proportion than usual. We were reassured that 95% knew the selection would involve a random number generator, 94% felt our guidance was good or very good, and 89% felt the language used was clear and easy to comprehend. While more than half (54%) said that the random number generator had made them ‘more’, or ‘a lot more’ interested in applying, almost a quarter (24%) said it made no difference at all, and a sizeable minority of 20% said it made them ‘less’, or ‘a lot less’ interested in applying. Our experience was that the random selection did divide opinion, with those less keen more vocal and passionate about their reservations than the majority who were in favour.
In terms of diversity, the applicant pool was broadly in line with the average demographics of Jerwood Arts applicants over the past 18 months, although individual funds have varied significantly within that. The random selection maintained the diversity of the fund’s applicant pool. Given the nature of randomness, this was not a given: there is no guarantee that a random selection will match or improve on the diversity of the total applicant pool, and an ‘extreme outcome’ was just as likely, which would have been a very uncomfortable result. All the advice is that a random selection process would have to run regularly over time to begin to generate representative results. Nonetheless, the selection can only ever reflect the diversity of the applicants, and we therefore put significant effort into ensuring a wide range of artists, curators and producers were aware of the opportunity.
- Geography – 41% of applicants were based in London, lower than the Jerwood Arts average of 45%. Random selection increased this to 48%.
- Age – 64% of applicants were under the age of 34, exactly average for Jerwood Arts. Random selection maintained this at 63%.
- Ethnicity – 28% of applicants identified as from an African, Caribbean, LatinX, South Asian or East Asian background. Random selection maintained this at 29%. The Jerwood Arts average is 33% for the applicant pool and 41% in the selection.
- Sex, Gender and Sexual orientation categories were consistent with Jerwood Arts averages across both the applicant pool and random selection.
- Disability – 18% of applicants identified as disabled, in line with the Jerwood Arts average. Random selection was 17%; the Jerwood Arts average across our awards is 23%.
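The risk of an ‘extreme outcome’ from a single draw can be made concrete with a quick simulation. The sketch below uses figures rounded from those above (roughly 28% of 856 applicant pairs, 42 awards) and is an illustration rather than part of our actual process: it repeats the lottery many times and records how widely a subgroup’s share of the selection can swing.

```python
import random

def simulate_share(pool_size, subgroup_size, n_draws, trials, seed=0):
    """Repeat a lottery `trials` times and return the lowest and highest
    share of the draw taken by a subgroup of the applicant pool."""
    rng = random.Random(seed)
    # 1 marks a subgroup member, 0 everyone else
    pool = [1] * subgroup_size + [0] * (pool_size - subgroup_size)
    shares = []
    for _ in range(trials):
        picked = rng.sample(pool, n_draws)  # one lottery, without replacement
        shares.append(sum(picked) / n_draws)
    return min(shares), max(shares)

# Roughly 28% of 856 pairs (about 240), with 42 awards per draw
lo, hi = simulate_share(856, 240, 42, trials=5000)
print(lo, hi)  # single draws can land well below and well above 28%
```

With only 42 awards per draw, individual lotteries scatter widely around the pool’s 28% – which is why repeated draws over time, not a single one, are needed before the results start to look representative.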
You can also find out more about the data from the 1:1FUND and Jerwood Arts’ other opportunities over the last 18 months.
What can we learn about bias, experience and expertise in our selection processes from trying to remove them?
The selected artist pairs are almost all entirely new to us. This is, without doubt, the most exciting positive of the pilot: it has brought us into contact with artists and practices we did not previously know, and raised important questions about the role of network bias in our decision-making. Because selection was random and awards were based on eligibility alone, artists whose work we were already familiar with could not be prioritised. The experience made us think more deeply about where such bias shows up in our wider assessment processes. Even where we use external assessors with distinct expertise and knowledge bases within the arts, our experience from other selection processes is that there is often a tendency to vouch for those whose work is more visible or familiar, rather than those being encountered for the first time in an application. Removing these qualitative comparisons, and the biases within them, was one of the most interesting advantages of the process, and it opened up further important questions about our usual approach that we will take forward.
How would the focus on collaboration and request for both parties to apply together feel to applicants?
This fund was unusual in that it invited artists to apply with a collaborator, and not as an individual practitioner. We were interested to understand how the extra ‘step’ of identifying a collaborator was received.
Applying with a collaborator made 65% of applicants more confident to apply, and only 3% less so. About half of the collaborative pairs (52%) had made professional artistic work together previously, and regardless of the outcome of their application, 87% said they were likely or very likely to work together in the future. Only 20% had met during the past year of the pandemic, and 48% had known each other for between 3 and 10 years. We know that strong networks and collaborative partnerships are more commonly a feature of more privileged and established artists, so while many responded very positively to this joint opportunity, we did not fully consider how it might disadvantage those with fewer opportunities to connect, something we would want to address in the future.
In running this pilot we have achieved what we set out to do, taking us outside our comfort zone and teaching us a huge amount. We have identified 84 brilliant artists working together across art forms and all over the UK, whom we might not otherwise have found. It has given us the confidence to change our processes more radically, while continually listening to artists about where the systems are flawed and where they are working. We understood from the outset that this would be an imperfect process, but in running it we have learned more about the possibilities of random selection. It has also made us consider where this deceptively simple tool might come into play for us in the future, in a way that enables us to maintain our connection to applicants and leaves space for nuance and care in our application and selection processes.
Lilli Geissendorfer, Jon Opie and Sarah Gibbon
Below are some of the other sources we took inspiration and research from to inform the 1:1 FUND.
Random selection is used in science funding across the US and Europe. For example, Switzerland’s national science funder awards grants by lottery to eliminate bias between proposals of similar quality, and the Volkswagen Fund has been running a partially randomised process for a number of years as part of a programme to experiment with funding approaches. In the US, the case for random selection in science funding was powerfully made in an influential paper by Fang and Casadevall (2016), who argued that funding rates at both the National Science Foundation and the National Institutes of Health (NIH) were now so low that it was no longer appropriate to allocate funds by ranking applications through peer review, a process which biases selection towards more experienced, well-connected, male applicants.
The idea in science funding is that new breakthroughs in research (akin to identifying and nurturing artistic talent?) are unlikely to be discovered through peer review and ‘informed, rational choices’ alone. This is because scientists recognise that knowledge is always partial, and that ‘rational choice theory’ is deeply flawed. Introducing some element of randomness therefore gives a greater chance of successful new discoveries.
Random selection and democracy:
The Malcolm Gladwell podcast cited by Kelly Best and Georgie Grace in their commissioned text provided another rich vein of research. In other podcasts and interviews, including this Big Think YouTube video, he talks about his investigation into the use of lotteries in student elections and how they suggest a potentially revolutionary way forward for democracy. Under this approach, anyone can stand for election and representatives are picked from a proverbial hat; it makes clear that the qualities needed to campaign and win elections are very different from those needed to govern fairly. These ideas underpin the Democracy in Practice project and the non-profit OfByFor.
Academic research on social mobility:
Work by the academic Lee Major, Professor of Social Mobility at Exeter University in the UK, on the potential power of lotteries to create fairer outcomes and combat meritocracy was also a rich source of inspiration. Among other ideas, he advocates lottery approaches to allocating university places to reduce educational inequalities. Academics in the US such as Peter Stone have looked at the role of lotteries in good and bad decision-making, and Natasha Warikoo advocates lotteries for admission to elite institutions such as Harvard.