What happened with "What Happened Next?" survey | An Eventful Life

You may remember that at the start of the year we ran a “What Happens Next?” survey featuring 20 eventing video clips for you to guess the outcome.

The survey forms part of my PhD study, co-sponsored by Nottingham University and An Eventful Life, and we thought we'd give an update on the data we've collected and what we plan to do with it next.

It is important to note that this is raw data – it gives an idea of this specific sample of the population, but detailed statistical tests are still ongoing.

We had a total of 2631 complete responses between 23rd January and 28th February 2020. There was an excellent spread of ages, from 18-year-olds to over-65s taking part. Unsurprisingly, the overwhelming majority (94%) were female.

We had responses from all over the world, with a large proportion of the responses coming from the US, the UK, and Australia. Around 25% were employed in the equestrian industry in some way, with the most common professions being instructors, trainers, or coaches. Riders, veterinary technicians and yard managers were also prevalent.


Almost all of the respondents were horse-riders, or had regularly ridden a horse in the past, but we did get 99 participants who had never ridden a horse before. Of the 2532 riders, 74% had evented at some level and we even had some 5* riders take part.

The majority of respondents said that the highest level they had competed to was 110cm (BE Novice). Perhaps this is due to the limited number of venues offering tracks above Novice level? I’d be interested to hear any thoughts on this.

The highest score out of all the participants was 18/20, achieved by an Australian BE90 rider – so whoever you were, well done! Most people scored between 5 and 11 out of 20, with the mean being 8.3/20.


A lot of comments were made regarding how difficult it was to distinguish between a refusal and a run-out, so even if you had correctly guessed that the horse would stop, you may have picked the wrong “type” of stop. The difficult thing about research is that once you have collected the data, you cannot go back and collect more detail on that same set of data, so it was important that we collected enough detail in the first place. This is why we gave you all five options – clear, refusal, run-out, rider fall, and horse fall.

If you hadn’t already realised, there were 10 clear video clips, and 10 not-clear video clips. So we have grouped the refusals, run-outs, and falls together as “not clear”. This means that even if you only got 5/20, you may actually have scored much higher when the options are grouped into clear or not clear. My apologies if this frustrated you, but this was research first, and entertainment second, so it was necessary to provide you with sufficient options so that the data was as detailed as possible.
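For anyone curious about how that regrouping works in practice, here is a minimal sketch in Python. The clip outcomes and the respondent's answers below are invented purely for illustration – they are not real survey data.

```python
# Invented example data: the true outcome of each clip, and one
# respondent's guesses, using the survey's five answer options.
ACTUAL_OUTCOMES = ["clear", "refusal", "run-out", "clear", "horse fall"]
GUESSES = ["clear", "run-out", "refusal", "clear", "rider fall"]

def to_binary(outcome: str) -> str:
    """Collapse refusals, run-outs, and falls into a single 'not clear' group."""
    return "clear" if outcome == "clear" else "not clear"

# Exact five-option score: the specific "type" of stop or fall must match.
exact_score = sum(g == a for g, a in zip(GUESSES, ACTUAL_OUTCOMES))

# Grouped score: only clear vs not-clear needs to match.
grouped_score = sum(to_binary(g) == to_binary(a) for g, a in zip(GUESSES, ACTUAL_OUTCOMES))

print(exact_score, grouped_score)  # prints: 2 5
```

Because every exact match is also a grouped match, the grouped score can never be lower than the exact score – which is why some of you may have scored much higher than you thought.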

Another comment was that the task was unnecessarily difficult because the clips were cut too early. However, the purpose of this research was to see whether there are specific groups of people who are better at predicting the outcome than others – because if there are, then we can learn from them.

The point at which the clips were cut was based on data we collected in a pilot study. It needed to allow you to see enough of the clip to make an informed decision, but not enough that everyone would get 20/20. Making it too easy would have been really damaging to the study and, having seen the spread of the results, I think we pitched this at just the right level.

There were some discrepancies in the data; for example, 1865 people said they had evented at some level, but when asked which disciplines they had taken part in, only 1639 people ticked the box for eventing. This is typical of survey data, and highlights the importance of clear and concise questions. It is possible that some respondents had evented once or twice, but didn’t consider themselves to have “taken part in the sport of eventing”, and therefore didn’t tick the box. We’ll never know for sure, but we will learn from this for next time!


Another small issue with the data is that it is quite unbalanced. There were unquestionably more females taking part than any other gender, and we only had a relatively small number of non-riders. This is the nature of survey data – it is not particularly precise, so these results should be taken to apply only to the sample we had available.

The aim of this study was to see whether this SPECIFIC collection of people could predict the outcome of this SPECIFIC collection of video clips. If we find that they could, then we can start working out HOW they did it, and whether we can apply the findings to other groups of people, other video clips, and other scenarios. This is only a small part of a much larger project, and we hope to bring you further findings over the next couple of years.


Thank you to everyone who took part – I hope you enjoyed it!