Coalition for Agility Reporting on Incidents and Safety (CARIS)
- Arielle Pechette Markley, DVM, DACVSMR
- Mar 28
Updated: Mar 30

This initiative is designed to provide the first worldwide collection of evidence to aid organizations in understanding the risks associated with the sport of dog agility in its various formats. The starting point for this first research project of its type is to collect data on contact obstacle performance, namely the A-frame, dog walk, and teeter/see-saw.
Motivation for the Project
In our last blog post we reviewed all of the current research data about obstacle incidents. The primary conclusion is that we don’t know what the true incident rate is for different agility obstacles. Having a better estimate of the true risk, both of any fall and of a fall-related injury, is a critical piece of decision making about any future risk mitigation strategies.
The need for current and comprehensive data has prompted an unprecedented collaboration between agility organizations worldwide. Currently the following organizations are committed to collaborative data collection on agility incidents: American Kennel Club (AKC), Australian Shepherd Club of America (ASCA), Canine Performance Events (CPE), Fédération Cynologique Internationale (FCI), North American Dog Agility Council (NADAC), Royal Kennel Club (RKC), UK Agility International (UKI), United States Dog Agility Association (USDAA). This is the first time in history that all of these organizations have come together to pursue an evidence-based approach to agility safety. Our Canine Sports Science Consortium (CSSC) research team was recruited for assistance with data collection and analysis and has been working tirelessly with the organizations to ensure the best data collection possible. We recognize that this collaboration presents a rare opportunity to collect an immense amount of data and care must be taken to ensure the data is valid and usable for organizations to be able to make appropriate decisions. This blog post will review the background on the project, decision-making processes, what data will be collected, how it will be collected, and how data will be analyzed, for those of you interested in an inside look into the future of agility research.
The initial collaboration was instigated by increasing public concern about dog walk safety. With many competitors understandably voicing concern over dog walk falls, because such falls are dramatic and can lead to serious injury, there has been much discussion about how to make the dog walk safer. Suggestions include changing the dimensions of the dog walk (wider, lower, or both). However, a crucial piece of information is missing from the discussion about dog walk risk and the potential impact of any of the proposed changes: how many dog walk falls are there currently? We need that information both to assess the current risk in context and to determine whether a change to the dog walk actually decreases the risk of falling (see our first blog post).
Without this information, let’s say we widen the dog walk and discover that there are about 5 falls per 100 dog walk performances. Well, is that lower than before? Is it the same? Or did we make it worse?! The very last thing that anyone wants is for organizations to make an un-researched change and see an increase in accidents and injuries. In multiple sports, and across multiple species, there are examples of seemingly “obvious” changes made in the name of increased safety that have had unanticipated effects leading to increased risk. Luckily, the agility organizations all agreed that more data was needed before making any major changes, and that the starting point would be to determine how many dog walk falls there are currently in competition.
The initial plan was to collect data just on dog walk falls during competition. However, collecting data only on dog walk falls provides no perspective on how dog walk incidents compare to incidents on other obstacles. We discussed which incidents should be recorded: all incidents on the agility course? Just obstacles? Which obstacles? While our CSSC team is involved in data collection and analysis, the real burden of this research project falls on the judges. Having judges remember and record any slip on the course, any crashed jump, any delayed tunnel exit, would be a huge ask and would interfere with their ability to do their primary job. Therefore, it was decided that we would collect data only on incidents related to the three contact obstacles (dog walk, A-frame, teeter/see-saw). Because incidents with these obstacles occur (we think) fairly infrequently, the judges won’t have as much to record, and the incidents associated with these obstacles tend to be more noticeable, making reporting more objective, particularly in the absence of video review.
Defining incidents
In order to standardize data collection, we are asking judges to report on any instances when the dog has an “unexpected exit” from a contact obstacle. The types of unexpected exits we anticipate include the dog jumping or falling prior to obstacle exit, the dog face planting or stumbling badly on exit from the obstacle, or the dog missing the contact zone and exiting from such a substantial height above the contact zone so as to be classified as a “fall.”
Of note, typically “missed contacts,” where the dog misses the contact area but otherwise appears in control of its body on exit, will not count as “unexpected exits.”
Any time an “unexpected exit” occurs, we will ask the judge to complete a short survey as soon as possible after the incident. This survey will include a description of the incident, very basic information about the dog and context (jump height, type of course, etc). It also asks the judge to select from a list of possible factors that they believe may have contributed to the incident.
Calculating incident rate
Collecting data on every incident is only part of what is needed to calculate the true incident rate. In order to calculate a rate, we also need to know how many attempts were made in total. For this, we ask judges (or trial hosts/secretaries) to complete an end-of-trial survey that asks for the total number of runs with each of the three contact obstacles.
Once we have this data, we can calculate the obstacle incident event rate for each trial as the number of reported events divided by the total number of runs at that trial.
Note here that on standard agility courses, the dog is almost always expected to complete each contact obstacle once on the numbered course. Therefore, for these courses, if we know there were 3 A-frame incidents on a standard course over a two-day trial, and a total of 195 standard agility runs, the incident rate would be 3 per 195 runs, which is also likely pretty close to 3 per 195 A-frame attempts (about 1.5 per 100 attempts).
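For readers who like to see the arithmetic spelled out, the example above can be sketched in a few lines of Python (the counts are the hypothetical figures from the text, not real data):

```python
def incident_rate_per_100(incidents: int, runs: int) -> float:
    """Per-trial incident rate, expressed per 100 runs. On a standard course
    this is roughly the rate per 100 attempts, since each contact obstacle
    usually appears exactly once per run."""
    if runs <= 0:
        raise ValueError("runs must be positive")
    return 100 * incidents / runs

# Hypothetical example from the text: 3 A-frame incidents in 195 standard runs.
print(round(incident_rate_per_100(3, 195), 2))  # prints 1.54
```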
Most of our participating organizations also offer games classes, where part or all of the course may be selected by the handler to accumulate points, rather than completing a pre-numbered course.
For games classes, if a contact obstacle is available, it may be attempted a variable number of times by different competitors, depending on their individual strategies. Thus, if we know there were 2 teeter incidents in 105 games runs, we cannot know how many actual teeter attempts there were. However, we elected to collect information about contact obstacle incidents in games classes as well, because if the obstacle incident rate per run is dramatically different between the standard numbered course and games that allow contacts, this might provide some insight as to factors that influence falls.
With these two pieces of information – data on each unexpected exit from a contact in trial, and the number of runs at each trial, we will have excellent data for estimating the true trial incident rate for each of the three contact obstacles to be studied.
One potential limitation is the possibility that judges will not report every incident, or that judges/trial hosts may not complete end-of-trial surveys for trials where no incidents occurred. Both possibilities would bias the estimate of the true event rate: unreported incidents would push the estimate down, while missing surveys from incident-free trials would push it up. Our hope is that the agility community will embrace the scientific discovery process and support judges reporting incidents for accurate data, even if it leads to brief trial delays.
Given that we think incidents are relatively rare, we elected to attempt data collection from all judges and all trials, such that we will have better statistical information both about the overall rate (which is a challenge to estimate if it is low) and about the variability of the rate in different situations, which will provide preliminary information about potential risk factors.
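To illustrate why a low rate is hard to estimate without a very large number of runs, here is a rough sketch using a standard Wilson score confidence interval. The 0.5% rate and the run counts are made-up numbers, and this particular interval is just one common choice for rare proportions, not necessarily what the study's final analysis will use:

```python
import math

def wilson_interval(events: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a proportion (events / n)."""
    p = events / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical: the same 0.5% fall rate observed over more and more runs.
# The interval shrinks dramatically as the number of runs grows, which is
# why collecting data from all judges and all trials matters.
for n in (1_000, 10_000, 100_000):
    lo, hi = wilson_interval(round(0.005 * n), n)
    print(f"n={n:>7}: 95% CI {100 * lo:.2f}% to {100 * hi:.2f}%")
```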

Preliminary risk factor data
We would love to say that this data collection effort will also lead to clarity around factors that lead to falls, particularly the ones that are most likely to injure dogs. However, scientific evaluation of risk factors is quite complex, particularly when the causes of obstacle falls are likely multifactorial.
One major challenge when evaluating possible risk factors is the need to compare incidence rates, rather than comparing the absolute numbers of incidents. This is challenging because in order to accurately estimate rates, we need both the numerator (the number of incidents) and the denominator (the number of runs). So if we wanted to compare the rate of dog walk incidents when the dog walk was in the first half of the course to when it was in the second half, we would need to collect this information not only from dog walk fall incidents but also from all courses. The same is true for looking at factors such as breed, handler experience, handling miscommunications, surface, weather, etc. Collecting this information would be possible with dedicated research staff attending trials to collect it (or collecting it from video review). However, this first project is relying on judges and trial hosts to contribute their time to data collection, and as such, we wanted to streamline data collection as much as possible.
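As a toy illustration of why the denominator matters (all counts below are made up), two conditions can produce identical incident counts yet very different per-run rates:

```python
def per_run_rate(events: int, runs: int) -> float:
    """Incidents per run for one condition."""
    return events / runs

# Made-up counts: 12 dog walk incidents in each condition, but different
# numbers of runs in which the dog walk appeared in that position.
first_half = per_run_rate(12, 4_000)   # dog walk in the first half of the course
second_half = per_run_rate(12, 2_000)  # dog walk in the second half
print(second_half / first_half)  # prints 2.0: same counts, double the rate
```

Counting incidents alone would make the two conditions look equally risky; only with the run totals does the difference appear.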
We are asking judges / trial hosts to voluntarily provide additional information on the trial site (surface, obstacle colors, indoor/outdoor location) as these are factors that have been hypothesized to impact incident rates and are relatively easy to collect for both incidents and non-incidents. Likewise, as noted above, we will be able to compare incidence rates between standard agility classes and games classes (on a per run basis).
However, we want to be transparent that several hypothesized risk factors, such as course design, dog speed, handling strategies, and dog experience level, will not be comprehensively evaluated in this study. As noted above, we will collect information on each incident about factors the judge felt may have contributed to the unexpected exit, but these factors will serve descriptive purposes and help identify candidates for future studies, rather than support definitive conclusions.
Next Steps
Data collection will start on April 10th and continue for 6 months for all organizations. Following data collection, the incident rates for each of the three obstacles will be calculated, split by games vs. standard class. Assuming good participation with our voluntary questions, we will also report incident rates for each obstacle by surface and obstacle color. Descriptive statistics for the incidents on each obstacle will also be calculated.
Once all of the data have been analyzed, we, along with the organizations, will make a public announcement about the results. The results will also be submitted for publication in a peer-reviewed journal.
Funding
In an effort to maintain research independence and prevent conflicts of interest, our research team has elected not to accept funding from any agility organization for this project. Currently we are donating our time and resources because we feel that this project is extremely important. However, a project of this scale, with the amount of data that will be collected, requires significant time and resources. We are accepting donations from the public to help support this project and our students/staff involved in data collection, processing, and analysis. For information and to donate, please click here.
More information about our research team can be found here.
For those of you who are interested in a handler-reporting option for data collection on incidents, stay tuned for our next announcement!