# Clinical risk: How to determine the clinical utility of pre-conceived injury risk factors

9th March, 2017

### HAVE YOU CONSIDERED THE CLINICAL UTILITY OF YOUR RISK FACTORS?

An abundance of research has investigated injury risk factors in elite settings. To date, most of the literature has reported the relative risk, risk ratio, or odds ratio compared to a pre-defined reference group. However, the clinical utility of these pre-conceived risk factors is not often considered in absolute terms against the cohort’s baseline risk. Let us consider a figurative example: you may be at 3 times the risk of injury when exposed to an acute:chronic workload ratio (ACWR) > 1.5 compared to someone with an ACWR of 0.8–1.5. This “3 times” is what is called a relative risk: the ratio of the risks of two groups (i.e. high v low ACWR groups). However, to truly interpret the severity of a risk factor we need to know the baseline risk (also defined as the base rate^{1} or pre-test probability) prior to a high-risk exposure. The baseline risk is calculated as the total number of injuries divided by the total training exposures for the cohort in question. Essentially, baseline risk equates to the risk associated with simply playing the sport.

Following a high-risk exposure, if the increase in injury probability is small, or even reduced, the risk factor may not be clinically useful. Conversely, if we see a large increase, then a risk factor may be deemed clinically useful, independent of relative comparisons. To calculate what we will term “clinical risk”, you subtract the baseline risk (pre-test probability) from the injury probability associated with exposure to the high-risk range (post-test probability). See Figure 1 for a graphical illustration of the clinical risk for those exposed to an ACWR > 1.50 (dummy data presented).
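The distinction between the two perspectives can be sketched in a few lines of Python. All counts below are invented dummy values (not data from any study), chosen so that the high-ACWR group shows a threefold relative risk:

```python
# All counts below are invented dummy values, not data from any study.

def risk(injuries, exposures):
    """Simple injury probability: injuries per training exposure."""
    return injuries / exposures

high = risk(injuries=6, exposures=200)       # exposures at ACWR > 1.5
reference = risk(injuries=2, exposures=200)  # exposures at ACWR 0.8-1.5
baseline = risk(injuries=8, exposures=400)   # whole cohort: the cost of playing

relative_risk = high / reference         # 3.0, i.e. "3 times the risk"
clinical_risk = (high - baseline) * 100  # rise over baseline, in percentage points

print(f"relative risk: {relative_risk:.1f}x")
print(f"clinical risk: +{clinical_risk:.1f} percentage points")
```

Despite the threefold relative risk, exposure raises the absolute injury probability by only one percentage point over baseline in this made-up cohort, which is exactly the question clinical risk asks.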

### MAKING IN-ROADS: A LITERATURE EXAMPLE

An insightful paper by Ruddy and colleagues^{2} provided a primary example of such an approach. Here, the clinical utility of distance covered above 24 km/h in one week to “predict” hamstring strain injury in the subsequent week was assessed. To summarise their approach to clinical utility:

- Workload cut points (where sensitivity and specificity were maximised) were derived using receiver operating characteristic (ROC) curves. In this study, >653 m of distance covered above 24 km/h was determined as the cut point at which injury risk increased.

- Baseline risk (pre-test probability) = total no. of injuries / total exposures
- Pre-test odds = pre-test probability / (1 - pre-test probability)
- Positive likelihood ratio = sensitivity / (1 - specificity)
- Post-test odds = pre-test odds × positive likelihood ratio
- Post-test probability = post-test odds / (post-test odds + 1)
- Clinical risk [%] = (post-test probability - pre-test probability) × 100
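The steps above chain together naturally, so they can be expressed as a single function. The injury count, exposure count, sensitivity, and specificity in the example call are dummy values for illustration, not figures from the study:

```python
def clinical_risk(n_injuries, n_exposures, sensitivity, specificity):
    """Chain the steps above into one calculation (returns a percentage)."""
    pre_test_prob = n_injuries / n_exposures
    pre_test_odds = pre_test_prob / (1 - pre_test_prob)
    lr_positive = sensitivity / (1 - specificity)
    post_test_odds = pre_test_odds * lr_positive
    post_test_prob = post_test_odds / (post_test_odds + 1)
    return (post_test_prob - pre_test_prob) * 100

# Dummy inputs: 10 injuries across 1000 weekly exposures, with sensitivity
# and specificity taken from a hypothetical ROC cut point.
print(round(clinical_risk(10, 1000, sensitivity=0.70, specificity=0.80), 2))  # 2.41
```

In this hypothetical, exposure above the cut point lifts the injury probability from 1% to roughly 3.4%, a clinical risk of about 2.4 percentage points.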

Bahr^{3} explains this well in simple terms: “…the critical question is where the cut-off value separating high-risk and low-risk groups should be set. Sensitivity and specificity are inversely related. This means that if you want to capture all injured players (100% sensitivity), specificity suffers (more uninjured athletes will be classified as having high risk).” In this context, sensitivity refers to the proportion of injured players who covered >653 m above 24 km/h in the preceding week, whilst specificity refers to the proportion of uninjured players who covered <653 m.
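As a concrete illustration of that trade-off, sensitivity and specificity at a single cut point come straight from a 2×2 table. All counts below are placeholders invented for this sketch, not the study’s data:

```python
# Hypothetical 2x2 table at the >653 m cut point (all counts invented).
true_pos = 8     # injured and covered >653 m above 24 km/h
false_neg = 4    # injured but covered <=653 m (missed by the cut point)
true_neg = 700   # uninjured and covered <=653 m
false_pos = 300  # uninjured but flagged as high risk (>653 m)

sensitivity = true_pos / (true_pos + false_neg)  # share of injured players captured
specificity = true_neg / (true_neg + false_pos)  # share of uninjured players cleared

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

Lowering the cut point moves athletes from the bottom row to the top: more injured players are captured (higher sensitivity) but more uninjured players are flagged (lower specificity), which is the trade-off described in the quote above.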

### THE SKILL OF THE PRACTITIONER: DETERMINING ACCEPTABLE RISK

So, when do you act based on the clinical risk? The answer lies in what you deem an acceptable risk. As described by Charlton and colleagues^{4}: “…acceptable risk is that which the athlete, coach and practitioner are all willing to bear and is context specific.” Hence, there is no straightforward answer around actionable values; the skill of the practitioner lies in weighing the situational factors and context when deciding whether an athlete’s training should be modified or withheld.

### BE WARY OF RELATIVE COMPARISONS

As a practitioner, it is critical to view injury risk from both a relative and a clinical perspective to ensure decisions around training and/or game modifications are ultimately lowering the risk compared to that associated with participating in the sport! If you are interested in hearing more about clinical risk and the statistical analysis required to facilitate this calculation, do not hesitate to contact us at support@smartabase.com.

### A FREE WORKING EXAMPLE

To allow you to further explore the clinical utility of your own pre-conceived risk factors, CLICK HERE for your **FREE** Excel worksheet.

*Note:* To calculate sensitivity and specificity you will need a statistical package (R, SPSS, Stata, SAS, etc.) to run a ROC analysis on your pre-conceived risk factors.
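For a quick check without a full statistical package, a ROC cut point can also be found in plain Python by maximising Youden’s J (sensitivity + specificity − 1), one common way of choosing the point where sensitivity and specificity are jointly maximised. The workload and injury vectors below are dummy data:

```python
# Minimal ROC cut-point sketch: pick the threshold maximising Youden's J.
# `workloads` and `injured` below are invented dummy data.

def youden_cutpoint(workloads, injured):
    best_j, best_cut = -1.0, None
    for cut in sorted(set(workloads)):
        # Classify "high risk" as workload strictly above the candidate cut.
        tp = sum(1 for w, i in zip(workloads, injured) if i and w > cut)
        fn = sum(1 for w, i in zip(workloads, injured) if i and w <= cut)
        tn = sum(1 for w, i in zip(workloads, injured) if not i and w <= cut)
        fp = sum(1 for w, i in zip(workloads, injured) if not i and w > cut)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

workloads = [120, 300, 480, 650, 700, 820, 900, 1010]  # metres above speed threshold
injured   = [0,   0,   0,   0,   1,   0,   1,   1]     # injury next week? (0/1)
print(youden_cutpoint(workloads, injured))
```

On real data you would still want a proper ROC analysis (with confidence intervals) from one of the packages listed above; this sketch only shows the underlying idea.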

*Sheet 1:* Clinical Risk Calculator. Recreated from Ruddy and colleagues^{2} (figure 3). Substitute your values in the required input section. The output shows pre-test probability, post-test probability, and the subsequent clinical risk.

*Sheet 2:* Practical Example: an acute:chronic workload ratio example. Dummy data is presented here. Simple injury probability and relative risk values are displayed to illustrate the difference between relative risk and clinical risk. Play around with the "Required Input" (no. of injuries and exposures,

### REFERENCES

1. Dyk N, Bakken A, Targett S, Bahr R. There are many good reasons to screen your athletes. *Aspetar Sports Medicine Journal* 2017;6. Available online at: http://www.aspetar.com/journal/viewarticle.aspx?id=351#.WL_SDRKGMWo
2. Ruddy JD, Pollard CW, Timmins RG, et al. Running exposure is associated with the risk of hamstring strain injury in elite Australian footballers. *Br J Sports Med.* Published Online First: 24 November 2016. doi: 10.1136/bjsports-2016-096777
3. Bahr R. Why screening tests to predict injury do not work—and probably never will…: a critical review. *Br J Sports Med* 2016;50:776-780.
4. Charlton PC, Ilott D, Borgeaud R, Drew MK. Risky business: an example of what training load data can add to shared decision making in determining ‘acceptable risk’. *J Sci Med Sport* 2016. Published Online First: 24 October 2016. doi: 10.1016/j.jsams.2016.10.006
- Header image: The Herald Sun

*By Marcus Colby, PhD Candidate at The University of Western Australia*
