Published on February 23, 2017
Traffic light systems are one of the most common reporting methods we see at Fusion Sport. When designed well, they can communicate complex information to coaches who lack statistics training.
Traffic light systems aim to help coaches and practitioners make decisions. But do these methods work? In a recently published invited review [1], Sam Robertson, Jon Bartlett and Paul Gastin explore this question in detail.
This blog post will attempt to summarise some of their key findings, although I strongly encourage you to read their original article in full.
RED, AMBER, GREEN
In a typical traffic light system (formally referred to in the literature as a type of “decision support system”), colour coding is used to indicate the health status of an athlete across a range of performance metrics.
These metrics may cover a range of areas such as physiological, medical, performance, availability and wellness. Generally, for each relevant metric, an athlete will be coloured either Red, Amber or Green. These colours mean something along the lines of “alarm – action required”, “caution – monitor” and “continue as normal”, respectively [1].
A coach/practitioner can use the information created by a traffic light system to help make decisions about an athlete’s training program. The goal is to prevent an athlete from getting injured; to ensure that the athlete is at peak performance; and/or to assess the effectiveness of a training modality [2].
Figure 1: A typical traffic light system shows the status of an athlete along various measures in Red, Amber or Green.
HOW VALID ARE THEY?
Assessing the effectiveness of a traffic light system is not trivial.
While we could compare the ‘performance’ of our athletes before and after implementing such a system, the reality is that performance as a measure has been particularly hard to define in a team sport context [3]. So maybe we could use the difference in injury incidence as a measure of the quality of a traffic light system? Unfortunately, from a statistical standpoint, injuries occur so infrequently that they make a poor measure for assessing the outcome of the system [1].
But it’s not all doom and gloom! Given the sheer magnitude of data currently collected by sporting organisations, not to mention the rise of data mining [4], many of these problems can be mitigated with the right approach.
Below are four key steps to designing traffic light systems, as identified by Robertson et al. [1]
FOUR STEPS TO DESIGNING A GOOD TRAFFIC LIGHT SYSTEM
Step 1: You don’t have to include ALL the data
With the amount of data collected in modern sport, it is important to choose carefully which data to include. Not only is there a lot of data, but much of it measures similar things.
Reducing the amount of data you collect (applying Occam’s razor to your data [5]) will make data collection more efficient; data audits and cost-benefit analyses of your data collection processes will help in this regard [1].
While continuously recording GPS, HR and nutrition data for every athlete might yield rich data sets, the cost of collecting, processing, analysing and storing that data may outweigh the benefit of presenting it to coaches in the first place.
Step 2: Decide beforehand how to analyse and process the data
Given the complexity of an athlete’s response to various training inputs, care must be taken with how each data set is processed. While raw, unprocessed data is generally preferred [1], it can make comparisons between athletes difficult.
One recommendation is to utilise relative measures that account for individual variation; e.g. percentage changes, changes from baseline, or z-scores. Unfortunately, some of these relative approaches can be hard to explain to coaches [1] and may require a bit of statistical training. So, make sure you understand how and why to use them.
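As a minimal sketch of one such relative measure, the snippet below standardises a new observation against an athlete’s own recent history as a z-score. The wellness numbers are invented for illustration:

```python
from statistics import mean, stdev

def z_score(value, baseline):
    """Standardise a new value against an athlete's own historical baseline."""
    return (value - mean(baseline)) / stdev(baseline)

# Hypothetical two weeks of wellness scores (1-10) for one athlete
baseline = [7, 8, 7, 6, 8, 7, 7, 8, 6, 7]
print(round(z_score(6, baseline), 2))  # negative = below this athlete's norm
```

Because the baseline is the athlete’s own history, the same raw score can produce very different z-scores for different athletes, which is exactly what makes the measure comparable across the squad.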
Step 3: Decide how to interpret the outcome
Choosing how to define a meaningful change is particularly important. A meaningful change helps to identify a response that is outside a measure’s expected variation. It is a response that we either need to note and monitor or act upon. As indicated by Robertson et al.1, there are many methods for determining this in the literature.
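One widely used heuristic in the sports science literature (one of several approaches, and not necessarily the one your organisation should adopt) is the “smallest worthwhile change”, often taken as 0.2 × the between-athlete standard deviation. A minimal sketch, with invented squad values:

```python
from statistics import stdev

def smallest_worthwhile_change(squad_values, factor=0.2):
    """Heuristic: 0.2 x the between-athlete SD defines the smallest change
    worth paying attention to. The 0.2 factor is a convention, not a law."""
    return factor * stdev(squad_values)

# Hypothetical countermovement-jump heights (cm) across a squad
squad = [38.5, 41.0, 36.2, 44.1, 39.8, 42.3, 37.7]
swc = smallest_worthwhile_change(squad)
print(f"Changes smaller than {swc:.1f} cm likely sit inside normal variation")
```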
When choosing a metric, one should keep in mind Anscombe’s Quartet. Shown below, these four data sets each have the same mean, variance and line of best fit, but the data structure in each is obviously quite different. This famous example highlights the need to have a good understanding of your data before making decisions.
Figure 2: Anscombe’s Quartet. All four data sets are quite different but share the same values for various summary statistics.
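The quartet is small enough to check directly. This snippet, using Anscombe’s published values, confirms that all four sets share (near-)identical means and variances despite their very different shapes:

```python
from statistics import mean, variance

# Anscombe's quartet (Anscombe, 1973); sets I-III share the same x values
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (x, y) in quartet.items():
    print(f"{name}: mean(x)={mean(x):.2f} var(x)={variance(x):.2f} "
          f"mean(y)={mean(y):.2f} var(y)={variance(y):.2f}")
```

Every row prints the same summary numbers; only a plot of the raw points reveals how different the four sets really are.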
Step 4: Create a tool to communicate the findings
One of the appeals of traffic light systems is their intuitiveness, but maintaining this intuitiveness while simultaneously representing complex data is challenging. It is thus important to:
- Use appropriate plots to visualise the underlying data structure
- Ensure that comparisons between and within individuals can be made
- Implement automatic conditional formatting and colour coding
These are all key aspects of a good traffic light system [1].
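Putting the pieces together, the colour assignment itself can be as simple as a thresholded mapping from a standardised score to a flag. The z-score cut-offs below are illustrative assumptions only; each organisation should derive its own from its data:

```python
def traffic_light(z):
    """Map a standardised (z) score to a flag, assuming lower = worse.
    The cut-offs are illustrative, not prescriptive."""
    if z <= -1.5:
        return "Red"    # alarm - action required
    if z <= -1.0:
        return "Amber"  # caution - monitor
    return "Green"      # continue as normal

for z in (-2.1, -1.2, 0.3):
    print(f"z = {z:+.1f} -> {traffic_light(z)}")
```

In a real system this function would feed the conditional formatting layer, so the colours on the dashboard always trace back to an explicit, auditable rule.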
Figure 3: An example of a traffic light system tracking longitudinal data
Traffic light systems are useful and intuitive. Designing them with standardised, data-driven approaches can significantly improve their validity, which ultimately helps coaches make sense of complex collections of data. Make sure you consider the types of questions you are trying to answer, and let your data drive your design choices. As always, any questions, hit us up at email@example.com!
1. Robertson S, Bartlett JD, Gastin PB (2016). Red, Amber or Green? Athlete Monitoring in Team Sport: The Need for Decision Support Systems. Int J Sports Physiol Perform. 1–24.
2. Taylor K, Chapman D, Cronin J, Newton MJ, Gill N (2012). Fatigue monitoring in high performance sport: a survey of current trends. J Aust Strength Cond. 20:12–23.
3. Duch J, Waitzman JS, Amaral LA (2010). Quantifying the performance of individual players in a team activity. PLoS One. 5:e10937.
4. Ofoghi B, Zeleznikow J, MacMahon C, Raab M (2013). Data mining in elite sports: a review and a framework. Meas Phys Educ Exerc Sci. 17(3).
5. Coutts AJ, Zavorsky GS, Galy O, Pyne DB, Guy JH, Edwards AM (2014). In the age of technology, Occam’s razor still applies. Int J Sports Physiol Perform. 9:741.