Dairy practitioners and producers struggle daily with monitoring production and disease on dairies. To do so, they, or, more often, their herd health software, will compute different indices that they can consult. They then have to decide whether the reported value deviates too much from an acceptable value and whether they need to intervene or apply additional diagnostic tests to better define the problem. By definition, there are no perfect indices. If they were perfect, we would not call them «indices»! They all suffer from different imperfections, and these must be acknowledged when interpreting them. Moreover, it can be quite difficult to decide when a value is too far from the norm.
In this article, we will discuss the different characteristics of a good indicator. Then, we will present the indices that are readily available in many farms for monitoring udder health and discuss their advantages and drawbacks. Finally, we will discuss how “hood of the truck” statistical inferences can help decide when a deviation from the norm should be addressed.
PART 1 | Characteristics of a good indicator
What do indices look like?
Indices can be very simple measures such as a mean [e.g., the mean somatic cell count (SCC) on the first Dairy Herd Improvement (DHI) test following calving] or a proportion (e.g., the proportion of cows with a SCC above 200,000 cells/ml on the first DHI test following calving). In the case of a proportion, two values, a numerator and a denominator, are used for computation; therefore, a problem with either of these values, or with both, can lead to erroneous conclusions. Some indices are a bit more complex than proportions and take into account two different values for the denominator, most often a number of animals and a time period. These indices are usually described as rates (e.g., an incidence or an elimination rate). For instance, one might want to compute the number of cows with SCC >200,000 cells/ml on a given DHI test among cows with SCC <200,000 cells/ml on the preceding test, while taking into account that there were 45 days between DHI tests. We could refer to that indicator as a rate of new infection (or, actually, of new inflammation if we want to be very rigorous). In such a case, an error in any of the three values used for computation (i.e., the number of cases, the number of animals at risk, or the time period) could lead to misleading conclusions. Figure 1 presents examples of mean-, proportion-, and rate-based indices.
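To make the arithmetic behind such a rate concrete, here is a minimal sketch. The cow identifiers, SCC values, and herd size are hypothetical illustration data, not taken from a real herd:

```python
# Hypothetical example: computing a rate-based index (rate of new
# "inflammation" between two DHI tests). All cow IDs and SCC values
# are made up for illustration.

THRESHOLD = 200_000  # cells/ml, usual subclinical mastitis cut-off

# SCC (cells/ml) on the preceding and current DHI tests, by cow
previous_scc = {"cow1": 50_000, "cow2": 150_000, "cow3": 300_000,
                "cow4": 80_000, "cow5": 120_000}
current_scc = {"cow1": 40_000, "cow2": 450_000, "cow3": 250_000,
               "cow4": 90_000, "cow5": 350_000}

# Denominator: cows at risk, i.e., below the threshold on the preceding test
at_risk = [cow for cow, scc in previous_scc.items() if scc < THRESHOLD]

# Numerator: at-risk cows now at or above the threshold
new_cases = [cow for cow in at_risk if current_scc[cow] >= THRESHOLD]

# Time period: 45 days between tests, converted to months
cow_months = len(at_risk) * 45 / 30.4

rate = len(new_cases) / cow_months * 100  # new cases per 100 cow-months
print(f"{len(new_cases)} new cases among {len(at_risk)} cows at risk")
print(f"Rate: {rate:.1f} new cases per 100 cow-months")
```

Note how an error in the numerator (missed cases), the denominator (cows wrongly considered at risk), or the time period would each distort the final rate.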
Means and proportions are usually a snapshot of the herd at a given time (e.g., mean SCC today; proportion of high-SCC cows today), while rates usually provide information on changes occurring over time (e.g., how many new high-SCC cows appeared in my 55-cow herd in the last few months?). Finally, we always have to keep in mind that any indicator is usually computed for a specific group of animals (e.g., milking cows, heifers, …) and a specific period of time (e.g., the last 1.5 months, the last 3 months).
What are the attributes of good indices?
First, a good indicator should be unbiased. A bias is a systematic error arising from the way data are collected or interpreted. For instance, a producer may not record clinical mastitis (CM) in cows with a known udder health status (e.g., a Staphylococcus aureus positive cow), cows that already experienced CM in the same lactation, or cows on the culling list. Most producers actually record disease events to help manage individual animals. Thus, they do not readily see the value of recording a CM case for a cow that is on the culling list; this recording will not lead to any additional management decision for that specific cow. However, when these disease events are used for computing health-monitoring indices, the missing cases will introduce a bias. Using the preceding example, the numerator (i.e., the number of CM cases) would be wrong and the rate of CM would be systematically underestimated in that herd. Thus, we may wrongly think that this herd is doing fine regarding CM, while it may not.
A bias can also be introduced due to the diagnostic test used for identifying cases. If a producer struggles to detect CM cases in general (i.e., if he/she is not «sensitive» enough to find and record all cases), then using his/her records, we would again underestimate the rate of CM in his/her herd. Figure 2 illustrates the differences between biased and unbiased measurements.
As mentioned, a bias can also be introduced when interpreting data. A good example is the interpretation of the mean SCC in a herd. In Figure 3 we illustrate the distribution of individual cow SCC in two hypothetical herds of 100 milking cows. When computing the mean SCC for these herds, we would obtain 99,000 and 141,000 cells/ml for Herds A and B, respectively. Thus, we would be led to think that the udder health situation is better in Herd A (although both would be considered very good herds). When evaluating these distributions more thoroughly, however, you will probably observe that many cows in Herd B are actually doing better than most cows in Herd A. It is just that, in Herd B, there are six cows with more extreme SCC values that are literally pumping up the mean SCC. When computing the mean somatic cell score (SCS, also known as the SCC linear score), a logarithmic transformation of the SCC that handles these six extreme values in a more appropriate way, Herd A would have a mean SCS of 2.94 and Herd B a mean SCS of 2.93; almost identical, with a slight advantage for Herd B. The median SCC, another statistic for comparing central tendency known to work better when extreme values are present, would be 97,000 cells/ml for Herd A vs. 85,000 cells/ml for Herd B, again indicating slightly better udder health in Herd B. In that example, the indicator itself (i.e., the mean SCC) was not biased, but our interpretation of it led us to erroneous conclusions. Visualizing the SCC distributions (e.g., Figure 3) is probably one of the best ways to avoid being misled by an indicator such as the mean SCC.
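The protective effect of the logarithmic transformation is easy to demonstrate. The sketch below assumes the usual linear score formula, SCS = log2(SCC/100,000) + 3, and uses made-up SCC values (not the Figure 3 herds): a herd of mostly low-SCC cows plus three cows with extreme values.

```python
# Hypothetical herd illustrating why the mean SCC can mislead. Assumes
# the usual linear score transformation SCS = log2(SCC/100,000) + 3.
import math
import statistics

def scs(scc_cells_per_ml):
    """Somatic cell score (linear score) from an SCC in cells/ml."""
    return math.log2(scc_cells_per_ml / 100_000) + 3

# Mostly low-SCC cows, plus three cows with extreme values
herd = [60_000] * 20 + [90_000] * 20 + [4_000_000] * 3

mean_scc = statistics.mean(herd)
median_scc = statistics.median(herd)
mean_scs = statistics.mean(scs(x) for x in herd)

print(f"Mean SCC:   {mean_scc:,.0f} cells/ml")  # pulled up by 3 cows
print(f"Median SCC: {median_scc:,.0f} cells/ml")
print(f"Mean SCS:   {mean_scs:.2f}")
```

The three extreme cows more than triple the mean SCC, while the median SCC and mean SCS stay close to what most cows in the herd actually look like.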
A given indicator can also be described by its precision. The precision of an indicator is influenced by the number of animals (or observations) used to compute it (i.e., the sample size), the natural variation in the biological process we are trying to measure, and the precision of our measurements. Think of it this way: if only five cows calved in the last month and one had CM in her first 30 DIM, you could say that the 0-30 DIM CM incidence was 20 CM cases per 100 cows (1 case/5 cows × 100) in that month. Had we observed one additional CM case, we would have computed 40 cases per 100 cows, a huge difference in incidence for an actual difference of one case. On the other hand, if we extend our follow-up period to include the last 50 calvings (the last 12 months) and observed a total of 10 cases, we would also compute a rate of 20 cases per 100 cows, but observing one additional CM case would only lead to an incidence of 22 cases per 100 cows. Thus, if a very small number of animals is used to compute a given indicator, this indicator will vary widely with each additional case and will be difficult to interpret. As we have seen, a potential solution is to include more animals in our calculation (e.g., by extending the follow-up period). But, as we will see later, there are also some disadvantages to that.
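One way to put a number on this imprecision is to surround the computed proportion with a confidence interval; the wider the interval, the less precise the indicator. Below is a minimal sketch using the standard Wilson score interval (a general statistical formula, not something specific to this article), applied to the 1-of-5 and 10-of-50 examples above:

```python
# Wilson 95% confidence interval for a proportion, used here to show how
# sample size drives the precision of an incidence estimate. The case
# counts mirror the hypothetical 5-cow and 50-cow examples in the text.
import math

def wilson_ci(cases, n, z=1.96):
    """95% Wilson score interval for the proportion cases/n."""
    p = cases / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

for cases, n in [(1, 5), (10, 50)]:
    lo, hi = wilson_ci(cases, n)
    print(f"{cases}/{n} = {cases / n:.0%}, 95% CI: {lo:.0%} to {hi:.0%}")
```

With 5 cows, the interval spans roughly 4% to 62%; with 50 cows, it narrows to roughly 11% to 33%: the same 20% point estimate, but very different precision.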
Likewise, if there is a lot of variation in the biological process we are trying to measure, it becomes difficult to disentangle a true disruption from the normal, expected variation. In Figure 4 we illustrate the SCS of a 3-cow herd (a herd with only 3 cows probably exists somewhere, right?) measured at two different times. On the left side (scenario A), there is very little SCS variation between the 3 cows. It is thus quite clear visually (and probably statistically) that the mean SCS at the first evaluation (3.0) is lower than that of the second measurement (4.0) collected a few weeks later. We would possibly conclude that some kind of disruption of the normal process happened and that we need to intervene.
For scenario B, the same mean SCS are observed (3.0 for date 1 vs. 4.0 for date 2), but we now see more between-cow variation. It is now very difficult to tell whether a change has occurred between the first and second periods of observation and, thus, to decide whether something has to be modified on the farm. Note that a lack of precision of the tool we are using for measuring our outcome (e.g., using the California Mastitis Test as compared to a lab-based cell counter) would affect our ability to draw any conclusion in the exact same manner.
We have discussed how increasing the number of animals used in a given calculation can increase the precision of an indicator, for instance by extending the follow-up time from the last month to the last 12 months in our <30 DIM CM incidence example. One major drawback, though, is that these historical data will blur the changes (good or bad) that may have occurred recently. We refer to this characteristic of an indicator as its momentum. As the ratio of historical to recent data increases, the momentum of an indicator increases. An indicator with very high momentum will be practically useless for detecting a recent change in udder health status.
The last characteristic of an indicator that we will discuss is its lag. The lag of an indicator is the time elapsed between the moment a health event occurs and the moment it can be measured. For instance, we know that many new intramammary infections (IMI) are acquired at the very beginning of the dry period. If we want to investigate whether cows are becoming infected during the dry period, we could compare their last pre-dry SCC measurement to their first post-calving SCC measurement. We could hypothesize that a cow with pre-dry SCC <200,000 cells/ml and post-calving SCC >200,000 cells/ml did acquire a new IMI and, possibly, that it was acquired at the beginning of the dry period. But, since we need to wait for a first post-calving SCC measurement to decide whether a new IMI was acquired, this indicator would have a lag of roughly 60 to 90 days (with a 60-day dry period and a first DHI test between 0 and 30 DIM). Thus, if we make a modification to the drying-off routine, with that indicator we would have to wait at least 60 to 90 days to see whether the modification had any impact.
PART 2 | Indices for udder health monitoring
There are two major components of “mastitis” that should be monitored: 1) CM cases (visibly abnormal milk and/or abnormal mammary gland and/or abnormal cow); and 2) subclinical mastitis, a measurable inflammation that cannot be detected with the naked eye and that is usually caused by IMI with various pathogenic bacterial species. For each of these components, various indicators can be used, and these can be computed for the whole herd over a long period of time or for subgroups of cows (early vs. late lactation; first vs. second lactation; etc.) and/or shorter periods of time (e.g., the last month). When computing any of the following indices, you will have to consider which groups of cows are included and the period of time covered. Make sure to keep in mind the concepts presented in the first part of this article (bias, precision, momentum, and lag) when deciding on the cows included and the period of time covered, as these choices will likely have a tremendous impact on the usefulness of your indicators.
Clinical mastitis monitoring
For monitoring CM, we will typically want to monitor: 1) the incidence rate of CM (IRCM) in general (reported in number of cases/100 cow-years; equation 1); and 2) the incidence rate of first CM cases (IRCM-first, reported in number of cows with ≥1 CM case/100 cow-years; equation 2). Evaluating IRCM and IRCM-first together will help monitor CM incidence, but will also help determine whether recurrence of CM is a problem (possibly indicative of a low bacteriological cure rate).
Clinical mastitis case-definitions are likely to vary from one farm to another. Some producers may not record mild CM cases, others may record only antimicrobial-treated cases (for milk quality issues), etc. These differences between producers must be accounted for when using these indices on a given farm. The proportions of CM cases that are mild, moderate, and severe have been reported to be around 50-60%, 20-35%, and 10-20%, respectively (1-4). Thus, if we know that a producer is not recording mild cases, we know the IRCM will be underestimated, and we can estimate that the computed IRCM probably represents only half (40-50%) of what it should be.
Moreover, for computing IRCM, one has to decide on the minimal number of days between two CM events in the same cow for the later event to be defined as a “new” case (as opposed to the same CM case persisting over time). A minimum time-lag of 10 to 14 days between two recorded CM events has typically been suggested for defining a new CM case (5). Finally, for repeated CM cases, we may wonder whether CM cases occurring within a few days of each other, but in different mammary gland quarters, should be considered different cases or the same CM case. In general, for simplicity, and because quarter location is not always recorded, we can ignore whether it was the same quarter or not and simply record CM cases at the cow level.
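The new-case rule can be operationalized in a few lines of code. This sketch uses a 14-day minimum gap counted from the last retained case (one possible operationalization; definitions vary) and entirely hypothetical cows, dates, and herd size:

```python
# Hypothetical CM records collapsed into "new" cases using a 14-day
# minimum gap, then converted to an IRCM in cases/100 cow-years.
from datetime import date

MIN_GAP_DAYS = 14  # records closer than this are treated as the same case

# Recorded CM events (cow, date), including a follow-up record on cow1
records = [
    ("cow1", date(2024, 1, 3)),
    ("cow1", date(2024, 1, 8)),   # 5 days later: same case
    ("cow1", date(2024, 3, 1)),   # well beyond 14 days: new case
    ("cow2", date(2024, 2, 10)),
]

cases = []
last_case = {}  # cow -> date of the last counted case
for cow, event_date in sorted(records):
    previous = last_case.get(cow)
    if previous is None or (event_date - previous).days >= MIN_GAP_DAYS:
        cases.append((cow, event_date))
        last_case[cow] = event_date

# Denominator: e.g., 45 milking cows followed over 90 days
cow_years = 45 * 90 / 365
ircm = len(cases) / cow_years * 100
print(f"{len(cases)} new CM cases; IRCM = {ircm:.0f} cases/100 cow-years")
```

Here the two cow1 records five days apart collapse into a single case, while the March record counts as a new one.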
A few articles have reported how IRCM and/or IRCM-first vary across a large number of herds (1, 2, 6). Verbeke et al. (1) reported a mean IRCM of 26 cases/100 cow-years in Belgium. In Canada in 2008, a mean IRCM of 23 cases/100 cow-years was reported, with 25th and 75th percentiles around 10 and 25 cases/100 cow-years (2). Later, in 2014, Elghafghuf et al. (7) reported for Canada a mean IRCM-first of 21 cases/100 cow-years, with 25th and 75th percentiles of 12 and 28 cases/100 cow-years, respectively. These values can possibly be used for benchmarking. However, values obtained from commercial herds through research may differ from those that would be obtained through routine monitoring. Thus, the utility of these indices may become clearer when using them to monitor changes occurring over time in a given herd, or following a modification of the management, as we will see in the last section of this article.
For monitoring activities, IRCM and IRCM-first can be presented graphically (see Figures 5 and 6 for examples) or using tables (see Table 1). Either way, in addition to the rate reported in cases/100 cow-years, reporting the actual number of CM cases and of milking cows (i.e., the numerator and denominator) may be very useful. Remember, these will affect the precision of your indicator. Finally, it can be useful to present these indices by period of the production cycle (e.g., early vs. late lactation, by month of lactation), time of the year (e.g., by calendar month), or group of cows (e.g., first lactation vs. older cows). These presentations can be helpful for a better understanding of a problematic situation. The drawback, however, will always be the reduced number of animals included in the calculation (i.e., a loss of precision). In some cases, the loss of precision can be mitigated by considering longer periods (e.g., IRCM in first lactation cows for the last 12 months). Then, an increased momentum of the indicator may become an issue…
Subclinical mastitis monitoring
By definition, subclinical mastitis cases cannot be recorded as they arise; instead, we have to rely on some diagnostic test to determine which cows are affected. The most commonly used subclinical mastitis diagnostic test is probably the SCC. Bulk milk SCC is analyzed on most milk shipments in many countries for milk quality monitoring. Individual cow SCC is also measured 6 to 12 times/year in a large proportion of herds in many countries. These latter measurements can be of great help for monitoring subclinical mastitis during lactation (8) and over the dry period (9). An individual cow SCC ≥200,000 cells/ml on a composite milk sample (i.e., a pool of milk from the four quarters) has been proposed as a case-definition for subclinical mastitis (10).
Monitoring subclinical mastitis in lactating cows. Possibly the simplest use one can make of SCC data is to report the mean SCC or, even better, the mean SCS of a herd as a function of time. Alternatively, one could report the absolute number or the proportion of cows within a given range of SCC. For instance, using the different plots presented in Figure 7, one could conclude that an important rise in SCC was observed in that herd between October and November (A), and that it mainly affected 2nd and ≥3rd lactation cows (B) during early lactation (C). Looking at Figure 7D, we can see that it was the number of high-SCC cows (cows with SCC > 500,000 cells/ml) that gradually increased in October-December. An SCC reduction was achieved in the subsequent months, mainly through a reduction in the absolute number of high-SCC cows (which went from 10-13 cows in October-November to 4-6 cows in April-June; possibly through culling or cure).
Individual SCC measurements can also be used to monitor changes in a cow’s status over time. In 2003, Schukken et al. schematized the dynamics of IMI in a dairy herd as presented in Figure 8 (11).
Since SCC are measured repeatedly on the same animals during lactation, there is an opportunity to monitor the infection and cure rates (the middle part of the schema) rather than just the endpoints (i.e., the proportion of infected cows, or the mean SCC or SCS). Some authors have proposed using changes in SCC to compute incidence and cure rates of subclinical mastitis between two DHI tests during lactation (8, 12). Equations 3 and 4 present the details of these calculations.
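As a sketch of the logic behind these incidence and cure calculations (the exact formulas are given in equations 3 and 4), here is one possible implementation using the 200,000 cells/ml threshold and made-up test results:

```python
# One possible implementation of lactational subclinical mastitis
# incidence and cure rates between two consecutive DHI tests.
# SCC values (cells/ml) are hypothetical; 200,000 cells/ml is the
# case-definition threshold used throughout the article.
THRESHOLD = 200_000

# (SCC at test 1, SCC at test 2) for each cow
tests = {
    "cow1": (100_000, 350_000),   # new subclinical case
    "cow2": (120_000, 90_000),
    "cow3": (450_000, 150_000),   # cure
    "cow4": (300_000, 400_000),
    "cow5": (80_000, 110_000),
}

healthy_t1 = [c for c, (t1, _) in tests.items() if t1 < THRESHOLD]
infected_t1 = [c for c, (t1, _) in tests.items() if t1 >= THRESHOLD]

new_cases = [c for c in healthy_t1 if tests[c][1] >= THRESHOLD]
cures = [c for c in infected_t1 if tests[c][1] < THRESHOLD]

incidence = len(new_cases) / len(healthy_t1) * 100  # per 100 cows at risk
cure_rate = len(cures) / len(infected_t1) * 100     # per 100 infected cows
print(f"Incidence: {incidence:.0f} new cases/100 cows between tests")
print(f"Cure rate: {cure_rate:.0f} cures/100 cows between tests")
```

Note that the two denominators differ: cows at risk for the incidence rate, already-infected cows for the cure rate.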
Once computed, lactational subclinical mastitis incidence and cure rates can be plotted in a figure, presented in a table, etc. Table 2 provides an example of a table reporting these indicators with both absolute values (always nice to have, so the denominator is known) and relative values.
Fauteux et al. (12) reported, for Quebec, Canada in 2014, 25th and 75th percentiles of 8 and 14 new cases/100 cow-months for the subclinical mastitis incidence rate, and of 21 and 32 cures/100 cow-months for the cure rate. Using these values, we could estimate that, between these two DHI tests, the herd presented in Table 2 was quite close to the average 2014 Quebec herd in terms of subclinical mastitis incidence rate, but actually better than most regarding cure rate. Again, comparing to benchmarks is an interesting way to interpret these indicators, but keep in mind that the value of the top herd in a population in 2014 may be just that of the average herd 10 to 20 years later, or in a different population/country. The most useful application of these indicators will possibly be for identifying departures from the norm (improvement or deterioration) over time within a herd, as we will discuss in the last section of this article.
Monitoring subclinical mastitis over the dry period.
In the same manner, SCC can be used to monitor subclinical mastitis dynamics over the dry period. For that purpose, the SCC of the last DHI test prior to drying-off and of the first DHI test following calving are used. For simplification, the exact number of days between DHI tests is often disregarded, and the incidence and cure rates over the dry period are thus reported in number of new cases (or new cures) per 100 cows. In that case, the time period is implicit and is simply a “typical dry period length” rather than an actual number of days (it can still be defined as a rate, though). Equations 5 and 6 illustrate the details for computing these two indices, and Table 2 presents an example of how they can be reported. To make sure that these indices truly report on dynamics during the dry period (rather than on what happened in the months before drying-off or after calving), some authors have suggested using only data from cows with a pre-dry DHI test collected close enough to dry-off and a post-calving DHI test collected close enough to calving (e.g., <30 days). Using these indices, one can then monitor, for instance, whether a change in the drying-off procedures resulted in an improvement in the subclinical mastitis cure rate or a reduction of the incidence rate during the dry period.
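The eligibility filter suggested above can be sketched as follows; the 30-day cut-off, cow records, and dates are hypothetical:

```python
# Sketch of the eligibility filter: keep only cows whose pre-dry test is
# close enough to dry-off AND whose first post-calving test is close
# enough to calving. The 30-day cut-off and all records are hypothetical.
from datetime import date

MAX_GAP_DAYS = 30

cows = [
    {"id": "cow1", "last_test": date(2024, 1, 5),
     "dry_off": date(2024, 1, 20), "calving": date(2024, 3, 20),
     "first_test": date(2024, 4, 5)},
    {"id": "cow2", "last_test": date(2023, 12, 1),   # tested too early
     "dry_off": date(2024, 1, 20), "calving": date(2024, 3, 20),
     "first_test": date(2024, 4, 5)},
]

eligible = [
    c for c in cows
    if (c["dry_off"] - c["last_test"]).days <= MAX_GAP_DAYS
    and (c["first_test"] - c["calving"]).days <= MAX_GAP_DAYS
]
print("Cows kept for dry-period indices:", [c["id"] for c in eligible])
```

Here cow2 is excluded because her last test was collected 50 days before dry-off, too far back to reflect her status at drying-off.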
Monitoring 1st lactation cows at calving.
Finally, we now know very well that not all heifers are healthy at first calving. Heifers are the main (and sometimes the only) source of replacement in the lactating herd. If a large number of heifers already have subclinical mastitis at calving, it becomes very difficult to improve the udder health of the lactating herd through a culling and replacement strategy. Thus, monitoring subclinical mastitis in early-lactation heifers is important. This can be achieved using SCC; in that case, simply reporting the proportion of heifers with SCC above a certain threshold (200,000 cells/ml is often used) on their first DHI test will often be sufficient. Fauteux et al. reported 25th and 75th percentiles of 8% and 25% for the proportion of heifers with subclinical mastitis at first DHI test in Quebec, Canada in 2014 (12).
Wrap-up on mastitis indices
The indices presented here are some of the most important for mastitis monitoring and capture the key concepts needed to quantify the different components of mastitis. Of course, there are many other interesting indices computed by different dairy herd management software packages that we could not discuss in this article (e.g., culling due to CM or high SCC). These can also be useful in various situations. Moreover, the indices discussed can be presented in many different manners. For instance, the monthly lactational subclinical mastitis incidence rate could be plotted as a function of calendar month to assess changes in this indicator over time.
PART 3 | “Hood of the truck” statistics for identifying deviations from normality
Now, the main question is: for a given change in an indicator, how high is too high (or how low is too low)? In Figure 7A, the mean herd SCC went from the 150,000-200,000 cells/ml range to the 300,000-350,000 cells/ml range for three consecutive months (October-December). Should we be alarmed by this or not? With large changes such as this one, you probably would not need any kind of statistic to decide that you need to do something about it. But, if you look more carefully, you will notice that SCC was already on the rise in July and September in that herd. Perhaps some kind of statistical monitoring could have helped identify the rise in SCC earlier. Then, we might have been able to intervene and prevent or mitigate these three horrible and stressful months!
Most statistical inference techniques were developed within the frequentist framework, which is perfectly suited for experimental research, but quite irrelevant in the context of commercial dairy farms. This framework rests on two pillars: randomization and a null hypothesis. Using these, we can compute the probability of a given outcome under that null hypothesis. Here is the general idea: let’s say we randomly assign cows to two groups, vaccinated and placebo, and our hypothesis is that the vaccine is useless (this is our null hypothesis). Now let’s say the mean SCC was 150,000 cells/ml for vaccinated cows and 187,000 cells/ml for placebo cows. Using the frequentist framework, one would ask: if I were to repeat the same study an infinite number of times, and given that the vaccine has no effect (i.e., given that our null hypothesis is true), in what proportion of these studies would I see a difference of 37,000 cells/ml between the two groups? Since randomization was used, this probability can be computed. Let’s say that this probability is 34% (i.e., a P-value of 0.34). Then we would say: well, if it is correct that the vaccine has no effect, I would still see a difference of 37,000 cells/ml between the two groups 34% of the time! So that kind of difference could simply be due to chance (i.e., to the randomization process). On the other hand, if the observed difference would be expected only 1% of the time (i.e., a P-value of 0.01), then we would conclude that it was very unlikely to observe that difference if the vaccine had no effect and, thus, that the vaccine must have an effect (i.e., we would reject the null hypothesis)!
When we try to translate these concepts into (udder) health monitoring, it becomes quite evident that the frequentist framework is irrelevant. Let’s say that the mean SCC was 150,000 cells/ml in July in a given herd and that it is now 187,000 cells/ml in September. Technically, the formula needed to compute a P-value can be applied. But remember that the interpretation of that P-value rests on cows having been randomized to either the July or the September group… which was not the case. Now let’s say that these SCC are those of 1st and 2nd lactation cows, and you are wondering whether SCC is statistically higher in one group vs. the other. How would you randomize cows to a given age group? If you know the answer to that, please make me 25 years old again!
OK, so it is quite clear that regular inferential statistics are letting us down on that one, but more general descriptive statistics can still be very useful. One method used in many herd management, activity monitoring, and robotic milking software packages is statistical process control (SPC). Statistical process control is an analytic approach that uses the observed variation to confirm, with a certain level of certainty, when performance is improving, stable, or worsening. The use of SPC for monitoring herd performance was very well reviewed by Reneau and Lukas in 2006 (13). Interested readers could certainly refer to that article for a more detailed description of these techniques. But here is the general idea: we can measure, for instance, bulk milk SCC weekly over one year (roughly 52 measurements). Using these measures, we can compute the mean SCC for the last 12 months and the standard deviation (SD) of these SCC measurements (a descriptive statistic describing variation around the mean). All these SCC measurements can then be plotted on a figure along with the mean SCC and upper and lower control limits corresponding to three SD above and below the mean (see Figure 9 for an example). A set of criteria can then be used to determine whether a given observation (e.g., the last measured bulk milk SCC) deviates from what is normally observed in that herd. For instance, for a mean we would conclude that a significant deviation occurred if:
- An observation is > 3 SD from the mean or;
- ≥ 9 successive observations are on the same side of the mean or;
- ≥ 2 out of 3 successive observations are > 2 SD from the mean and on the same side or;
- ≥ 4 out of 5 successive observations are > 1 SD from the mean and on the same side.
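These four criteria can be sketched in code. The SCC series, baseline mean, and SD below are hypothetical; in practice the baseline would come from the herd's own historical measurements:

```python
# Sketch of the four deviation criteria applied to a series of bulk milk
# SCC measurements. The SCC series and the baseline mean/SD are made up.

def spc_flags(values, mean, sd):
    """Return indices of observations flagged by any of the four rules."""
    flagged = set()
    for i in range(len(values)):
        # Rule 1: a single observation > 3 SD from the mean
        if abs(values[i] - mean) > 3 * sd:
            flagged.add(i)
        # Rule 2: >= 9 successive observations on the same side of the mean
        if i >= 8:
            w = values[i - 8:i + 1]
            if all(v > mean for v in w) or all(v < mean for v in w):
                flagged.add(i)
        # Rule 3: >= 2 of 3 successive observations > 2 SD, same side
        if i >= 2:
            w = values[i - 2:i + 1]
            if (sum(v > mean + 2 * sd for v in w) >= 2
                    or sum(v < mean - 2 * sd for v in w) >= 2):
                flagged.add(i)
        # Rule 4: >= 4 of 5 successive observations > 1 SD, same side
        if i >= 4:
            w = values[i - 4:i + 1]
            if (sum(v > mean + sd for v in w) >= 4
                    or sum(v < mean - sd for v in w) >= 4):
                flagged.add(i)
    return sorted(flagged)

# Weekly bulk milk SCC (x1,000 cells/ml): stable, then drifting upward
scc = [180, 200, 190, 210, 195, 205, 185, 200, 190, 195,
       230, 240, 235, 245, 250, 400]
mean, sd = 200, 20  # baseline from a longer historical series (assumed)
print("Flagged weeks:", spc_flags(scc, mean, sd))
```

On this hypothetical series, rules 3 and 4 flag the gradual upward drift before the single extreme value at the end triggers rule 1: exactly the kind of early warning we were hoping for.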
For instance, using Figure 9, we would have concluded that bulk milk SCC was lower than usual between weeks 16 and 21 (≥ 4 out of 5 successive observations > 1 SD from the mean and on the same side). That would be the right time to give those milk quality bonuses to your milkers! However, bulk milk SCC was higher than usual on week 32 (that single bulk milk SCC measurement was > 3 SD from the mean). Such an observation should trigger an investigation/intervention. For the current week (week 52), we would consider that this herd is performing within the limits we are used to observing.
Different charts and sets of criteria have been proposed for proportions and rates, but the general idea remains the same. In Figure 5, SPC was used to detect abnormal deviations in a proportion (i.e., the number of CM cases among the last 42 calvings).
Figure 9. A control chart of bulk milk SCC measurements that can be used for monitoring milk quality using statistical process control. The mean bulk milk SCC for the 52-week period (dashed line) and the lower and upper control limits (mean ± 3 standard deviations; dotted lines) are presented. Every dot represents one bulk milk SCC measurement. Red dots are observations that deviate from what is usually observed in that herd, based on the set of criteria described in Reneau and Lukas, 2006.
Nowadays, many herd management, robotic milking, and activity monitoring software packages will compute and present various indices that can be used for udder health monitoring. We always need to be cautious, however, when interpreting these indices. In some cases, due to the way they are computed, these indices may be biased or imprecise, suffer from a high momentum, or have a long lag time. Having a good understanding of the way these indices are built certainly helps avoid misleading interpretations. Regarding udder health, many specific indices have been proposed for monitoring clinical and subclinical mastitis, and benchmarks have been proposed for many of them. We must recognize, however, that these benchmarks can hardly be transferred to a different population or period. Finally, while traditional frequentist statistical inference may not be appropriate to guide our decisions regarding the significance of a given deviation from normality, other techniques based on descriptive statistics have been proposed and are readily implemented in various dairy software packages. These methods can be of great help to quickly visualize and interpret the massive amounts of data generated nowadays in many modern dairy herds.
Text: Simon Dufour (firstname.lastname@example.org), Pierre-Alexandre Morin, and Jean-Philippe Roy – Faculté de médecine Vétérinaire, Université de Montréal, Canada – Mastitis Network, Canada
Pictures: Marco Langlois
- Verbeke J, Piepers S, Supre K, De Vliegher S. Pathogen-specific incidence rate of clinical mastitis in Flemish dairy herds, severity, and association with herd hygiene. J Dairy Sci. 2014;97:6926-34.
- Olde Riekerink RG, Barkema HW, Kelton DF, Scholl DT. Incidence rate of clinical mastitis on Canadian dairy farms. J Dairy Sci. 2008;91:1366-77.
- Aghamohammadi M, Haine D, Kelton DF, Barkema HW, Hogeveen H, Keefe GP, Dufour S. Herd-Level Mastitis-Associated Costs on Canadian Dairy Farms. Front Vet Sci. 2018;5:100.
- Oliveira L, Hulland C, Ruegg PL. Characterization of clinical mastitis occurring in cows on 50 large dairy herds in Wisconsin. J Dairy Sci. 2013;96:7538-49.
- Jamali H, Barkema HW, Jacques M, Lavallee-Bourget EM, Malouin F, Saini V, Stryhn H, Dufour S. Invited review: Incidence, risk factors, and effects of clinical mastitis recurrence in dairy cows. J Dairy Sci. 2018;101:4729-46.
- Naqvi SA, De Buck J, Dufour S, Barkema HW. Udder health in Canadian dairy heifers during early lactation. J Dairy Sci. 2018;101:3233-47.
- Elghafghuf A, Dufour S, Reyher K, Dohoo I, Stryhn H. Survival analysis of clinical mastitis data using a nested frailty Cox model fit as a mixed-effects Poisson model. Prev Vet Med. 2014;117:456-68.
- Dufour S, Dohoo IR. Monitoring herd incidence of intramammary infection in lactating cows using repeated longitudinal somatic cell count measurements. J Dairy Sci. 2013;96:1568-80.
- Dufour S, Dohoo IR. Monitoring dry period intramammary infection incidence and elimination rates using somatic cell count measurements. J Dairy Sci. 2012; 95:7173-85.
- Dohoo IR, Leslie KE. Evaluation of changes in somatic cell counts as indicators of new intramammary infections. Prev Vet Med. 1991;10:225-37.
- Schukken YH, Wilson DJ, Welcome F, Garrison-Tikofsky L, Gonzalez RN. Monitoring udder health and milk quality using somatic cell counts. Vet Res. 2003;34:579-96.
- Fauteux V, Roy JP, Scholl DT, Bouchard E. Benchmarks for evaluation and comparison of udder health status using monthly individual somatic cell count. Can Vet J. 2014;55:741-8.
- Reneau JK, Lukas J. Using statistical process control methods to improve herd performance. Vet Clin North Am Food Anim Pract. 2006;22:171-93.