Periodic Inspection Must Be Done Under Stable Lab Conditions
To challenge the CFS expert's position that inconvenient data should be excluded for the purpose of calculating precision.
To obtain an admission that the local police service is not properly assessing accuracy and precision during periodic inspections because they are not assessing under stable conditions.
To illustrate the importance of an O'Connor order requiring production of the officers' contemporaneous documentation from the time of the inconvenient data, so that the cause of the data the CFS expert seeks to exclude can be determined rather than speculated to be something other than drift in accuracy and precision.
To consider the possibility that the recorded simulator temperatures (all exactly 3400, i.e. 34.00 degrees Celsius) might be wrong and that there is a serious problem with the documentation of simulator temperature.
Shouldn't field data that points to possible drift in precision underscore the importance of properly conducted lab data for calculating precision? And if the lab data isn't done, why shouldn't we explore the possible causes, trivial or non-trivial, of the drift in precision shown by the field data? The Court should order production of the contemporaneous documentation from the time of the inconvenient data. Which data would a statistician use?
Quaere: Does the Crown expert's approach of always excluding data outside 90 to 100 before calculating the standard deviation mathematically guarantee a standard deviation of less than 3? Is it a tautology? What would a statistician say?
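A quick numerical check of the quaere, using hypothetical values: data confined to the 90 to 100 window is bounded in spread (Popoviciu's inequality caps the population standard deviation at half the range, i.e. 5), but nothing forces it below 3, so the exclusion rule is not a tautology.

```python
import statistics

# Hypothetical cal-check values, all inside the 90-100 window.
extreme = [90.0] * 5 + [100.0] * 5          # worst-case spread within the window
typical = [94.0, 95.0, 95.0, 96.0, 95.0]    # tightly clustered values

# Population standard deviation of data confined to [90, 100] can be
# as large as (100 - 90) / 2 = 5, so it is not automatically below 3.
print(statistics.pstdev(extreme))  # 5.0
print(statistics.pstdev(typical))  # ~0.63
```

A statistician would likely add that truncating the sample before computing the standard deviation biases the precision estimate downward, because the discarded points are exactly the ones carrying the evidence of spread.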
Crown expert's version (spreadsheet reconstructed by the author; the original was never filed as an exhibit) of the average and standard deviation calculation, excluding the 11 inconvenient cal. checks:
Defence version, including the 11 inconvenient cal. checks below 90:
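The two competing calculations can be sketched in a few lines. The readings below are invented placeholders (the actual spreadsheet values were never filed as an exhibit); the point is the methodology, namely that filtering out the low checks before recomputing tends to raise the average and shrink the standard deviation.

```python
import statistics

# Invented placeholder readings standing in for the 50 cal checks in the
# reconstructed spreadsheet: 39 passing checks and 11 low ones.
cal_checks = [95.0] * 39 + [85.0] * 11

# Defence version: every calibration check counts.
all_mean = statistics.mean(cal_checks)
all_sd = statistics.stdev(cal_checks)

# Crown expert version: discard anything outside the 90-100 window.
kept = [x for x in cal_checks if 90 <= x <= 100]
kept_mean = statistics.mean(kept)
kept_sd = statistics.stdev(kept)

print(f"all 50 checks:  mean={all_mean:.2f}, sd={all_sd:.2f}")    # mean=92.80, sd=4.18
print(f"kept 39 checks: mean={kept_mean:.2f}, sd={kept_sd:.2f}")  # mean=95.00, sd=0.00
```

With the placeholder numbers, excluding the low checks moves the mean from 92.80 up to 95.00 and collapses the standard deviation, mirroring the expert's prediction on the real data that "the average would go up and the standard deviation would go down."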
Q. ...the data we got here, we’ve got multiple calibration checks that are outside the expectation. Outside the minimum and the maximum.
A. Correct.
Q. Not just one...
A. We have 11.
Q. ...at the beginning of the testing. And but we have information, I mean, maybe in the not only that, jurisdiction where Dubrowski or whoever else is, who – who authored the article works, maybe they’re not tracking simulator temperature numbers but we know, to two decimal points what the simulator temperature is: 34 degrees Celsius right?
A. Correct. Yes.
Q. So, the problem that I have with your approach, in considering precision and throwing out this data, is that we don’t know the cause – we don’t know the reason why the data needs to be thrown out and that in effect you are hypothesizing – I’m not going to say speculating but – you’re hypothesizing, you’re raising – you’re considering possibilities as to why there might’ve been a problem with this data. But until we’ve examined what that problem was, until we’ve got some information about what troubleshooting attempts were taken, then we’re stuck with this data as it is.
A. No. Essentially, because there is no information about this data, that we have, with respect to these numbers, that’s why the numbers are being excluded. They’re outside the acceptable range and we have no information about why they are low. This could be an issue of troubleshooting. Right? Which means that they’re not part of a breath test. That’s what the data should be reflecting is the calibration checks associated with a breath test because that’s how we determine the accuracy and reliability of the breath test in each case, is looking at the calibration check. And these would not be associated with a breath test because they wouldn’t allow testing to proceed. So, until you have a result that’s successful, you wouldn’t use those data points.
Q. So, we close our eyes to the cause of the data being low ‘cause after all, one of the causes – possible causes of the data being low, is that the instrument has drifted in its accuracy and precision.
A. Well, the other calibration checks would suggest not.
Q. Well, except that if we look at all of the numbers that are listed there, we have – and even if we exclude those 11 data points you came up with an average of – what was it, 95 point something or other?
A. Sorry, just the 39 points?
Q. With the 39 points.
A. With the 39 points, 95.05.
Q. All right. Well, that’s not – that’s not – outside the analytical variability that Terry and now that’s Martin reported in – when she was doing her evaluation, when she did her paper on the Intoxilyzer 8000C.
A. Correct. Because this is field data, not lab data. Laboratory data, as I said previously, clearly will be more controlled and will be a more accurate reflection of the variability associated with the instrument using calibration checks compared to field data, which is performed by qualified technicians, it’s going to have much more variability and it’s not an evaluation parameter that you use to determine whether that is acceptable or not, but operational requirements.
Q. And the manufacture....
A. Plus or minus 10 – plus or minus 10 milligrams of alcohol in 100 millilitres of blood of the expected result.
Q. And your average is outside of the manufacturer’s specification of plus or minus 3 percent.
A. For returning the instrument to its original specifications, yes, but this is an instrument that’s being used in the field as opposed to one that’s being recalibrated or reset.
Q. Well, I thought that the reason why you do periodic inspections of instruments is to make sure, and this is the wording of the best practices document that’s one of the exhibits...
A. Yes.
Q. ...is to ensure that the instrument is still performing in accordance with manufacturer’s specifications.
A. Yes. You can’t do that with field data. You have to use, as we said here, this data here would be a more accurate reflection of that. Right? The I-T-Ps and the wet bath – wet bath calibration checks. These would be more reflective of the analytical variability associated with the instrument and the accuracy.
Q. And the average you came out to was what? Ninety-five point....
A. Ninety-five point zero-five. So, I believe I said before we actually did the calculation that the average would go up and the standard deviation would go down, which it has. And then....
Q. Ninety-five point O-five is still not within 3 percent, still not in accordance with the manufacturer’s specification.
A. Correct. Because this is field data, not laboratory data.
Q. So, you’re suggesting that the only way that we can assess accuracy and precision of an instrument is by putting an instrument into a laboratory and having periodic inspections.
A. Correct. Not field data, because it’s a widely known phenomenon that when you take data from the field, such as breath – breath – blood/breath ratios that when you look at field data, the data is extremely wide, but when you do controlled experiments in a laboratory that the blood/breath ratio range becomes much, much more smaller.
Q. So, the proper methodology for assessing whether or not the instrument annually or periodically is performing in accordance with the manufacturer’s specifications is to put it in a laboratory and test it.
A. Or put it into an environment where it’s more stable. So, when you have the qualified technician or the – not the qualified technician – but the program coordinator who’s running these tests, you have them do it under more stable conditions and that would be a more accurate reflection of what the actual variability associated with the instrument is.
Q. When we looked at the cal-check – at the – at the stability tests for this instrument, the periodic inspections, the ones that you referred to this morning...
Q. ...that we were looking at, it was obvious doing those on a RIDE truck.
Q. That’s not a laboratory.
A. Correct. And again, in a RIDE truck would be – there’d be more variability associated with that.
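A closing arithmetic check on the exchange above, assuming (as the transcript suggests) that the expected calibration value is 100 mg of alcohol in 100 mL of blood: the 39-point average of 95.05 sits outside the manufacturer's plus or minus 3 percent specification but inside the plus or minus 10 mg operational tolerance.

```python
target = 100.0    # expected cal-check value, mg of alcohol in 100 mL of blood
average = 95.05   # the expert's 39-point average

spec_low, spec_high = target * 0.97, target * 1.03    # manufacturer's +/- 3 percent
field_low, field_high = target - 10.0, target + 10.0  # operational +/- 10 mg

print(spec_low <= average <= spec_high)    # False: outside the 3 percent specification
print(field_low <= average <= field_high)  # True: within the field tolerance
```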