Questions from the Field ...
“Can’t I just pick ‘problem plots’ to check cruise?”
In many cases, a quick office review of plot cards points to a potential problem. In almost everything we do in forest inventory we must choose a valid sample from all of the alternatives, but not always; it depends upon the question to be answered.
Check cruising is done for several possible reasons: training, adjusting the inventory, maintaining credibility, and supporting payment.
If you are looking for plots that illustrate some principle for training purposes, you can choose plots arbitrarily. No need to sample. No need to record results.
If you want to adjust the inventory based on the check cruise (or at least compute the upper and lower limit of the bias due to “measurement errors”) then some sort of sample is required. The plots do not have to be visited with the same probability, but they all need some probability of being chosen, and a valid sample is required.
The issue of credibility is particularly important. In such a case you do not want to destroy either the actual statistical correctness of the process or the appearance of correctness. For this purpose, you certainly should be taking a valid sample.
Only the last case (payment) seems to lend itself to the arbitrary choice of check cruise plots, and only in certain situations. Suppose the question is this:
Was there at least a 6% measurement error in the data?
Presumably, in this case payment would be refused. If you did not care WHAT the actual amount of error was (only that it was more than 6%), you could pick the plots to visit and check. There is a hitch — you must be prepared to assume that the unchecked plots contained no errors at all. If the rule is that the error in total volume can be no more than 6%, the plot volumes must also be considered.
Suppose that you have 10 plots in the cruise. One of the plots has suspicious recorded information, and it is responsible for 16% of the total volume computed by those 10 plots. The rest of the field cards look good. You might decide to visit that plot. This is NOT a sample.
Suppose, on the other hand, that when you visit that plot you find a 40% error on it. What is the overall error? You do not know, because this is not a sample that can tell you about the rest of the data. What you DO know is that the error is at least (40% × 16% = 6.4%).
This assumes that there is nothing wrong with the rest of the data. At least a 6.4% measurement error is proven (without doubt, please note) by that single plot. The actual error rate might be quite a bit more than that, since the other plots could also contain errors, but you cannot conclude that from using this non-sample data.
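This single-plot arithmetic can be sketched in a few lines. A minimal sketch, assuming error rates and volume shares are expressed as fractions:

```python
# Minimum confirmed measurement error from arbitrarily chosen check plots.
# As the article requires, this assumes every UNCHECKED plot is error-free,
# so each checked plot contributes (its error rate) x (its volume share).

def min_confirmed_error(checked):
    """checked: list of (error_rate, volume_share) pairs, as fractions."""
    return sum(err * share for err, share in checked)

# The example from the text: a 40% error found on a plot that carries
# 16% of the total cruise volume.
print(min_confirmed_error([(0.40, 0.16)]))  # at least 6.4% confirmed error
```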
What if you arbitrarily select 3 plots? Then calculate the minimum total error in the 3 plots by the same method, multiplying each plot's observed error rate by its share of the total volume and summing:

Total confirmed error = sum over checked plots of (plot error rate × plot's share of total volume)

Notice how high the individual plot errors have to be before you can prove an overall error of only about 7%.
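As a concrete sketch of the three-plot case (the plot figures below are assumed for illustration; they are not the article's original numbers):

```python
# Three arbitrarily chosen check plots. Each entry is
# (observed error rate on the plot, plot's share of total cruise volume);
# the values are hypothetical, chosen only to illustrate the arithmetic.
plots = [(0.40, 0.10), (0.30, 0.05), (0.25, 0.08)]

# Minimum total error proven, again assuming every unchecked plot
# contains no error at all.
confirmed = sum(err * share for err, share in plots)
print(confirmed)  # about 0.075: plot errors of 25-40% prove only ~7% overall
```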
The advantage of a sample is that its result applies to ALL the plots in the cruise, not just the ones you visited. This non-sampling method is probably not a good idea, in general, but it points out a technique that might be useful in specialized situations.
The check plots can also be stratified, with, say, stratum 1 holding the plots where large errors are expected. For all 1,500 acres, the measurement error is then the acreage-weighted average of the per-stratum errors. This is the error rate per acre, so it can be compared to the average volume per acre to decide whether the percentage is excessive.
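A sketch of the acreage-weighted combination. The stratum breakdown and per-acre error figures here are assumptions for illustration; only the 1,500-acre total comes from the example above:

```python
# Combine per-stratum check-cruise results into an overall error per acre.
# Acreages and per-acre error figures are hypothetical.
strata = [
    ("stratum 1 (expected large errors)", 500, 90.0),   # name, acres, error/acre
    ("stratum 2 (expected small errors)", 1000, 15.0),
]

total_acres = sum(acres for _, acres, _ in strata)             # 1,500 acres
total_error = sum(acres * err for _, acres, err in strata)     # total error, volume units
error_per_acre = total_error / total_acres
print(error_per_acre)  # 40.0 volume units per acre; compare this with the
                       # average volume per acre to judge the percentage
```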
When payment is involved, most people would suggest that these kinds of computations be made after first giving the benefit of the doubt to the cruiser, who was probably working under tougher conditions.
You can also compute the combined sampling error across the check-cruise strata. Any sampling text will have the formula for the combined error of a stratified cruise. This sampling error is seldom needed for check-cruising applications, but the method for doing it is very standard. This is also a case where the "t" value would be used, because the sample sizes are likely to be small.
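The standard textbook computation can be sketched as follows. This is the generic stratified-sampling formula, not a procedure from the article, and the finite population correction is omitted for brevity:

```python
import math
from statistics import stdev

# Standard error of a stratified total:
#   SE(total) = sqrt( sum over strata of  N_h**2 * s_h**2 / n_h )
# where N_h is the number of units in stratum h, and s_h is the sample
# standard deviation of the n_h checked values in that stratum.
# With small n_h, confidence limits use Student's "t" (from a t table,
# n_h - 1 degrees of freedom) rather than 1.96.

def stratified_total_se(strata):
    """strata: list of (N_h, sample_values) pairs."""
    var = 0.0
    for N_h, values in strata:
        n_h = len(values)
        var += (N_h ** 2) * stdev(values) ** 2 / n_h
    return math.sqrt(var)

# Hypothetical example: two strata of 50 and 100 plots, three checks each.
print(stratified_total_se([(50, [10.0, 12.0, 14.0]), (100, [5.0, 6.0, 7.0])]))
```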
So, the short story is this. You could choose the plots to check, if you were willing to assume that the others contained no error at all. Without a valid sample, assuming that the other plots have the same error rate is just plain wrong.
Originally published July 2001