Is this definition adequate?

by Kim Iles

I get involved with a lot of groups. Many are not in classical forestry. A few of them have only a vague idea what they are doing, but they absolutely love making definitions about things that are thoroughly fuzzy. They also love making rules and standards for measurements that they have never actually done themselves. Perhaps you know these people.

Definitions first. It may be impossible to deal with these folks, but here is a technique that has at least been helpful. It is really the same philosophy behind the use of statistics. Something is considered precise in statistics if it is “repeatable”. The same is true of definitions. A definition is adequate if it can be repeated in practice – that is, under actual conditions, by the people who apply that definition, at the rate those conditions occur, with agreement at some reasonable level. It has nothing to do with dozens of exceptions, rules to memorize, special education, or other issues of that sort – and those other things don’t help unless the process becomes more repeatable when applied. So test it. No test – no proof of adequacy. Simple as that. I don’t care what anybody thinks about the beauty of their definition (and it’s always their definition that is beautiful beyond description).

The problem is usually not to define something, but to recognize it. Don’t lose sight of the objective, or you are dragged through the cactus repeatedly at endless committee meetings. This is especially the case with new, fashionable, and “scientific” situations involving people with too much university time and no field time. Many projects have been stillborn because of a definition negotiation process that never stopped.

You often have an expert who is apparently the only one in the world who can get the right answer consistently (in his own exalted opinion). OK, have him do it many times, without knowing what anyone else would decide at those places or for those items. Having a lot of decisions will mean that he will forget his previous answers by the time you run him over the same set of plots a month (or a season) later.

For an “ecosystem description” for instance, drive a stake into the ground and have him pronounce the answer at that point. If possible, do that in the rain and over actual terrain at several seasons of the year. Let him look around him all he wishes, to whatever distance matters, making any measurements he wants, and by communing with the gods in any way he finds useful. Then he has to decide – not a range, but one answer. Note the time involved. Field crews should not be expected to be faster or more accurate.

Do not let several experts gather at the same point to discuss it, and then pretend that they all agree. If possible, have several experts do it on separate days and without knowing about each other. The results are often horrifying. Sometimes, nothing else will convince them that they are fallible. This also gives you a standard for the work. If the experts agree only 60% of the time, what is the standard for working people to agree with the experts who audit them (perhaps 80% would be too high, don’t you think??).
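Once the blind trials are done, the agreement figures discussed above are simple to compute. Here is a minimal sketch in Python – the expert names, the plot calls, and the “OG”/“2nd” labels are all invented for illustration – that computes raw percent agreement between two raters, plus Cohen’s kappa, which corrects for the agreement you would expect by chance alone:

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of plots where two raters made the same call."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance (Cohen's kappa)."""
    n = len(a)
    po = percent_agreement(a, b)                      # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical calls by two experts at the same ten staked points,
# made on separate days without knowledge of each other.
expert_1 = ["OG", "OG", "2nd", "OG", "2nd", "2nd", "OG", "2nd", "OG", "2nd"]
expert_2 = ["OG", "2nd", "2nd", "OG", "2nd", "OG", "OG", "2nd", "OG", "2nd"]

print(f"raw agreement: {percent_agreement(expert_1, expert_2):.0%}")  # 80%
print(f"kappa:         {cohens_kappa(expert_1, expert_2):.2f}")       # 0.60
```

The kappa figure is the more honest standard for working crews: if two categories are about equally common, two raters flipping coins would already “agree” half the time, so raw percent agreement flatters everyone.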

If they get consistently the same result, the definition is proven to be adequate. “Old Growth” forest might be virtually impossible to define, but if it is consistently recognized, why do you care??

It’s often helpful, when making a definition, to look at odd places and difficult things to define – and that’s OK, but test the method under actual conditions on an actual sample of the situations you are trying to define. If it works 99% of the time, do you really care much about the other 1%, or would you be willing to let that slide as a difference of professional opinion? (hint – that is exactly what the scalers do, and it works). If the check scale passes, they don’t worry about the trivial bits. That philosophy works when testing methods for operational adequacy. OK, it’s true that the scalers have got out of hand with rules lately – but they started out sensibly, and certainly check scale with a reasonable philosophy.

Insisting on a statistical sample lets you avoid the “let’s just look over there as an example” routine that experts use to avoid situations that cannot be defined, are too difficult or uncomfortable, and that involve real field conditions. Set the date well ahead of time, so that knowing the weather report will not make the trial easier (unless you never work in bad weather). If the weather never seems right, when did you plan to do the work?

That set of trials is useful to test whether any other definition is better, practically different in effect, or easier to apply. If someone has a great new improvement, you can see if it actually changes the answers. Lots of things pop into immediate perspective when you actually do the work. One consultant I had to deal with discovered that the land base had a slope to it, and the technique had to work there. It was a shock to him.

It might help to have photos of a project where the definitions were tested. If someone says they have a better definition, a quick look at the photos might tell you that only a few of the field situations actually need to be checked. Many calls will be the same. The Old Growth question is one where this technique would be useful. Perhaps the entire project can quickly be retested with only a few field visits.

In the end, no definition is known to be adequate until it has been tested. Unless you arrange to let “the rubber meet the road”, the speculation is endless. It’s a cleansing process, and often it gets things moving. In some cases, it’s also very amusing.

Originally published March 2014
