JUNE 2024, Volume 45, Issue 2

Values in Operational Testing

Dr. Robert Holcomb

U.S. Army, Retired

Executive Summary

The purpose of operational testing is to inform decision makers about the strengths and weaknesses of combat systems before incurring the enormous expense of acquiring them. This article describes some of the critical values that operational testers should strive to attain as they design their tests and report the results. Integrity, comprehensive design skills, timeliness, and courage are all essential to successful testing.

Having spent over 30 years participating in operational testing, I feel somewhat qualified to write about the value of operational testing, and about some of the values necessary to do it well. The purpose of operational testing, of course, is to inform decision makers about the strengths and weaknesses of combat systems before incurring the enormous expense of acquiring them. Identifying problem areas through testing under operational or combat conditions allows program managers to implement corrections before production and fielding to operational units, after which the costs of those corrections skyrocket.

The DOD acquisition system and Title 10 of the U.S. Code contain significant sections on operational testing. They lay out the roles of all of the participants and the requirements to be met before acquisition decisions are made. Sometimes these are faithfully followed, and sometimes they are circumvented; in either case, they have proven their worth. The acquisition landscape is littered with the remains of programs that prioritized speed of acquisition over operational performance; Future Combat Systems and the V-22 come to mind. There are also good examples on the other side, where operational performance was made a priority and the subsequent acquisitions proved their worth; Blue Force Tracker and the MRAP in the early 2000s come to mind. There has never been a significant repudiation of the basic concept, which is “fly before you buy.”

I am not talking about developmental testing here, which has its own special requirements. When I think of developmental testing, I think of the basic question “Does it work?” When I think of operational testing, I think of the more nuanced question “So what?” The purpose of the operational context in operational testing is to determine what benefit the acquisition brings to the fighting force, so that senior decision makers can weigh that benefit against the cost of the acquisition.

The first requirement, in my mind, for the good operational tester is integrity. It takes a strong sense of integrity to faithfully replicate the operational context and to stand up to the pressures that will always arise to shorten testing, minimize its costs, and whitewash its results. The tester represents the user in a very real sense and has to argue and advocate for them in acquisition councils. The user of the system when it gets to combat is frightfully alone; it is no exaggeration to say their very life may depend upon the system working when the chips are down. During the long acquisition cycle, the user’s interests (e.g., staying alive in combat and accomplishing their assigned missions) have to be carefully represented by the tester. Reporting the results faithfully, accurately, and completely requires integrity on the part of the analyst. So does standing up to the other members of the acquisition community when they value speed over performance.

Testers have to have a finely tuned sense of timeliness and context. It does no good to produce a perfect system five years too late. They have to understand what constitutes “good enough” when a system fails to meet its written requirements, or when those requirements are poorly framed. The operational tester needs to understand the operations in which the system will be employed. How does it integrate with the rest of the force’s weapons? What sort of tactical conditions will mark its employment? What improvements do we expect to see in the mission accomplishment of the force equipped with this new widget? How shall we design our test to see those improvements? It is not sufficient to be only a statistician; the tester must also understand how the system will be employed by real users, and what that means in terms of failures and shortcomings. What is the impact of reliability failures during testing? A system that works perfectly, but is unreliable, will not work perfectly for very long.
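To put a number on that last point: under the standard assumption of exponentially distributed failures (an illustrative model, not one this article prescribes), the probability that a system survives a mission of length t without a failure is R(t) = e^(-t/MTBF). A system with a hypothetical 20-hour mean time between failures completes a 10-hour mission without a failure only about 61 percent of the time (e^(-10/20) ≈ 0.61), no matter how well it performs while it is working.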

Designs of operational tests must be comprehensive. The recent emphasis on Design of Experiments is a good approach to ensure that all important aspects of the operational envelope can be examined during testing, without costing a fortune or delaying the results. Cutting test units to minuscule sizes might save some money, but it also reduces the amount of useful information that can be gleaned from an operational test. Tests have to be sized to measure the improvements expected to accrue to the force that employs the weapons. Units accomplish missions, not systems, so unit or force effectiveness is a valid concept for operational testing. Merely trying to see whether a system works is insufficient; a senior decision maker has to know what value it brings to the force that employs it.
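To make those two ideas concrete, here is a minimal sketch in Python; the factor names, levels, and numbers are hypothetical illustrations, not drawn from any actual test. It enumerates a small full-factorial design over notional operational conditions, then computes a standard normal-approximation sample size for detecting an expected improvement.

from itertools import product
from math import ceil
from statistics import NormalDist

# Hypothetical operational factors; a full-factorial design covers every
# combination of conditions in the operational envelope.
factors = {
    "terrain": ["open", "urban"],
    "light": ["day", "night"],
    "threat": ["low", "high"],
}
runs = list(product(*factors.values()))
print(f"{len(runs)} runs cover every combination of conditions")  # 2 x 2 x 2 = 8

# Normal-approximation sample size per group for a two-sample comparison:
# n = 2 * (z_(1-alpha/2) + z_power)^2 / d^2, where d is the expected
# improvement expressed in standard deviations (the standardized effect size).
def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

print(n_per_group(0.50))  # about 63 trials per group for a 0.5-sigma improvement
print(n_per_group(0.25))  # about 252 per group for a subtler 0.25-sigma gain

The exact numbers matter less than the shape of the tradeoff: halving the size of the improvement the test must detect roughly quadruples the required test size, which is why test sizing has to start from the improvement the force is expected to gain, not from the budget.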

Sometimes it takes courage to conduct operational testing. There are many and varied pressures upon the acquisition community to hold down costs and speed up acquisition; the drumbeat to accelerate the acquisition process has been constant over the last 30 years. Operational testing is sometimes decried as the major slowdown to acquisition, yet the amount of time a system spends in testing is minuscule. Of much more import is the time it takes to fix the flaws in a system that thorough operational testing uncovers. That time is well spent if it ensures that weapons will work when the user has to employ them in harm’s way. I never saw a combat soldier ask how long it took to produce the weapon in their hands. I have frequently seen them ask how well it worked, or curse it when it didn’t.

There is much value in operational testing, both in the monetary benefit of uncovering flaws while they are still cheap to fix, and in giving combat troops weapons that improve the effectiveness of the force that employs them. It requires an analyst who is operationally aware as well as analytically competent, who has the foresight to design a good test and the competence and integrity to report the results completely and faithfully.

Author Biographies

Dr. Robert Holcomb received his BS degree from the United States Military Academy in 1973, his MS degree in Operations Research from the Naval Postgraduate School in 1982, and his PhD in Information Technology from George Mason University in 2011. 

Dr. Holcomb retired from the Army in 1993 as a lieutenant colonel after a twenty-year career. In his subsequent thirty-year civilian career in operational testing, he participated in dozens of operational tests, as well as assessments in Bosnia, Iraq, and Afghanistan.
