The Architecture Analogy in Test Planning: An example of the T&E value of ‘Well-Planned’

June 2024 | Volume 45, Issue 2

Mr. Steve Woffinden

Systems Engineer, General Dynamics

Introduction

They say that wisdom comes with experience. You can find a plethora of quotes attesting to this. Two useful ones I have found are:[1]

By three methods we may learn wisdom: First, by reflection, which is noblest; Second, by imitation, which is easiest; and third by experience, which is the bitterest.

― Confucius

Wisdom comes from experience. Experience is often a result of lack of wisdom.

― Terry Pratchett

In 1988, following graduation from the Armed Forces Staff College, I became a member of the recently formed Army Acquisition Corps. My assignment as a new acquisition officer was to be the program manager for the communications systems in the Supreme Headquarters, Allied Powers Europe’s Primary War Headquarters in Mons, Belgium. The Defense Systems Management College had not yet started classes, so I learned “on the job.” While I had the usual university testing experiences through lab projects and a thesis, I was now being introduced to the world of live development, testing, and deployment. Later, during my two assignments on the Army Staff, I had more experiences and gained more wisdom.

A conference talk that expanded the horizon

During the 1990s, I had the opportunity to attend a conference at the Naval Postgraduate School. One of the presenters was Tom DeMarco, a well-known name in the structured design world of the 1980s and 90s. While the details of the conference and the other presentations have faded, his presentation made a lasting impact on me. The message I took away can be summed up as follows:

You will have an architecture. You have no choice. You will either have an architecture that you design, control, and understand, or you will have the architecture that is the result of all your ad hoc decisions along the way.

― Steve

The Application of that Message to Test and Evaluation

Since my retirement from the Army, I have spent four years doing government business development for a small artificial intelligence applications company, followed by more than 22 years doing systems engineering for a major defense contractor. Most of my systems engineering experience comes from projects supporting the Operational Test and Evaluation and Army Training communities. This work brought me into my continuing association with the International Test and Evaluation Association. During this time, I was given an opportunity to mentor students in engineering design courses. It was there that I saw the corollary to Tom DeMarco’s message and started presenting it during sessions on testing. That corollary is:

You will be tested.  You have no choice.  You will either have a test strategy that you plan, control, and implement across the entire lifecycle, which includes both your customers (those who pay for the work – Developmental Test) and those users of your system (often different from the customers – Operational Test), or your work will be tested through their attempted use of your system, often to your great regret.

Some Observations on the Corollary from Experience

The following observations are offered as generic comments on the impact of the corollary in practice. While specific examples could be called out, generic comments state the case without requiring the many reviews that naming specific programs would entail.

Student Projects in the Academic World

The first observation comes from student projects in academic classrooms and labs. Much has changed since the lab projects of the early 1970s. The changes in the technological capabilities of devices are the easiest to observe, but more importantly, there have been changes in the scope of the learning objectives. I have had the privilege of doing peer reviews of student projects in the engineering semester labs for more than ten years. In addition to the traditional course requirements that specify the components to use in the project, student teams are asked to identify stakeholders and then use those components to build a prototype capability that meets the stakeholders’ needs. The completed prototypes are presented by the student teams at an innovation showcase event held near the end of each semester, and teams often carry the same design through follow-on lab courses to refine their product.

While this is an outstanding improvement to engineering education, there is a tendency for students and faculty to stand in for too much of the stakeholder role themselves. Most are receptive when introduced to viewing test and evaluation as a combination of verification (meeting the explicit and derived requirements in a contractual, i.e., get-a-good-grade, context) and validation (delivering a solution that meets the needs of the users). Designs must iterate through the triple context of the customer (who pays to get requirements met), the developer (who wants a good grade), and the actual user of the product (often neither of the first two).
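To make the verification/validation split concrete for a student team, here is a minimal sketch in the style of pytest checks. The requirement number, the LineFollowerPrototype class, and its measured values are all invented for illustration; only the distinction between the two activities comes from the discussion above.

```python
# Toy illustration of verification versus validation for a student project.
# REQ 3.1 and LineFollowerPrototype are hypothetical, invented for this sketch.

class LineFollowerPrototype:
    """A stand-in for a student-built robot that follows a taped line."""

    def stopping_time(self) -> float:
        # Pretend we measured how long the robot takes to stop, in seconds.
        return 1.4


def test_req_3_1_stopping_time():
    """Verification: does the build meet the written requirement?

    REQ 3.1 (hypothetical): the robot shall stop within 2.0 s of a command.
    """
    assert LineFollowerPrototype().stopping_time() <= 2.0


def validate_with_end_users(prototype: LineFollowerPrototype) -> bool:
    """Validation: does the solution meet the actual users' needs?

    This cannot be reduced to an assert against the spec; it requires
    putting the prototype in front of real end users (not the instructor,
    not the team) and observing whether it solves their problem.
    """
    # Placeholder for a structured user trial at the showcase event.
    raise NotImplementedError("Run a user trial; do not fake this step.")
```

The asymmetry is the point of the sketch: verification can often be automated against the written requirement, while validation requires putting the prototype in front of the actual end users.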

Observation: Academic courses need to prepare students to include end users in all projects, with both verification and validation test and evaluation events. After their education, the reality of the corollary will become part of their experience.

Classic Systems Engineering “V” versus Agile Development

This observation comes from watching and participating in the move to “Agile”. The purpose of the systems development “V” was (and is) to bring rigor and completeness to the process of going from the identification of initial needs, through design, development, and deployment, and then across the useful life of the system. Agile systems development predates the Agile Manifesto of 2001 by more than a decade; at the Air Force Institute of Technology, it was taught as evolutionary development and programming during the early and mid-1980s. The development “V” usually showed specific test and evaluation events on the trailing edge, though much was said about the importance of ensuring testing was not done too late to influence the design. With the emphasis on agility that has come in the 2000s, development has moved to a cycle of build some, test some, build some more, deploying useful capabilities through Epics, Stories, Backlogs, and Sprints. The less formal test and evaluation during these cycles, combined with increased end-user participation, has helped address failures to meet user expectations. Recent military acquisitions have increasingly used competitive prototypes/first articles, developed within an Agile framework, followed by comparative testing and evaluation of those prototypes to determine which system is selected for the formal acquisition process and delivery.

Observation: The application of developmental test and evaluation (verification) and operational test and evaluation (validation) within competitive acquisition programs has helped apply the first part of the corollary and avoid the second, but sadly there are still too many systems that fall short of the validation expectations and needs of those who will depend on them.

Roles of Digital Engineering and Digital Twins

It has been interesting to observe the evolution of digital engineering. Much is seen in the media about digital engineering and digital twins, but most of that published material concerns the benefits of using either or both during the design and development of systems, with emphasis on whatever tools the authors (or vendors) are advocating. Less common, outside of International Test and Evaluation Association events, are discussions of the use and benefits of these tools and techniques applied to test and evaluation. Experience gained working for a small artificial intelligence tool company, whose tools and applications were used for real-time control systems, provided an early, and in many ways more useful, definition of a digital twin than is currently offered in presentations. The control applications I experienced produced models (with appropriate levels of abstraction defined) and tools both to simulate (execute the models) and to run in parallel with the live systems. This provided the control application in use and the digital twin running in parallel for continuous improvement. The tools allowed subject matter experts in the specific problem domain to build their own models, with interfaces and ways to interact with the digital twin.
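As one concrete illustration of that “live system with the twin in parallel” pattern, here is a minimal Python sketch. Every name in it (SensorReading, live_controller, twin_model, the divergence threshold) is a hypothetical stand-in rather than any vendor’s tooling; the point is only that the twin sees the same inputs as the deployed controller, its outputs are compared but never applied to hardware, and divergences are logged to drive continuous improvement.

```python
# Minimal sketch of the "twin in parallel" pattern. Every name here
# (SensorReading, live_controller, twin_model, the threshold) is a
# hypothetical stand-in invented for illustration.

from dataclasses import dataclass


@dataclass
class SensorReading:
    timestamp: float      # seconds since start of run
    temperature: float    # degrees Celsius


def live_controller(reading: SensorReading) -> float:
    """Deployed control law: maps temperature to a valve setting in [0, 1]."""
    return min(1.0, max(0.0, (reading.temperature - 20.0) / 40.0))


def twin_model(reading: SensorReading) -> float:
    """Digital twin: a candidate refinement of the same control law."""
    return min(1.0, max(0.0, (reading.temperature - 18.0) / 36.0))


DIVERGENCE_THRESHOLD = 0.1  # differences worth an engineer's attention


def shadow_step(reading: SensorReading) -> float:
    """Drive the live system; run the twin in parallel and log divergence."""
    live_out = live_controller(reading)  # this value commands the hardware
    twin_out = twin_model(reading)       # compared only, never applied
    if abs(live_out - twin_out) > DIVERGENCE_THRESHOLD:
        print(f"t={reading.timestamp:.0f}s: live={live_out:.3f} "
              f"twin={twin_out:.3f} (divergence logged for review)")
    return live_out


# Replay the same input stream through both the controller and the twin.
for i, temp in enumerate([25.0, 35.0, 55.0]):
    shadow_step(SensorReading(timestamp=float(i), temperature=temp))
```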

Observation: The subject matter expert was, by definition, a user of the application, and thus a merger of developer and user, with the customer in a support role. This kept things in the first segment of the corollary; the second part was addressed by determining and applying the classic goals and methods for testing and evaluating rule-based expert systems.

Conclusion

The true value of test and evaluation comes from a systems thinking view, in which the purpose of the system is aligned both with the stakeholders who make up the customer base and with the users, who actually expect and need the system to make their lives better, safer, and more productive.

Author Biographies

Duard Stephen Woffinden (Steve) served over 22 years as an Army Signal Corps Officer in a variety of tactical and operational assignments. Key assignments related to this paper include Instructor, US Air Force Institute of Technology, Department of Electrical and Computer Engineering (1984-1988); Program Manager, Primary War Headquarters, Supreme Headquarters, Allied Powers Europe (1988-1989); and Director, US Army Artificial Intelligence Center (1994-1997). He has a bachelor’s degree in electrical engineering from Utah State University and a master’s degree in computer science from the Naval Postgraduate School. He currently works as a systems engineer at General Dynamics Mission Systems.

[1] https://www.goodreads.com/quotes/search?utf8=%E2%9C%93&q=wisdom+and+experience&commit=Search
