2014 ITEA Journal Issues

December 2014 – Unattended Vehicle Testing

The theme for this issue is Unattended Vehicle T&E, and the issue includes a Guest Editorial, an Inside the Beltway feature, a Hot Wash-Up feature, the Historical Perspectives, and five technical articles. In soliciting articles for this issue, I discovered that the choice of a word can make gathering articles more difficult. My choice of a theme with the word “unattended” was a bad one because it can be taken to mean “not noticed or dealt with” or “not supervised or looked after.” I want to thank Wilson Felder, Ph.D., for actively soliciting the great featured articles we have, and for introducing me to an alternative theme for the next time we consider this topic: Uncrewed Aerial System (UAS) T&E.

Our Guest Editorial is titled “The Future of Test,” and Michael Macedonia, Ph.D., states that we will need simulations to test future systems that themselves include simulation components. These future tests will be marked by statistical analyses that develop confidence intervals concerning the key measures we need to evaluate. The results will require decisions based on confidence less than absolute certainty.

Our Inside the Beltway feature is titled “UAS and the T&E Community” from Thomas Irvine. He describes the current efforts to develop a plan for safe integration of UAS operations in the national airspace in the United States by September 30, 2015. This progress depends on fielding and use of integrated flight test environments. Mr. Irvine then discusses the UAS Traffic Management (UTM) system and future UTM demonstrations.

In his Hot Wash-Up feature titled, “Prospects for ‘File and Fly’ Operation of UAS in Civil Airspace, Challenges and Progress,” Wilson Felder, Ph.D., states that military use of UAS continues to grow while public interest in these vehicles has also increased. He discusses the UAS test sites and the establishment of the FAA center of excellence for UAS research. In his conclusion, Dr. Felder predicts that the work already being done will speed the integration of these systems into our national airspace.

Our Historical Perspectives feature is titled “Much More than an Insect Pest – the ‘Kettering Bug’” from Jim Welshans, Ph.D., and provides a look at the unmanned World War I precursor to modern air-to-ground guided missiles and flying bombs. The Kettering Bug was fielded at a price of $400. While it was never used in combat, in many ways it was the precursor to today’s remotely piloted vehicles.

Our first of five technical articles is “DoD Labs, Navy Warfare Centers are Untapped Resources” written by Rear Admiral Mike Moran and Scott O’Neil. The authors state that United States labs and warfare centers are valuable assets that can be better used to support new, enhanced capabilities. Whether in direct support to DoD programs or in partnerships with industry, the labs and warfare centers can help reduce acquisition costs. The authors list and describe the solution processes for three strategic priorities. They state that focused discussion of work, people, and infrastructure is an imperative in these fiscally challenging times.

For the second technical article in this issue, “Safe Testing of Autonomy in Complex, Interactive Environments (TACE),” David Scheidt, et al, state that autonomous behaviors have arrived. The authors suggest that test teams should measure not only the autonomous system’s ability to perform a mission but also the degree to which the autonomous system satisfies the constraints that limit its behavior. The authors also provide approaches to verification and validation of autonomous systems and propose five test vignettes for autonomous systems under test.
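
One way to picture the constraint-satisfaction half of that measurement is a simple run-time monitor that scores telemetry against the limits placed on the vehicle. The sketch below is a hypothetical illustration in Python, not the TACE architecture; the geofence radius, speed limit, and telemetry fields are invented for the example.

```python
# Hypothetical run-time constraint monitor for an autonomy test: score how
# well the system under test honors its behavioral limits. The constraint
# values and telemetry fields are invented for the example, not TACE's.
from dataclasses import dataclass

@dataclass
class Telemetry:
    t: float          # seconds into the run
    x_km: float       # east position relative to range center
    y_km: float       # north position relative to range center
    speed_mps: float  # ground speed

GEOFENCE_RADIUS_KM = 10.0   # assumed test-range boundary
MAX_SPEED_MPS = 60.0        # assumed safety speed limit

def constraint_violations(track):
    """Return (time, constraint) pairs for every sample that exceeds a limit."""
    violations = []
    for s in track:
        if (s.x_km**2 + s.y_km**2) ** 0.5 > GEOFENCE_RADIUS_KM:
            violations.append((s.t, "geofence"))
        if s.speed_mps > MAX_SPEED_MPS:
            violations.append((s.t, "speed"))
    return violations

# A run passes the constraint metric if the list is empty; the fraction of
# violation-free samples is one simple score for partial compliance.
track = [Telemetry(0.0, 1.0, 2.0, 45.0), Telemetry(1.0, 9.8, 3.1, 62.0)]
print(constraint_violations(track))
```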

In our third technical article, Charles Gundersen and Jerry Armstrong deliver an overview of their very detailed analysis in “The Mark 6 Magnetic Influence Exploder,” an article that absolutely fits the theme. The authors took a mountain of captured data from almost 70 years ago, portrayed the data in easy-to-understand graphics, and helped us understand what may have been missed in Mark 6 system testing. The authors conclude that the proximity exploder may have been valid, but its application was hindered by a lack of testing. The system’s use in combat illustrated that it was a technology slightly premature for the war at hand.

In “A Study of Planned RDT&E Investment versus Cost Growth in Major DoD Acquisition Programs,” Zaw Tun, et al, indicate that the knowledge gained from this study can be used as a guide for acquisition practices to attain desired program outcomes. The authors describe the most influential phase of the acquisition lifecycle, the importance of adequate planning, research, development, test and evaluation (RDT&E) cost, the study methodology, and the results. The conclusion is that late discovery of problems resulting from poor planning early in the acquisition phase can be somewhat mitigated by RDT&E investment at Milestone B and by use of the quantitative analysis methods proposed in this paper.

For our final technical article in this issue, Joseph Nichols, Ph.D., in “Practical Considerations for Testing Intelligent Air Vehicles,” describes the need for evolving test techniques to permit evaluation of learning systems in UAS. Dr. Nichols states that for an intelligent, learning UAS, the same input may not always produce the same result. The author goes on to describe several practical solutions to these evaluation gaps. He concludes that, even with complicated logic and large decision trees, intelligent systems can be fully assessed through scenario-based evaluations if the UAS is designed with test in mind.
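
Because a learning system may respond differently to identical stimuli, one generic way to score a scenario-based evaluation is to repeat the scenario many times and place a statistical bound on the success rate. The sketch below uses an exact binomial (Clopper-Pearson) lower confidence bound; it is a general-purpose illustration, not Dr. Nichols’s method, and the run counts are invented.

```python
# Generic scoring for a repeated, nondeterministic scenario: run it n times,
# count successes, and report a one-sided lower confidence bound on the true
# success probability (exact binomial, Clopper-Pearson).
from scipy.stats import beta

def success_rate_lower_bound(successes, runs, confidence=0.90):
    """Lower bound b such that P(true success rate >= b) >= confidence."""
    if successes == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, successes, runs - successes + 1)

# Example: the same scenario repeated 30 times with 27 successful outcomes.
print(round(success_rate_lower_bound(27, 30, confidence=0.90), 3))
```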

I hope you enjoy this last issue of 2014 for The ITEA Journal of Test and Evaluation. By the time you receive issue 35-4, the March 2015 issue 36-1 will be in final preparation. That theme will be “The Right Mix of T&E Infrastructure.” For the next issue, 36-2, the deadline for submissions is March 1, 2015, and the theme will be “Test Methodology Rigor.” Articles of interest include measurement system analysis, upfront analysis, use of statistical methods, design of experiments, and the setting of confidence and calculation of statistical power.

We have posted all themes for the remainder of 2015 and for 2016. Please provide feedback on the choice of themes, and please write early and often. And, I’m still waiting for the first article to be submitted to the Referees.


 

September 2014 – Modeling and Simulation Use in T&E

The theme for this issue is Modeling and Simulation Use in T&E, and the issue includes a Guest Editorial, Inside the Beltway feature, Hot Wash-Up feature, a combined Historical Perspectives and Book Review feature, and seven technical articles. Relative to this theme, the following topics were suggested as discussion points: Is the full power of modeling and simulation (M&S) used today in T&E? M&S allows us to go to places that we cannot otherwise go or to use systems that we otherwise cannot use. Where should we make changes to use less or more M&S in better or different ways? How can we make sure the M&S is good enough for the purpose intended—that is, are there lessons to be learned about verification, validation, and accreditation of M&S or software in general that we should consider? How can we make sure the databases are good enough—that is, are there lessons learned about verification, validation, and certification of databases that we should consider? Have we developed ways to combine live test data with M&S data to provide a more complete model of system performance across a broader operational envelope? Can use of M&S help Developmental Test determine if the system is ready for low rate initial production? Articles were invited in the areas of technology development, policy, leveraging training venues for T&E, success stories of using M&S, and lessons learned from using M&S.

Our Guest Editorial is titled, “Modeling and Simulation Indicators,” and Amy Henninger, Ph.D., provides an article stating that to anticipate trends in Modeling and Simulation (M&S), watch the trends in Information Technology. Dr. Henninger describes the current trends in Information Technology and then discusses the future trends she anticipates in M&S: cloud and virtualization, mobility, big data, and augmentation.

Our lead Inside the Beltway feature is titled, “T&E in the Department of Homeland Security,” with Steve Hutchison, Ph.D., discussing exciting things happening in the Department of Homeland Security T&E. He first introduces the organization of the Department of Homeland Security (DHS), and then he explains that, as the Director of the DHS Office of Test and Evaluation, his objective is to help every DHS program plan and execute robust T&E throughout the lifecycle and bring credible assessments to all acquisition decision events. He then discusses how his organization has initiated a series of changes to improve T&E support to acquisition and tech transition efforts.

In our Historical Perspective feature, “The World of Simulation,” Steve Gordon, Ph.D., discusses some uses and benefits of M&S, provides a brief history of M&S, and then completes a short book review of The Wonderful World of Simulation, authored by Henry “Hank” Okraski.

In his Hot Wash-Up feature titled, “How Modeling and Simulation in T&E Can Help Bring the Acquisition Cycle Back in Line with the Innovation Cycle,” Wilson Felder, Ph.D., states that the acquisition of complex military and aerospace systems faces a crisis. One symptom of the crisis is the growth in cost and time-to-market for aerospace products. After discussing other symptoms, Dr. Felder states that he believes they all reflect a single trend: increasing aversion to risk. He also discusses agile development and modeling and simulation as tools that can be applied to resolve or moderate these challenges.

In our first of seven technical articles, “Flight Test Correlation Modeling and Autoland Monte Carlo Analysis for Aircraft Certification,” Kevin Hougen, et al, state that flight test correlation is a process used to validate an aircraft simulation, and the process is used to support the autoland (automatic landing) requirements for autopilot certifications. This automatic landing maneuver is one of the most critical functions on the aircraft. For these certifications, a Monte Carlo simulation is shown to be a very effective tool over the full envelope of autoland performance, and the techniques developed here provide a much faster turn-around time.
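
To give a flavor of the approach, a Monte Carlo campaign draws the uncertain landing conditions at random, runs the correlated simulation for each draw, and accumulates statistics on the touchdown outcome. The toy model below is only a placeholder for a real autoland simulation; the distributions, coefficients, and touchdown-zone limits are invented and are not the authors’ certification data.

```python
# Minimal Monte Carlo sketch of an autoland-style touchdown dispersion study.
# The surrogate model and the limits are placeholders, not certification data.
import random

random.seed(1)
N_RUNS = 10_000
TOUCHDOWN_ZONE_FT = (200.0, 2700.0)     # assumed acceptable touchdown zone

def touchdown_distance_ft(headwind_kt, weight_lb, flare_gain):
    """Toy surrogate standing in for a full, correlated autoland simulation."""
    return (1500.0
            - 12.0 * headwind_kt
            + 0.004 * (weight_lb - 150_000)
            + 300.0 * (flare_gain - 1.0))

in_zone = 0
for _ in range(N_RUNS):
    headwind = random.gauss(10.0, 7.0)           # knots
    weight = random.uniform(130_000, 170_000)    # pounds
    flare = random.gauss(1.0, 0.05)              # controller tolerance
    d = touchdown_distance_ft(headwind, weight, flare)
    in_zone += TOUCHDOWN_ZONE_FT[0] <= d <= TOUCHDOWN_ZONE_FT[1]

print(f"estimated in-zone probability: {in_zone / N_RUNS:.4f}")
```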

For the second technical article in this issue, George Gardner, III, Ph.D., in “The Difference Between Verification and Validation in Systems Engineering,” suggests a unifying concept in the areas of systems engineering verification and validation. He documents the emerging consensus of definitions of product verification and validation, and he suggests a future analysis of software verification and validation for various software standards and capability models.

In “Characterization and Modeling of Target Coordinate/Elevation Generation Systems,” Jon Hodge highlights the fact that standard statistical methods often fail to adequately generate target coordinates and elevation. The author develops and describes two curved-earth algorithms that appear to characterize targets more accurately than traditional statistical methods. He recommends further study of the modeling algorithms to see if they can accommodate higher fidelity system characterization.

For the next article, Raymond Hill, Ph.D., et al, in “What Department of Defense Leaders Should Know about Statistical Rigor and Design of Experiments for Test and Evaluation” look at increasing the use of mathematical and statistical methods in T&E. The authors concentrate on describing Design of Experiments (DOE) and dispelling some misconceptions and confusion surrounding DOE for Department of Defense acquisition and T&E. They strongly advocate use of statistically based test planning, investigation of past performance from previous T&E, and thoughtful application of statistical science.

In “Common IED Exploitation Target Set (CIEDETS)-Update and New Directions,” Francis Canning, Ph.D., et al, cover how CIEDETS takes numerous inputs of data to calculate the probability of detection for a threat by a specified sensor on a specified mission. But the main purpose of this article is to cover the CIEDETS technology update and to propose possible new directions for CIEDETS, including, perhaps, homeland security as an emerging application. This potential new application would be related to protection of congested population centers with the addition of new sensors, perhaps on drones.

Our next technical article, “Innovation in Testing for the Early Development of Alzheimer’s and Cognitive Impairment,” by Nikhil Patel and Darin Hughes, reports on development and initial assessment of a tool that measures cognitive processing speeds when the subject is presented with simultaneous and conflicting sensory cues (sensory dissonance). These inventors have developed a software tool that administers a cognitive capacity test; the test measures cognitive impairment using an inexpensive and non-intrusive testing process easily used in a primary care setting. The authors state that early identification (10-20 years before complete onset) of cognitive impairment is possible and can lead to early intervention and potentially a significant delay in the onset of Alzheimer’s.

For our final technical article in this issue, Curt Hohmann, Ph.D., in “A Mathematical Technique for Solving and Implementing the Loaded Catenary Problem for an Aerial Cable,” discusses the aerial cable system at White Sands Missile Range (WSMR), where objects of interest (perhaps models or targets) are moved along the aerial cable. Dr. Hohmann would like to provide a real-time display of the model’s movement along the cable, but that would require an accurate mathematical model of the cable and its distortion under the object’s weight. The mathematical model developed appears to model the cable sufficiently well to allow operational testing to be displayed in the control center at WSMR.
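
For readers who want the underlying geometry, the classical loaded-catenary relations are sketched below in LaTeX. They are the generic textbook form, with w the cable weight per unit length, H the horizontal component of tension, and P the weight of the carried object; they are not necessarily the exact formulation Dr. Hohmann implements.

```latex
% Classical catenary relations for a cable of weight-per-length w under
% horizontal tension H, carrying a concentrated load P at x = x_P
% (generic textbook form, not necessarily the article's exact model).
\[
  y(x) = a \cosh\!\left(\frac{x - x_0}{a}\right) + C,
  \qquad a = \frac{H}{w}
\]
% The slope (and hence the vertical tension component) jumps at the load
% point so that the two cable segments together support the object's weight:
\[
  H \left[ y'(x_P^{+}) - y'(x_P^{-}) \right] = P
\]
% Matching the two catenary segments to the support heights and this jump
% condition gives the deflected cable shape needed for a real-time display.
```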

I hope you enjoy this third issue of 2014 for The ITEA Journal. By the time you receive issue 35-3, the December 2014 issue 35-4 will be in final preparation. That theme will be “Unattended Vehicle Testing.” For the next issue, 36-1, the deadline for submissions is December 1, 2014, and the theme will be “The Right Mix of T&E Infrastructure.” Articles of interest include determining the right mix of T&E infrastructure to assist future program needs, encroachment, benefits from integrated testing, and leveraging commercial test venues.

We have posted all themes for the remainder of 2015, and we are asking for feedback on potential themes for 2016. Please write early and often. And, I’m still waiting for the first article to be submitted to the Referees.


June 2014 – Training the Future T&E Workforce

The theme for this issue is Training the Future Test and Evaluation (T&E) Workforce, and the issue includes a Guest Editorial, two Inside the Beltway features, a Hot Wash-Up feature, two Historical Perspectives, and seven technical articles. Test and evaluation over the next decade will need a workforce of professionals from many academic disciplines. The needed academic majors will certainly include science, technology, engineering, and math (STEM); yet, management, communications, psychology, and other types of majors also may be needed for the T&E profession. We will need a steady supply of the right academic majors from our technical schools, colleges, and universities; and we will need initial and currency training for the incoming workforce to be ready to become T&E professionals. Innovative ways to attract the new workforce, provide recurring training to the existing workforce, and fund career enhancement will help T&E attract, enhance, and retain the workforce needed.

Our Guest Editorial is titled “The State of the Acquisition Test and Evaluation Workforce,” and David Pearson provides a snapshot of the state of today’s T&E workforce, discusses how we can ensure the capability of the workforce, and covers the need for career-long training. Mr. Pearson highlights the growing number of training opportunities for the current T&E workforce, which he concludes is a healthy one.

Our lead Inside the Beltway feature is titled, “The Only Constant is Change: Spectrum Stewardship for the 21st Century,” with Derrick Hinton discussing the voracious demand for the commercial broadband spectrum driven by personal and business consumption and the simultaneous increase in federal requirements for spectrum. In light of the public policy pressures to move more spectrum to commercial users, Mr. Hinton then discusses new strategies to handle the imminent spectrum crisis.

For our second Inside the Beltway feature, “Distributed – The Next Step in T&E,” Bernard “Chip” Ferguson and Neyer Torrico discuss the need for a rigorous T&E process and how distributed testing is a methodology that supports testing early and often in support of that process. They state that distributed testing provides an agile, persistent T&E environment over the lifecycle of the system and is a cost-effective way to deliver a test-fix-test capability.

In his Hot Wash-Up feature titled, “How Should We Credential the T&E Workforce in the Future?,” Dr. Wilson Felder states that credentialing members of the T&E profession is one of the most important roles of a professional society like the International Test and Evaluation Association. Dr. Felder then provides his views on how the future T&E workforce should be trained and credentialed. For him, the credentialing program for T&E professionals is a work in progress.

Our lead Historical Perspective feature, “Lightweight Fighter Program YF-16/YF-17 Fly-Off,” was provided by the Honorable Pete Adolph and Wade Scrogham. The authors capture the motivation for the requirement for a highly maneuverable air superiority fighter. Based on the operational need developed from Vietnam War experiences and the studies, debates, and commissions that followed, the RFP was issued in January 1972. The authors then discuss the testing program execution, innovations, results, and the decision to select the winner between the two contenders.

For the second Historical Perspective feature, “Missionaries Surrounded by Jeering Unbelievers: NASA Glenn Research Center’s Rocket Engine Test Facility,” Dr. Jim Welshans points out that this facility was built in the mid-1950s and was the largest sea-level testing facility for high-energy rocket propellants in the United States. Dr. Welshans states that the staff of the Rocket Engine Test Facility produced innovative approaches to rocket test facilities and rocket engine design until the facility ceased operation in 1995 and then was razed in 2003.

In our first of seven technical articles, “Developing the Future Modeling and Simulation Workforce with the Skill Sets to be Competitive in a Global Environment,” Henry C. “Hank” Okraski provides a status report on expertise in science and technology in the United States. Mr. Okraski describes the global situation for science and technology majors and the forces steering current students toward majors outside the fields that T&E might require. He then looks at the Florida Modeling and Simulation (M&S) Workforce Pipeline, where M&S is taught and used to improve motivation and understanding in STEM courses.

For the second technical article in this issue, Dr. Michael Kendra, et al, in “K-24: Realizing the Full Benefit of Science, Technology, Engineering, and Mathematics (STEM) Investment,” discuss how and why the Air Force Office of Scientific Research (AFOSR) supports approximately 2,000 graduate students and postdoctoral associates each year. Hence the title’s “K-24,” signifying that STEM education continues to be needed through 12 more years beyond K-12 to produce the expertise needed for tomorrow. And, as Dr. Kendra explains, the AFOSR support for education and learning continues for the T&E workforce well beyond K-24.

In “Air Force Enterprise Effort to Improve the Acquisition Workforce in Testing,” Dr. Darryl Ahner, et al, highlight that one goal of training a more technically sound acquisition workforce is to help that workforce understand the role and benefits of a continuous testing process. Courses for the current workforce are being designed to stress early tester involvement, statistical test design, and a focus on reliability. Testing, like systems engineering, is a process that supports the program continuously, informs decisions, and also supports verification and validation.

In “Atlantic Test Ranges’ Summer Program Trains Future T&E Workforce,” Theresa Hopkins covers the summer intern program at Naval Air Systems Command at Naval Air Station, Patuxent River, Maryland, for approximately 117 students each year. These summer jobs are competitively staffed, and the students work in engineering, science, and management positions, earning money for needs like the next college year. Here, the future T&E workforce may be the interns working in the summer programs.

For the next article, Michael Harman, in “Understanding Requirements for Test Planning: They Are Our Foundation,” looks at issues with requirement statements that are vague or ambiguous and lack detail about the minimum conditions that must be met. Requirements drive test objectives, the responses that need to be measured, and the factors that impact those responses. The responses and factors, in turn, drive the test design and the analysis methods. All parts of this process are anchored in requirements that must be clarified if the analysis is to support conclusions concerning satisfaction of those requirements.
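
The traceability Mr. Harman describes can be captured in a record as simple as the one sketched below, which links a clarified requirement to the responses that will be measured and the factors that will be varied. The structure and the sample requirement are hypothetical, offered only to make the chain from requirement to test design concrete.

```python
# Hypothetical traceability record: a clarified requirement drives the
# measurable responses, which in turn drive the factors in the test design.
from dataclasses import dataclass, field

@dataclass
class RequirementTrace:
    requirement: str                               # clarified, testable statement
    responses: list = field(default_factory=list)  # what will be measured
    factors: list = field(default_factory=list)    # what will be varied

req = RequirementTrace(
    requirement="Detect a 1 m^2 target at 20 km or more in sea state 3 or less",
    responses=["detection range (km)", "probability of detection"],
    factors=["sea state", "target aspect angle", "radar mode"],
)
print(req)
```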

Our next technical article, “Thermal Contributions to the Mechanical High-Speed Sliding Wear,” by Major Rodolfo G. Buentello and Dr. Anthony N. Palazotto, discusses simulation of high speed sliding wear. The sliding wear of an object traveling on the Holloman High-Speed Test Track was modeled with a finite-element simulation, and the simulation results were compared to a previous field (live) test of the object on the track. The simulation, if an effective representation, would help reduce field testing, saving funding and time at many points in development.

For our final technical article in this issue, Patrick Sylvester, in “Field Testing Backflow Prevention Assemblies,” initially discusses why backflow preventers were designed, what they do, and what happens if they do not function properly. The article then covers how these critical devices are tested and how the testers are trained in classes and on field site surveys, with part of their examination consisting of field site surveys to note backflow requirements and correct unprotected cross-connections.

I hope you enjoy this second issue of 2014 for The ITEA Journal of Test and Evaluation. By the time you receive issue 35-2, the September 2014 issue 35-3 will be in final preparation. That theme will be “Modeling and Simulation Use in T&E.” For the next issue, 35-4, the deadline for submissions is September 1, 2014, and the theme will be “Unattended Vehicle Testing.” Articles of interest include unattended vehicle testing, control and safety system testing, autonomous behavior testing, UAV ranges, and testing dissimilar mixes of vehicles.

We have posted all themes for the remainder of 2014 and for 2015, and we are asking for feedback on potential themes for 2016. Please write early and often. And, I’m still waiting for the first article to be submitted to the Referees.


March 2014 – How Much Testing is Enough?

 

The theme for this issue is How Much Testing is Enough: How Do We Right-Size Tests?, and the issue is filled with a Guest Editorial, an Inside the Beltway feature, an Historical Perspective, a Hot Wash-Up feature, and eight technical articles. The theme was selected because test and evaluation (T&E) takes time and money, and T&E professionals want to test enough to have acceptable confidence in the evaluation of the merits of the system. They also want to test enough to have a good probability of determining whether the system under test meets requirements and is effective and efficient in the intended operating environment.

What we discuss in this issue will, in many cases, be methods developed by great minds, two of whom are pictured below (Carl Friedrich Gauss on the left and Sir Ronald Fisher on the right; images: Wikipedia).

[Portrait: Carl Friedrich Gauss]

[Portrait: Sir Ronald Fisher]

Those of us who use regression analysis, Design of Experiments (DOE), control of errors or noise in measurement systems and in procedures, and many other rigorous statistical methods are using the foundational theories developed and facilitated by Carl Friedrich Gauss (1777-1855) and Sir Ronald Aylmer Fisher (1890-1962).

Many of these features and articles in this issue will discuss statistics, scientific methods, experimental design, and rigor. Some of the discussions will highlight the tools that can be used to size the test and structure a rigorous statistical evaluation that will help ensure the system performance is known. Some of the tools will help determine when we can stop testing because we otherwise might risk wasting valuable resources with little return. In the end, someone needs to justify whether the testers have tested enough and someone else needs to validate that claim.

Our Guest Editorial is titled “Taking the Next Step: Improving the Science of Test in Department of Defense Test and Evaluation,” and Dr. V. Bram Lillard discusses ways to use a scientifically defensible methodology to answer the question in the theme. He points out that the full toolset of scientific methods is still not being used for some tests, but he also states that more and more sources of help are being offered to the test teams. Dr. Lillard also proposes the need to institutionalize the test designs and scientific analysis methods, tailored across all phases of Department of Defense (DoD) testing.

In our Inside the Beltway feature, “Efforts to Establish More Rigor in Developmental Test and Evaluation (DT&E),” Dr. Darryl Ahner and Dr. C. David Brown discuss the development of the Scientific Test and Analysis Techniques (STAT) in T&E Implementation Plan and the stand-up of the STAT Center of Excellence (COE). The COE has recently increased its support from 20 to 25 programs, with the goal of COE support to facilitate a culture of more rigorous DT&E planning in program offices. The COE is now expanding efforts to collaborate with T&E centers and train T&E personnel in application of STAT.

In our Historical Perspective feature, “Whatever Happened to Good Old-Fashioned DT&E?” Dr. Steve Hutchison transformed his plenary presentation given at the 2013 ITEA Symposium into an article detailing the changing focus of developmental test and the long history of Developmental Test and Evaluation (DT&E) over the past 40 years. Dr. Hutchison discusses the motivations for establishing and re-establishing developmental test oversight offices, and he discusses the leaders who established and served in those offices to improve systems delivered to our warfighters and to ensure the funding provided by the taxpayers was well spent. Dr. Hutchison proposes that DT&E should ensure readiness for production by stressing early appropriate involvement in acquisition programs. DT&E will be key to improving acquisition outcomes.

In his Hot Wash-Up feature, “How Much Testing is Enough? Today’s Highly Distributed, Digitally Rich Systems-of-Systems Change Our Answer to the Question,” Dr. Wilson Felder discusses his thoughts on how changes in the make-up of weapons systems and command and control systems have changed the way we can answer the question in the theme. Interconnected, digital systems supported by increasingly sophisticated digital processing are changing the problem. He proposes that these more complex systems-of-interconnected-systems are changing how we test and how we can predict when to stop testing. Maybe testing, or at least verification and validation, can never stop. Dr. Felder proposes specific actions that can help us better handle these complex testing challenges.

In our first of eight technical articles, “Scientific Methods for Improving DoD Test and Evaluation — Statistical Test Optimization Synthesis Panel,” by Dr. Laura Freeman, et al, frames the background of DoD’s increasing emphasis on using scientific test and analysis methods. The authors then discuss the characteristics of DOE and how DOE enables planners to maximize the knowledge gained while adhering to resource constraints. The article then summarizes types of designs, integrated testing with sequential experimentation, short summaries of case studies in DOE, and keys to success for future DOE analyses.
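
As a small, concrete example of the kinds of designs the panel surveys, the sketch below enumerates a two-level full factorial in three factors and the half-fraction defined by the generator C = AB, which trades some information about interactions for half the runs. The factor names are invented; this is generic DOE bookkeeping, not the panel's case studies.

```python
# Generic two-level factorial designs: a 2^3 full factorial (8 runs) and the
# half-fraction defined by the generator C = AB (4 runs). Factor names are
# purely illustrative.
from itertools import product

factors = ["A: altitude", "B: speed", "C: sensor mode"]
full_factorial = list(product((-1, +1), repeat=3))
half_fraction = [run for run in full_factorial if run[2] == run[0] * run[1]]

print(f"full factorial: {len(full_factorial)} runs, "
      f"half-fraction: {len(half_fraction)} runs")
for run in half_fraction:
    print(dict(zip(factors, run)))
```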

In “Employing Power to ‘Right-Size’ Design of Experiments (DOE),” Mr. Mark Anderson and Mr. Patrick Whitcomb highlight how DOE uses power calculations with trade-offs to right-size tests. In general, power is the probability of detecting a significant effect, but the power estimate depends on how certain other parameters are set. The authors discuss how the confidence, the shift in the output (signal) desired to be detected, the noise in the system (standard deviation of the output), and the sample size of the DOE all affect the power. Once these parameters are set, the tester can ask the DOE software to estimate the power; if the power is too high or too low, the tester can adjust any one or more of the parameters to right-size the DOE. When power is too high, it may be perfectly fine to reduce sample size. If the power is too low, some significant effects will probably be missed, and the usefulness of the test would be in doubt.
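
To make the trade-off concrete, the sketch below computes power for a simple two-sample comparison from the same ingredients the authors list: confidence (through alpha), the shift to be detected (signal), the noise (standard deviation), and the sample size. It is a generic textbook calculation, not the authors' DOE software.

```python
# Power of a two-sided, two-sample t-test from the four ingredients named
# above: alpha (confidence), delta (signal), sigma (noise), n per group.
from scipy.stats import nct, t

def two_sample_power(delta, sigma, n_per_group, alpha=0.05):
    """Probability of detecting a true mean shift `delta` given noise `sigma`."""
    df = 2 * n_per_group - 2
    ncp = delta / (sigma * (2.0 / n_per_group) ** 0.5)   # noncentrality
    t_crit = t.ppf(1 - alpha / 2, df)                    # two-sided critical value
    return 1 - nct.cdf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)

# Right-sizing in action: adding runs raises power; a noisier system lowers it.
for n in (4, 8, 16, 32):
    print(n, round(two_sample_power(delta=1.0, sigma=1.5, n_per_group=n), 3))
```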

In “The ‘E’ Before the Efficient and Rigorous ‘T’: From Developmental Evaluation Framework to Scientific Test and Analysis Techniques Implementation,” Dr. Suzanne Beers, et al, discuss why and how DT&E planners should follow a logical process to set up their testing. The authors propose that if testing is designed to address risk, inform decisions, and characterize decisions, then the design of the evaluation framework should be part of the upfront analysis. The suggested process is to frame the evaluation, and then plan the best, most rigorous and efficient test. This consideration of the evaluation framework early in the test planning allows a more disciplined test with the evaluation end state clearly captured in the test plan. Most testers can recount the time they were given a test to finish just before evaluation with a few hours or days left in the test program, minimal funding, and three test objects to evaluate…oh, and we need good statistical power.

The next technical article, by Dr. Tom Roltsch, has a focus similar to the previous article. Dr. Roltsch shows that it is essential in early test planning to determine what type of data the test will collect and how that data will be used to answer the key questions about the system under test. He demonstrates the use of tolerance intervals to help testers add value and efficiency to the testing process. Dr. Roltsch states that one way testers can master sample size determinations, and therefore help predict and control test costs, is through the use of tolerance intervals.
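
As a worked illustration of the idea, under the usual normality assumption a two-sided tolerance interval is the sample mean plus or minus k times the sample standard deviation, where k depends on the sample size, the population coverage desired, and the confidence level. The sketch below uses the common Howe approximation for k; it is a generic illustration, not Dr. Roltsch's procedure.

```python
# Approximate two-sided normal tolerance interval factor k (Howe's method):
# the interval mean +/- k*s covers a fraction `coverage` of the population
# with probability `confidence`, under a normality assumption.
from math import sqrt
from scipy.stats import norm, chi2

def tolerance_k(n, coverage=0.90, confidence=0.95):
    z = norm.ppf((1 + coverage) / 2)          # coverage quantile
    c = chi2.ppf(1 - confidence, n - 1)       # lower chi-square quantile
    return z * sqrt((n - 1) * (1 + 1 / n) / c)

# Larger samples shrink k, so the planner can trade test runs against the
# width of the interval reported for, say, a measured miss distance.
for n in (10, 20, 50):
    print(n, round(tolerance_k(n), 3))
```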

In “Performance Testing and Evaluation of Transformative Apps Devices,” Mr. Anthony Downs, et al, discuss the current “TransApps” program. The National Institute of Standards and Technology (NIST) is responsible for designing and implementing the tests for approximately 60 software applications on smart phones, including looking at hardware-software interactions. The ultimate goal of this program is to find ways to enhance the warfighter’s effectiveness on and off the battlefield with a flexible and secure suite of apps with rapid fielding of new capabilities and updates. The NIST team has developed comparative test methods for handheld device components, and their efforts have led to more rapid fielding of key technology.

For “Instrumentation Advances for Thermocouple Attachment to Composite Structures in Extreme Environments,” the authors, Mr. Christopher Allen and Mr. William Boles, propose standard questions that can guide testers in selecting the appropriate sensor(s) and attachment method(s), since both play a role in gathering the most accurate data. The authors look at a specific attachment technique that maintains a constant bond on the composite without significantly influencing the point estimate; the consistency of the bond therefore enhances repeatability of measurement. The new bond attachment technique is the standard in-house practice in high-temperature ground tests by their team and has been tested up to 3600°F.

In “Unified Modeling Language Interpretation and Test Suite Generation for Complex Systems,” the authors, Mr. Aditya Akundi, et al, demonstrate the use of Unified Modeling Language (UML) diagrams to help systems engineers develop architectural products as well as generate optimal test suites. These UML diagrams can be used to show how the DoD Architectural Framework (DoDAF) applies object-oriented design principles to benefit systems engineering. Mr. Akundi expands on the methods described by using an example test suite from the Airborne Warning and Control System of Systems program.

For the final technical article, “Is Agile Right for Hardware?” Ms. Gina Parodi de Reid describes the advantages and disadvantages of applying agile processes, traditionally used for software development, to hardware development. She highlights the fact that, regardless of whether it is applied to hardware or software, the agile process requires the right set of skills in the team. Ms. Parodi de Reid also recommends that the agile process be tailored for use on a hardware program. Her paper illustrates the steps taken to integrate hardware into the agile process.

Happy New Year! I hope you enjoy this first issue of 2014 for The ITEA Journal of Test and Evaluation. By the time you receive issue 35-1, the June 2014 issue 35-2 will be in final preparation. That theme will be “Training the Future T&E Workforce.” For the next issue, 35-3, the deadline for submissions is June 1, 2014, and the theme will be “Modeling and Simulation Use in T&E.” Articles on use of modeling and simulation (M&S) for test, M&S verification and validation methods, and use of M&S to augment live testing are requested.

We have posted all themes for the remainder of 2014 and for 2015. Please write early and often. And, I’m still waiting for the first article to be submitted to the Referees.