ITEA provides a variety of professional development formats — including online and face-to-face learning — to help you maintain, develop, or increase your knowledge, problem-solving and technical skills, or professional performance standards. These course offerings are mapped directly to our CTEP certification and the four domains identified by our Board of Examiners (BOE). Designed to train new T&E’rs or to fill knowledge gaps for those who need it, our courses cover the spectrum.
ITEA also offers to bring live courses, tailored to the needs of your organization, to your location, which can reduce your training costs by up to 50%. For more information on hosting a course at your site, please contact us at email@example.com.
Find us on SAM.gov
Professional Development Short Courses
The course offerings below map to, and build knowledge in, the four subject domains identified by the Board of Examiners who created the CTEP exam.
Domain I: Test and Evaluation Planning
Domain II: T&E Design
Domain III: Test and Evaluation Execution
Domain IV: Test Data Analysis, Evaluation and Reporting
Cyber Security and Cyber Resiliency
This two-day course has been designed for the system engineer, program manager, and IA manager. This course is positioned as a mid-level introduction to cybersecurity and information assurance, and it covers a variety of topics in these areas. High-risk and labor-intensive processes such as security test & evaluation, and certification and accreditation procedures are covered in detail. IA risk management is covered across the spectrum of system, C&A, program protection and platform risks, illustrating a useful method of aggregation for comprehensive understanding of IA risk. The course concludes with a detailed exposition of secure network design and construction principles and techniques that can be applied immediately to existing and new networks and systems.
The course is fully updated with the latest information on the DoD’s treatment of cybersecurity. This includes the new implementation of the Risk Management Framework (RMF), the replacement for DIACAP. The course will cover the new processes, the differences between new and old processes, and methods for accelerating both risk management and risk acceptance. We will use a detailed example to illustrate how to implement, monitor and test the methods, and we’ll look at risk aggregation as an avenue to understand system of systems risk, collective (control) failure modes, and aggregated system accreditation.
Fundamentals of T&E Processes
The course describes the key principles of T&E as a critical part of systems engineering and acquisition. Over the last five decades, T&E has evolved from a slogan (“try before buy”) to a set of widely accepted principles and integrated practices. Industry and government experience has produced processes that make T&E a dependable indicator of progress toward achieving system performance objectives and capabilities during a development program. In the course, you will learn about the key policies and practices that have been adopted, what triggered them, and what the future holds in this critical arena.
Operational Design of Experiments (OPDOE) (now additionally offered online!)
This course will give the practitioner the ability to apply the best tools and methods from combinatorial testing and DOE. It covers the key terminology of DOE and various approaches to testing, showing why DOE is the most effective and efficient testing approach. It also covers the activities that must precede a DOE, including Measurement System Analysis (MSA), the first line of defense against variation. Testing strategies, such as screening, modeling, and confirmation, will be discussed along with how they fit into an integrated developmental and operational testing strategy. The 12-step approach to experimental design will be presented to provide a framework for adequately considering all aspects of the test.

Basic graphical and statistical analysis of experimental data will be covered. The concept of and need for looking for variance-shifting factors will be presented, along with screening designs. Response surface designs, such as Box-Behnken and Central Composite designs, will be shown to be more efficient than factorial designs for modeling non-linear responses. Simple rules of thumb will be provided for sample size and design selection, along with determining significance and power. Interpreting regression output, the coding of factors and their levels, and residual analysis will facilitate the analysis of data not collected under a DOE strategy and provide a means of analyzing data from multiple test scenarios.

High Throughput Testing (HTT) provides a combinatorial testing approach that is extremely useful in operational testing when there are many factors, both qualitative and quantitative, each with many levels. Latin Hypercube Sampling and Descriptive Sampling will be shown to be very useful space-filling designs in high dimensions when only a limited number of tests can be conducted.
Nearly Orthogonal Latin Hypercube designs will also be discussed; these give the practitioner power in screening many variables, as when dealing with high-fidelity simulation models from which low-fidelity models can be developed for prediction and risk assessment purposes. The course covers many examples from the world of test and evaluation and gives the student practice at designing tests and analyzing test results. It provides the practitioner with the ability and rationale to make good decisions when conducting both developmental and operational tests under a wide variety of circumstances. DOE will be shown to be the science of data collection as applied to testing, one that belongs in the toolkit of every tester.
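As a rough illustration of two design families discussed above (not course material — the factors and run counts here are hypothetical), a full-factorial design and a basic Latin hypercube sample can both be generated with Python’s standard library:

```python
import itertools
import random

def full_factorial(levels):
    """All combinations of factor levels (a 2^k design when each factor has 2 levels)."""
    names = list(levels)
    return [dict(zip(names, combo)) for combo in itertools.product(*levels.values())]

def latin_hypercube(n_runs, n_factors, seed=0):
    """Space-filling design: each factor's n_runs values cover n_runs equal bins,
    one sample per bin, in shuffled order."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_factors):
        col = [(i + rng.random()) / n_runs for i in range(n_runs)]  # one value per bin
        rng.shuffle(col)
        columns.append(col)
    return list(zip(*columns))  # rows are test runs

# hypothetical factors for a sensor test
runs = full_factorial({"altitude": ["low", "high"],
                       "speed": ["slow", "fast"],
                       "clutter": ["off", "on"]})
print(len(runs))  # 2^3 = 8 runs
lhs = latin_hypercube(n_runs=10, n_factors=4)
```

Note how the Latin hypercube covers four dimensions in only ten runs, while a factorial design grows multiplicatively with each added factor — the trade-off the course’s space-filling-design discussion turns on.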
What T&E’rs Need to Know about Program Management and Systems Engineering and Why
Test and evaluation have too often and too long been perceived by many practitioners of these disciplines as stand-alone processes. Nothing could be further from the truth, as they are the foundations of developing the knowledge required to conduct effective and efficient program management and systems engineering. Therefore, testers and evaluators must understand, speak the language of, and properly integrate with the needs and processes of their major customers, the program managers and systems engineers.
This three-day course presents a basic overview of key program management processes such as leadership, planning, monitoring, control, work breakdown structure, scheduling, budgeting, contracting, and earned value management; and key systems engineering processes such as requirements analysis, functional analysis, partitioning, design, risk management, trade studies, and concurrent and specialty engineering.
This course also includes discussion of some developing engineering challenge areas such as software engineering and test, human systems engineering, autonomous systems development, and cyber engineering and test. Day 1 discussions cover the basic concepts of program management and systems engineering described above. Day 2 is devoted to the role of test and evaluation in program management and systems engineering, unique aspects of all three disciplines in the Federal Government and DOD, and some case studies of notable DOD acquisition programs. Day 3 discussions examine some interesting and important special topics in DOD acquisition and provide a look at the future of DOD acquisition. All of these subject areas are presented with a perspective that will help ensure that testers and evaluators become better informed and more effective members of any development team.
Probability and Statistics for Reliability/Reliability Growth
This five-day course covers the concepts and methods to improve a reliability program across the acquisition life cycle. The focus is both on the proactive approach of designing reliability into the system up front, i.e., Design for Reliability (DFR), and on monitoring reliability improvements through a reliability growth process. Students will be able to construct the reliability growth curves now required in Test and Evaluation Master Plans for major acquisition systems. This course will provide T&E employees with the basic training needed to understand how reliability methods are implemented in T&E. More specifically, students will understand the Identify, Design, Optimize, and Validate (IDOV) phases of DFR and will:
• Know why reliability is an important metric in today’s business culture
• Know the three basic components of dependability: reliability, availability, and maintainability
• Know and be able to describe the major components of the definition of reliability
• Know why testing for failure is the only way to confidently measure and predict product reliability
• Be able to set up, optimize, and interpret the results of reliability tests
• Understand the three different types of distribution parameters and what they mean
• Know what a probability distribution is and be able to interpret the parameters of selected distributions
• Be able to model failure data using selected probability distributions
• Be able to choose the best distribution for a set of data using curve-fitting tools
• Understand the criteria used to evaluate the fit of a distribution
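To make the distribution-modeling objectives concrete, here is a minimal sketch (not from the course, with made-up parameter values) of the two-parameter Weibull reliability function, a workhorse of failure-data modeling that collapses to the exponential model when the shape parameter is 1:

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)**beta): probability the item survives past time t.
    beta (shape) < 1 -> infant mortality; = 1 -> constant failure rate; > 1 -> wear-out.
    eta (scale) is the characteristic life: R(eta) = exp(-1) ~ 0.368 for any beta."""
    return math.exp(-((t / eta) ** beta))

# with beta = 1 the Weibull reduces to the exponential model R(t) = exp(-t/eta)
assert abs(weibull_reliability(1000, 1.0, 2000) - math.exp(-0.5)) < 1e-12

# wear-out example (beta = 3): reliability drops off sharply near the characteristic life
print(round(weibull_reliability(1800, 3.0, 2000), 3))
```

The shape parameter is what distinguishes the “three different types of distribution parameters” discussion in practice: shape, scale, and (in the three-parameter form) a location or failure-free-time parameter.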
Statistical Methods for Modeling and Simulation Verification and Validation
This three-day (24-hour) course introduces students to the statistical tools and methods needed for the verification and validation of models, including simulation models. Basic statistical concepts are introduced first, and the course then moves quickly to multivariate analysis, showing how best to build statistical models for predicting performance measures from multiple predictor variables. The course emphasizes the process for modeling and simulation verification and validation. Exercises and hands-on activities will demonstrate and reinforce the concepts.
The following modular offerings are each a half day in length.
T&E in Software Life Cycles
There are two major categories of software development processes in use today: the software development lifecycle (SDLC) and agile software development. In SDLC, programmers work in parallel, developing modules which will then be combined into subsystems, eventually completing the entire application. In agile software development, programmers work in cross-functional teams, developing the critical path through the application first, then adding features on a priority basis until the application is completed. This module will describe how software testing fits into each of these two categories of software development, the testing methodologies involved, and who is responsible for software testing at various stages of development. A key consideration is the “shift left” philosophy in which testing, in various forms, is performed beginning at the requirements development stage and continues through all stages of development, in both development categories.
T&E Levels and Techniques
An alternative to producing functional software test cases from requirements is to generate them using algorithmic black box and white box test case generation techniques. Black box concepts and techniques include equivalence class testing, boundary value testing, domain analysis, decision table testing, pairwise analysis and testing, and state transition testing. White box concepts and techniques focus on code coverage and control flow testing. In addition, this module will discuss exploratory testing and will go beyond functional testing to include performance testing. This module will also discuss such levels of testing as unit test, integration test, and systems test in software development lifecycle (SDLC) software development and compare this to testing in the agile software development space.
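As a small sketch of one black box technique named above, boundary value analysis for a single integer input range can be generated mechanically (the requirement and range here are hypothetical):

```python
def boundary_values(lo, hi):
    """Classic boundary value analysis for an integer range [lo, hi]:
    test just below, at, and just above each boundary, plus a nominal value."""
    nominal = (lo + hi) // 2
    return sorted({lo - 1, lo, lo + 1, nominal, hi - 1, hi, hi + 1})

# hypothetical requirement: a valid altitude setting is 0..50000 feet
cases = boundary_values(0, 50000)
print(cases)  # [-1, 0, 1, 25000, 49999, 50000, 50001]
```

The two out-of-range values (-1 and 50001) are the invalid-equivalence-class probes; the rest exercise the valid class at and near its edges, which is where off-by-one defects cluster.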
Tester Traits, Relationships and Conflict Resolution
This module explores the human side of testing. It is well known in the information systems industry that people with specific personality traits fit different kinds of job categories. This module will compare the required traits of testing personnel in the SDLC environment with those in agile development. It will discuss the organization and management of testing personnel in the two software development environments. It will also go into conflict resolution among testing personnel, others on the software development team, and users. The relationship between testers and users will receive special attention. Are testers advocates for users or a buffer between users and developers? A special focus will be the interaction between testers and others in the inspections and reviews of requirements, code, and various other software development artifacts. Another issue to be discussed is the role of testers in conflict resolution when disputes arise over issues such as the prioritization of defect correction.
Test Planning and Management
Domains I, II, III, IV
Software testing, whether in the software development lifecycle (SDLC) environment or in the agile development space, must be planned and properly managed. The planning must take into account the expectations of the product owner, the testing resources allotted, the risk associated with the application under development, and the impact of potential defects in the software. Planning must cover functional testing as well as non-functional testing such as performance testing, security testing, user interface testing, and globalization testing. Test management must include the organization of the testing personnel and their interaction with others on the development team. This module will discuss when testing personnel are to be inserted into the software development process, whether testing is to be based only on requirements or will also use algorithmic techniques, how defect management will be handled, and whether test automation techniques will be employed. Further, it will discuss how testers can participate in development efforts in which some or all development is performed by one or more outsourcing vendors, and how contracts with such outsourcing firms can be written.
From Requirements to Test Cases
The philosophy of software testing should move towards a “shift left” concept, meaning that considerations of functional software testing should begin at the requirements stage of systems development. This includes the process of Test-First Development (TFD) and, possibly, its extension to Test-Driven Development (TDD). This module will discuss the benefits of functional software test cases being created at the time requirements are written by cross-functional teams consisting of users, testers, lead programmers, and others appropriate for the particular project. Furthermore, it is essential that a similar cross-functional team conduct a thorough review of the requirements before they are released into development. This applies whether development will be conducted using the software development lifecycle (SDLC) method or the agile software development method. To this end, this module will include a discussion of the different levels and methods of requirements reviews, such as inspections, “reviews” (a different use of this word), and walkthroughs.
Defect Management & Risk-Based Testing
The discovery of a defect in software code is only the beginning of a process to correct it. The defect and a substantial amount of information about it must be recorded. Its impact and risk must be assessed, and a priority relative to other discovered defects must be assigned. Then a plan to resolve the defect must be put in place. Finally, the code in which the defect was discovered must be retested. This module will explain all of these defect management steps, including the metrics involved and the available defect management software. It will also discuss the “S-curve of software testing” and the “zero bug bounce” concept. Risk-based testing begins with the recognition that exhaustive testing of software is not possible and that the goal must be to find and then fix the most important defects as early as possible at the lowest cost within the practical limits of the testing resources available. This module will deal with the sources of risk, risk metrics, the consequences of ignoring risk, and measuring the impact of risk. It will also include original research on the relative value of software test cases based on risk and cost so that the set of test cases chosen within the limited testing resources will be as effective and efficient as possible.
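One simple form of the risk-and-cost trade-off described above (a hedged sketch, not the module’s own research — the scoring formula, names, and numbers are all made up) is to order a test backlog by risk exposure per unit cost:

```python
def prioritize(test_cases):
    """Order test cases by risk exposure per unit cost (likelihood * impact / cost),
    so a limited testing budget is spent on the highest-value cases first."""
    return sorted(test_cases,
                  key=lambda tc: tc["likelihood"] * tc["impact"] / tc["cost"],
                  reverse=True)

# hypothetical backlog: likelihood and impact on a 1-5 scale, cost in hours
backlog = [
    {"name": "login lockout",    "likelihood": 4, "impact": 5, "cost": 2},
    {"name": "report export",    "likelihood": 2, "impact": 2, "cost": 1},
    {"name": "payment rounding", "likelihood": 3, "impact": 5, "cost": 5},
]
ordered = prioritize(backlog)
print([tc["name"] for tc in ordered])  # ['login lockout', 'report export', 'payment rounding']
```

Real risk-based schemes weight the factors differently and revisit scores as defects are found, but the principle is the same: spend the testing budget where expected risk reduction per hour is highest.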
Security Testing Foundations
With today’s sharp focus on information security, it is important to consider all types of threats to computer systems and networks, and then to delve into potential weaknesses in software, how to recognize them, how to prevent their being used to compromise the systems and, finally, how to test them to determine whether any weaknesses are still present. So, we speak of attack surfaces and attack vectors, including the various kinds of malware, and all manner of attack tactics, techniques, and procedures, including the “OWASP Top-Ten”. This module will provide the background as just described, and then go into information security test techniques and procedures. Security testing terms and techniques discussed include threat identification and analysis, attack surface analysis, blue team/red team testing, penetration testing, cyber kill chain, and specific software testing techniques for security. In addition, such topics as security testing planning, security testing teams, threat intent, threat risk and threat prioritization are included.
Interoperability Testing
The term interoperability can be applied to information systems in several ways. One common usage is as a set of standards to enable the interconnection of dissimilar computing devices, as in the OSI Model. But another usage of the term interoperability is in the interaction among software applications that function as a complex system by passing data to each other. While testing individual software applications can be challenging, testing systems of interoperating applications can become a daunting task. This module will explore the issues of and possible solutions to the problems involved in the test and evaluation of interoperating software applications. It will include the issue of conformance testing, i.e., assuring that the software conforms to established standards, as well as interoperability testing in the sense of verifying that two software systems work together in the intended way. Further, it will discuss issues of interoperability testing at the module level in standard software development lifecycle (SDLC) development.