
SEPTEMBER 2024 | Volume 45, Issue 3

Mission Engineering

Dr. Judith S. Dahmann

The MITRE Corporation
McLean, Virginia

Gabriela I. Parasidis

Lead Systems Engineer, MITRE
McLean, Virginia

DOI: 10.61278/itea.45.3.1001

Abstract

The US Department of Defense (DoD) has expanded its emphasis on the application of systems engineering to ‘missions’ – that is, engineering a system of systems (SoS), which includes organizations, people, and technical systems, to provide desired impact on mission outcomes. Traditionally, SoS engineering focused on designing systems or SoS to achieve specified technical performance. Mission engineering goes one step further to evaluate the performance of the SoS in achieving the mission or capability objectives when implemented in a realistic scenario in a simulated environment. Mission engineering not only evaluates whether the SoS has the expected impacts but also determines whether those impacts result in mission success. The DoD mission engineering methodology applies digital model-based engineering approaches to assess how well a SoS achieves mission objectives and closes mission capability gaps. An approach to implementing this DoD mission engineering methodology was developed and implemented in support of the Rapid Defense Experimentation Reserve (RDER) initiative for the Office of the Under Secretary of Defense (OUSD) for Research and Engineering (R&E). This paper presents the current US DoD mission engineering methodology and describes the approach used to apply it to mission engineering analyses in support of the RDER initiative, using a notional example to illustrate the approach.

Keywords: systems engineering, mission engineering, systems of systems, digital engineering, operational analysis

Introduction

The US Department of Defense (DoD) has expanded its emphasis on the application of systems engineering to ‘missions.’ This paper provides an overview of mission engineering (ME) as currently addressed by the US DoD, including the definition and motivation for mission engineering, the current US DoD ME methodology as described in the Mission Engineering Guide (MEG) 2.0, how digital engineering (DE) has been used to implement ME in support of the Rapid Defense Experimentation Reserve (RDER) initiative, and the various ways the results of ME analysis are used to drive investments and decision processes. The most recent DoD definition of ME is “an interdisciplinary approach and process encompassing the entire technical effort to analyze, design, and integrate current and emerging operational needs and capabilities to achieve desired mission outcomes” [1]. More broadly, the Systems Engineering Body of Knowledge (SEBOK) states, “mission engineering describes the application of systems engineering to the planning, analysis, and designing of missions, where the mission is the system of interest” [2].

The DoD’s recent interest in applying systems engineering to ME is based on the recognition that while engineering effective defense systems is critical, it is equally important that defense systems work together effectively as a SoS to achieve mission outcomes when deployed in an operational environment. Throughout this paper, the following US DoD terms and definitions will be used.

  • Mission: The “task, together with the purpose, that clearly indicates the action to be taken and the reason therefore. More simply, a mission is a duty assigned to an individual or unit” [3].
  • Mission Thread (MT): The “activities of a given mission approach” [1].
  • Mission Engineering Thread (MET): “How the mission activities related to the actors, systems, and organizations are executed in a specific mission context” [1]. METs are equivalent to ‘kill chains’ or ‘effects chains.’
  • Mission Architecture: “An interwoven effects web, or kill web, comprised of many mission threads and METs” [1].
  • Mission Engineering (ME): “the deliberate planning, analyzing, organizing, and integrating of current and emerging operational and system capabilities to achieve desired operational mission effects” [4]. In other words, mission engineering includes designing and integrating a SoS to achieve mission outcomes and evaluating the SoS’s ability to achieve those mission outcomes in a relevant mission context within a simulated environment.

As shown in Figure 1, ME treats the end-to-end mission as the “system”.

Figure 1. Mission Engineering links Systems of Systems with Mission Outcomes

The US DoD has developed guidance on ME, and there is a growing body of ME practice spanning competencies, education, and methods [1] [2] [6] [7] [8]. While ME has traditionally focused on defense missions, the practice can apply to other areas as well, such as disaster response or border security. Zimmerman and Dahmann reviewed the challenges facing mission engineering and presented the case for using DE to support ME, in addition to the engineering of individual systems, which was the initial focus of the US DoD DE Strategy [9] [10].

As shown in Figure 2, ME is a methodology that delivers engineering results which capability stakeholders can use to address decisions. These results include metrics and analysis of the mission architecture’s performance in achieving mission outcomes, generated using robust modeling and simulation (M&S) tools. The results may be used to inform requirements and acquisition decisions, assess alternatives for addressing mission gaps, and inform technology investments. They also identify enhanced capabilities, technologies, system interdependencies, and architectures that guide development, prototypes, experiments, and SoS engineering to ultimately achieve reference missions and close mission capability gaps.

Figure 2. Applications of ME Results (DoD ME Guide 2.0, 2023, p4)

Mission Engineering Methodology

The US DoD MEG 2.0 lays out a methodology for implementing mission engineering, which is shown in Figure 3 [1]. In this section, each step of this methodology is described with an accompanying notional example.

Figure 3. Mission Engineering Methodology from the DoD MEG 2.0

Mission Problem: ME starts with the ME team clearly defining and understanding the problem to be addressed, which drives the focus and scope of the ME and analysis activity. ME problems may be based on a proactive interest in the mission outcomes of a priority or pressing scenario. They may also be motivated by a perceived shortfall in mission outcomes, or by interest in the potential for a new or emerging technology or operational concept to affect the success of a priority mission. The problem to be addressed determines the geographic area, the scenario and vignette, and other aspects of the mission context for the ME analysis.
Throughout the remainder of this paper, a notional mission problem will be used to provide an illustrative example of the ME approach described herein. The notional mission problem includes a scenario where the United States is defending against an invasion from Canada in Washington state.

Mission Characterization: Mission characterization begins with a description of the operational mission context for the problem of interest, particularly the scenario and vignette to serve as the context for the ME analysis. This includes factors such as the epoch (i.e., the future time period in which the scenario and vignette take place), the physical environment, the threat, the blue force laydown and concept of operations (CONOPS), and the blue force mission objectives, which define the measure of mission success. The mission characterization is typically not developed by the ME team, but is drawn from authoritative source materials, such as joint doctrine, Combatant Commander (COCOM) Operational Plans (OPLANs), policy, intelligence community assessments, etc. This source material is analyzed to extract the information needed to represent the baseline MTs and METs, which are the set of activities and the SoS used to execute the mission in the selected scenario and vignette. This includes the primary targets, platforms and weapons paired with each target type, surveillance and reconnaissance assets, surveillance and reconnaissance processing nodes, command and control (C2) nodes and roles, communications networks, and connectivity between systems. The ME team then uses these data elements and relationships to support the modeling of the baseline mission architecture, which is the starting point for the ME analysis.

The notional example mission can be characterized as the Canadian invasion scenario set in a 2040 epoch and comprised of three vignettes, which include:

  1. An air strike on Seattle to initiate the conflict, which takes place over the course of two days.
  2. The transit of Canadian ground troops into the state of Washington, which takes place over the course of six days.
  3. A ground engagement at the city of Seattle to seize control of its City Hall, which takes place over the course of two weeks.

For this example, the ME analysis will focus on the second vignette and evaluate alternative mission architectures to achieve the mission of disrupting and eventually stopping the transit of Canadian ground troops into the state of Washington.

The United States mission architecture (i.e., systems, weapons, command and control, and communications) for this vignette could be characterized by gathering OPLAN documentation from the United States Northern Command (NORTHCOM) on how they would respond to a ground invasion of Washington state from Canada, policy documentation from the federal government on rules of engagement for a conflict with Canada, and intelligence products from the intelligence community that could describe Canadian plans and capabilities. Using these documents, the ME team would need to identify a measure of mission success for the vignette where the Canadian troops transit into the state of Washington. For this example, a notional measure for mission success could be percentage of Canadian troops that reach Seattle six days after the troops begin mobilizing, with the objective being 25% or less.

Mission Architecture: The ME team then draws on data extracted from mission characterization to develop a baseline architecture model, which includes developing a digital model of the baseline MT and METs with a model-based systems engineering (MBSE) tool like Magic System of Systems Architect (MSoSA) and adding data to complete end-to-end modeling of the mission using authoritative data from the Services, COCOMs, and ME stakeholders. Examples are shown in the following sections of this paper. The team integrates multiple METs into a digital representation of the baseline mission architecture, which is used as the blueprint for operational analysis of the impact of the baseline architecture on mission outcomes. Once the baseline operational analysis is complete (see section below) and gaps have been identified, the ME team updates the mission architecture models to incorporate alternative approaches and concepts, again drawing from authoritative sources. These sources may come from industry, government, or subject matter experts.

Using the notional Canadian invasion example, the baseline architecture may include a set of US ground and air forces to disrupt the transit of Canadian ground troops into Washington state. These assets should be programs of record that will be operational in the selected 2040 epoch. Documentation of the baseline assets used can come from Tactics, Techniques, and Procedures (TTPs) documents provided by the services, as well as any other data repositories of system performance data that are maintained by the services or other government organizations. For the notional Canadian invasion scenario, one example of an alternative architecture could include the insertion of an autonomous unmanned aerial vehicle (UAV) with an advanced sensor payload. This new technology concept could be employed with the existing systems in the baseline architecture to provide additional intelligence, surveillance, and reconnaissance (ISR) capabilities to detect Canadian ground troops as they transit into Washington state.

Mission Engineering Analysis: The ME team implements the baseline architecture in an appropriate analysis environment, typically an operational analysis simulation, to assess the mission outcomes of the baseline in the selected scenario and vignette. Using the baseline mission metrics and an analysis of gaps and factors which impact mission outcomes, the analysis is then run with the changes that represent the selected alternative concepts or capabilities to assess the mission impact of the alternative architectures.

Using the notional Canadian invasion example, the baseline architecture of US forces and adversary threat of Canadian forces would be modeled in an operational simulation tool. Some examples of these types of tools include the Advanced Framework for Simulation, Integration and Modeling (AFSIM) software and the Ansys Systems Tool Kit (STK). The operational simulation model would be simulated in a set of Monte Carlo runs, and mission metrics would be computed across these simulation runs to determine the mean and standard deviation of each metric.
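The sketch below illustrates, in Python, how mission metrics might be aggregated across a set of Monte Carlo runs. It is a minimal illustration only: the simulate_run stand-in, troop counts, and metric names are hypothetical placeholders for data that would in practice come from the output of the operational simulation tool.

# Minimal sketch: aggregating notional mission metrics across Monte Carlo runs.
# All names and values are hypothetical; real runs would come from a tool such
# as AFSIM or STK, not from this random-number stand-in.
import random
import statistics

N_RUNS = 100
TOTAL_CANADIAN_TROOPS = 10_000
MISSION_OBJECTIVE = 0.25  # at most 25% of troops may reach Seattle

def simulate_run(seed: int) -> dict:
    """Stand-in for one operational simulation replication."""
    rng = random.Random(seed)
    detected = rng.randint(6_000, 9_500)            # troops detected/identified by US ISR
    weapons_fired = rng.randint(80, 160)            # US weapons expended
    weapon_hits = int(weapons_fired * rng.uniform(0.5, 0.9))
    troops_stopped = min(TOTAL_CANADIAN_TROOPS, weapon_hits * rng.randint(30, 60))
    reached = TOTAL_CANADIAN_TROOPS - troops_stopped
    return {
        "pct_reached_seattle": reached / TOTAL_CANADIAN_TROOPS,
        "troops_detected": detected,
        "weapons_fired": weapons_fired,
        "weapon_hits": weapon_hits,
    }

runs = [simulate_run(seed) for seed in range(N_RUNS)]

# Mean and standard deviation of each metric across the Monte Carlo runs.
for metric in runs[0]:
    values = [run[metric] for run in runs]
    print(f"{metric}: mean={statistics.mean(values):.3f}, "
          f"stdev={statistics.stdev(values):.3f}")

# Overall measure of mission success: percentage of Canadian troops reaching Seattle.
mean_pct = statistics.mean(run["pct_reached_seattle"] for run in runs)
print("Mission objective met:", mean_pct <= MISSION_OBJECTIVE)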

For this notional example, the top-level mission metric – the overall measure of mission success – is computed across the simulation runs. For this example, that measure of success is the percentage of Canadian troops that reach Seattle six days after the Canadian troops are mobilized, with the objective of 25% or less. Additional metrics are then used to understand the gaps in capability if the mission objective is not met in the baseline. These would include the number of Canadian troops detected and identified by US forces (i.e., is there an ISR shortfall?), the number of US weapons fired, the number of US weapon hits, the number of Canadian assets destroyed by US assets, and the number of US assets destroyed by Canadian forces (i.e., is there a weapons or survivability shortfall?).

Once these metrics and measure of mission success have been computed for the baseline architecture, the alternative architecture, which includes the insertion of the autonomous UAV with the advanced sensor payload, can be modeled in the operational simulation and simulated with another set of Monte Carlo runs. In the case of the autonomous UAV with ISR payload, the hypothesis is that the alternative would improve the ISR picture, enabling increased weapons effects and a higher likelihood of achieving mission success. The same mission metrics and measure of mission success should be computed across these simulated runs to assess whether the hypothesized effects are observed and the alternative improves mission outcomes.

Results and Recommendations: Using the results of the ME analysis, the ME team compares the baseline mission outcomes with the outcomes of the alternative architectures that incorporate new concepts, analyzes the results, and makes data-driven recommendations.

For the notional Canadian invasion scenario, the mission metrics and measure of success computed for the baseline and alternative architectures would be compared to assess whether the addition of the autonomous UAV with an advanced sensor had an impact on the mission objective of stopping the Canadian troops from reaching Seattle. The supporting mission metrics, such as the number of Canadian troops detected and identified by US forces and the number of US weapon hits, can be used to identify where specifically the autonomous UAV had an impact on the mission.

Implementing ME Methodology

The ME Guide provides an overview of the ME methodology. This section will discuss how this methodology has been implemented using DE and operational analysis in support of the US DoD Rapid Defense Experimentation Reserve (RDER) initiative, as shown below in Figure 4. For context, the ME effort to support the US DoD’s RDER initiative involves evaluating the mission impacts of introducing new technology concepts into a classified DoD mission scenario of interest. Notional examples of new technology concepts include autonomous systems, Artificial Intelligence (AI) and Machine Learning (ML) algorithms, and advanced sensor or weapon technologies. Results from the ME effort can be used to inform development of the new technology concepts, support acquisition and funding decisions, and refine concepts of employment (CONEMPs) for the new technology concepts.

As introduced in the previous section, the notional Canadian invasion scenario will be used to provide an illustrative example of the ME methodology employed for the US DoD RDER initiative. An autonomous UAV with an advanced sensor serves as a notional example of a proposed technology concept for the RDER program. For this example, the goal of the ME analysis is to determine whether the addition of the autonomous UAV and advanced sensor has an impact on the overall mission objective of stopping the Canadian invasion of Washington state.

Figure 4. Use of Digital Engineering and Operational Analysis to Implement Mission Engineering

This ME approach incorporates two complementary lines of effort. The first is digital mission architecture modeling: the ME team develops digital representations of the baseline and alternative architectures in Systems Modeling Language (SysML) models built with the Cameo Enterprise Architect (CEA) MBSE tool. These models are based on authoritative source materials, such as joint doctrine, COCOM OPLANs, policy, and intelligence community assessments, and provide transparent, unambiguous models of the baseline and alternative architectures, including their integrated METs.

The baseline architecture represents a selected set of integrated METs employed in a given mission scenario and will be used as a standard basis for comparison. For the notional Canadian invasion scenario introduced in the section above, the standard Find, Fix, Track, Target, Engage, Assess (F2T2EA) MT, which is used across the DoD community, would be employed to stop the transit of Canadian troops into the state of Washington. For this example, the activities conducted as part of the F2T2EA MT would be allocated to the US forces that will be employed in 2040 to stop the transit of Canadian troops into the state of Washington. Allocating the activities in the F2T2EA MT to specific US systems in this scenario constitutes a MET. One example of a MET for this notional scenario would be:

  • Find: An existing airborne ISR platform, such as the Boeing RC-135 detects a potential target.
  • Fix: An ISR processing node on the ground receives the potential target information from the RC-135. The ISR analysts cue another airborne platform, such as the Lockheed U-2, to collect imagery on the target. With the target imagery, the ISR analysts confirm that the target is a tank brigade escorting Canadian troops.
  • Track: The RC-135 continues monitoring the tank brigade and provides updates on its location to a US Army Command Post on the ground.
  • Target: The targeting specialists sitting at the Command Post conduct weapon-target pairing. They select the M142 High Mobility Artillery Rocket System (HIMARS) loaded with the Army Tactical Missile System (ATacMS) to engage the tank brigade. The commander sitting at the Command Post tasks the HIMARS operators with firing the ATacMS at the tank brigade.
  • Engage: The HIMARS operators confirm receipt of the attack order, and they fire the ATacMS at the tank brigade.
  • Assess: The commander at the Command Post receives confirmation from the HIMARS operators and tasks the U-2 with collecting battle damage assessment (BDA) data on the tank brigade. Analysts sitting at the ISR processing node evaluate the collected data to determine whether the engagement was successful. The commander uses the analysts’ BDA report to determine whether follow-on actions are needed.

In the notional Canadian invasion scenario, many more sensing systems and engagement systems could be used to perform the different activities in the F2T2EA MT. Each unique combination of systems performing those activities constitutes a MET, and the integration of all those METs would comprise the baseline mission architecture.
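To make the distinction between the MT and a MET concrete, the following minimal Python sketch treats the MT as an ordered set of activities and a MET as an allocation of those activities to specific systems. The class names and system lists are illustrative assumptions for the notional scenario, not part of the MEG or of any MBSE tool.

# Illustrative sketch: an MT as an ordered list of activities, and a MET as an
# allocation of those activities to specific systems. Names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class MissionThread:
    name: str
    activities: tuple  # ordered, system-agnostic activities

@dataclass(frozen=True)
class MissionEngineeringThread:
    thread: MissionThread
    allocation: dict  # activity -> systems/organizations performing it

F2T2EA = MissionThread(
    name="F2T2EA",
    activities=("Find", "Fix", "Track", "Target", "Engage", "Assess"),
)

baseline_met = MissionEngineeringThread(
    thread=F2T2EA,
    allocation={
        "Find": ["RC-135"],
        "Fix": ["ISR processing node", "U-2"],
        "Track": ["RC-135", "Army Command Post"],
        "Target": ["Army Command Post"],
        "Engage": ["HIMARS with ATACMS"],
        "Assess": ["U-2", "ISR processing node", "Army Command Post"],
    },
)

# An alternative MET reuses the same thread but changes the allocation,
# here inserting a notional autonomous UAV into Find, Track, and Assess.
alternative_met = MissionEngineeringThread(
    thread=F2T2EA,
    allocation={
        **baseline_met.allocation,
        "Find": ["Autonomous UAV", "RC-135"],
        "Track": ["Autonomous UAV", "Army Command Post"],
        "Assess": ["Autonomous UAV", "ISR processing node", "Army Command Post"],
    },
)

for step in F2T2EA.activities:
    print(f"{step}: baseline={baseline_met.allocation[step]}, "
          f"alternative={alternative_met.allocation[step]}")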

The alternative architectures represent the baseline METs with the integration of one or more new technology concepts. The models are reviewed with subject matter experts from the Services and authors of the authoritative source materials, and they are iteratively revised until a consensus is reached on a validated representation of the mission architecture. For the notional Canadian invasion scenario introduced in the section above, an alternative architecture could include integrating a new autonomous UAV with an advanced imaging sensor into the baseline mission architecture and modifying the F2T2EA activities as needed to capture the autonomous UAV’s mission activities. In the context of the F2T2EA MT, the autonomous UAV could perform the activities supporting the find, track, and assess steps. The second line of effort is operational analysis: these mission architectures are implemented in operational simulations, which provide a dynamic representation of mission execution, based on data from demonstrations, experiments, operational deployments, and test events, to assess the mission outcomes of the baseline and alternative architectures.

The operational simulations output mission metrics that can be compared between the baseline and alternative architectures to assess the mission impact of the alternative. In the notional Canadian invasion scenario, the operational simulation outputs would be used to assess how adding the autonomous UAV with an advanced sensor impacted the US ability to stop the Canadian invasion into the state of Washington.

The digital ME modeling workflow used to support the RDER initiative is shown in Figure 5 below.

Figure 5. ME Modeling Workflow

Mission Architecture Modeling

Developing the mission architecture models begins with modeling the baseline MT that is relevant to the ME analysis. A MT is an end-to-end sequence of tasks, activities and events to execute a mission. The baseline MT can be identified from the data gathered during mission characterization, and the key activities in the MT can be derived from a compilation of doctrine, subject matter expertise, and other authoritative sources. For the RDER initiative, the baseline MT was digitally represented as a SysML activity view, as shown in Figure 6. The activity view in Figure 6 is intended to provide a high-level depiction of the sequence of activities in an example baseline MT. The example MT used in this case is the F2T2EA MT, which is commonly used and referenced across the DoD.

Figure 6. View of the F2T2EA Baseline MT Digital Representation

The top-level steps in the mission thread are shown in Figure 6, and a larger view of the activities in the ‘Fix’ step of the F2T2EA MT is shown in Figure 7 to illustrate the type of activities included in this thread.

Figure 7. Detailed View of the Fix Step in the Baseline F2T2EA MT Digital Representation

Using the same data gathered during mission characterization, the baseline MT activities can then be allocated to the systems and organizations that are used in the mission scenario, which results in the baseline METs. METs (i.e., ‘kill chains’) are “mission threads that include technical details of the capabilities and systems required and utilized to execute the tasks and activities for a mission” [12]. The mission activities and sequence from the baseline MT may need to be tailored for the baseline METs, depending on the systems and organizations used in the mission scenario. METs may be digitally represented with various types of SysML views, and the figures below provide several examples in the context of a F2T2EA MET for the notional Canadian invasion scenario. Figure 8 is an activity view of a baseline F2T2EA MET, where the F2T2EA mission activities are tailored and allocated to baseline systems and organizations from the notional Canadian invasion scenario in swim lanes. Figure 9 is an internal block diagram (IBD) view that shows logical system-to-system connectivity for the baseline set of systems used in the MET. Figure 10 is a sequence view that shows the sequence of data exchanges between systems throughout one execution of the baseline MET.

Figure 8. Activity View Showing Activity Allocations to Systems in Swim Lanes for the Fix Step of a Baseline F2T2EA MET in the Notional Canadian Invasion Scenario

Figure 9. IBD View Showing Logical System Connectivity for a Baseline F2T2EA MET in the Notional Canadian Invasion Scenario

Figure 10. Sequence View Showing Data Exchanges for a Baseline F2T2EA MET in the Notional Canadian Invasion Scenario

In most missions, multiple METs are implemented and executed concurrently, as shown in Figure 11. The integration of these METs is what constitutes the overall mission architecture (i.e., kill web).

Figure 11. Integrating METs/Kill Chains into a Mission Architecture/Kill Web

The baseline METs form the baseline mission architecture, which provides a basis for comparing and contextualizing alternative approaches and concepts into the operational environment. As illustrated in Figure 12, Figure 13, and Figure 14, the baseline MET models are updated to represent the alternative MET models, which insert new technology concepts into the baseline architecture, with the intent of improving mission outcomes. Inserting these new concepts may result in changes to the activities, sequence of activities, systems, and/or system connectivity used to execute the mission. Oftentimes, teams proposing new concepts lack awareness of the mission context, which is critical to ME, so the baseline ME architectures play a key role in providing this context for integrating the proposed concepts.

The following figures show changes in the context of the notional Canadian invasion scenario, where the alternative architecture includes the addition of a new autonomous UAV with an advanced imaging sensor.

Figure 12. Activity View Showing Activity Allocations to Systems in Swim Lanes for the Fix Step of an Alternative F2T2EA MET in the Notional Canadian Invasion Scenario

Figure 13. IBD View Showing Logical System Connectivity for an Alternative F2T2EA MET in the Notional Canadian Invasion Scenario

Figure 14. Sequence View Showing Data Exchanges for an Alternative F2T2EA MET in the Notional Canadian Invasion Scenario

The baseline and alternative digital mission architecture SysML views are implemented in the Cameo Enterprise Architect (CEA) MBSE tool using the MITRE ‘shared modeling framework’ for model development [12], as illustrated in Figure 15.

Figure 15. MITRE Shared Modeling Framework

This framework, which was developed to support reusability, contains three layers of modeling abstractions. The foundational ‘Seed Model’ includes digital representations of the common elements, which are blueprints for generic systems, such as an aerial or ground platform. The next layer is the ‘Base Model,’ which uses the common elements to specify detailed systems, such as an RC-135 or HIMARS. These detailed systems are used as building blocks to compose SoS that execute the missions in various ‘Mission Models,’ which are the third layer of the framework. The Base Model also includes a top-level MT used across US DoD ME studies, which captures generic activities required for execution of a US DoD joint targeting mission and can be reused and tailored for each Mission Model. The resulting Mission Models capture the baseline and alternative MTs and METs for the mission architectures in a new ME activity. The framework models are developed using doctrine, subject matter experts from the Services, and other authoritative data sources. The mission architectures defined in the Mission Models feed the operational simulations and tools for specific analyses. In the RDER ME approach, the MET models provide a blueprint for the implementation of the mission architectures, including system performance and behaviors in the operational simulations, as discussed in the next section of this paper.
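The layering concept behind this reuse can be illustrated with a short sketch. The Python fragment below is only an analogy for the Seed/Base/Mission Model relationship; it does not represent the MITRE framework itself or any SysML/CEA construct, and the performance values shown are hypothetical.

# Minimal sketch of the three-layer reuse idea: generic "seed" element types,
# a "base" library of specific systems built from them, and mission models
# that compose base systems for a scenario. Illustration only.
from dataclasses import dataclass, field

@dataclass
class SeedElement:            # Seed Model: blueprint for a generic class of system
    kind: str                 # e.g., "aerial platform", "ground platform"

@dataclass
class BaseSystem:             # Base Model: a specific, reusable system definition
    name: str                 # e.g., "RC-135", "HIMARS"
    seed: SeedElement
    performance: dict = field(default_factory=dict)

@dataclass
class MissionModel:           # Mission Model: a scenario-specific composition
    scenario: str
    systems: list

aerial = SeedElement("aerial platform")
ground = SeedElement("ground platform")

rc135 = BaseSystem("RC-135", aerial, {"sensor_range_km": 240})   # hypothetical value
himars = BaseSystem("HIMARS", ground, {"weapon_range_km": 300})  # hypothetical value

invasion_vignette = MissionModel(
    scenario="Notional Canadian invasion, vignette 2 (2040 epoch)",
    systems=[rc135, himars],
)

for system in invasion_vignette.systems:
    print(system.name, system.seed.kind, system.performance)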

Mission models represented in the shared modeling framework are instantiated in selected analysis tools to address the specific goals of each ME implementation and constitute the architecture products for the ME activity. These mission models are unambiguous, accessible digital representations of the architectures, separate from their specific instantiations in selected analysis tools (e.g., embedded in simulation scripts). The value of digital representations of mission models (MTs and METs) is the rapid generation of consistent, reusable mission architectures across ME initiatives, independent of specific analysis tools and approaches.

Operational Mission Analysis

Operational analysis is key to ME – it provides a quantitative assessment of mission outcomes. ME uses the appropriate modeling and analysis tool for a given problem. As discussed above, the digital baseline and alternative MTs and METs captured in the mission model provide the blueprints for the operational analysis.

To conduct the ME analysis, the ME team uses a modeling and simulation tool to represent the baseline in the operational context for analysis and generates the baseline mission metrics from the operational simulation. This includes a representation of the operational mission context relevant to the problem at hand, including the operational laydown in a selected geographical region, the threat representation, system performance and behavior, communications, etc. The baseline analysis typically includes an analysis of gaps and factors that impact mission outcomes. This baseline often serves to quantify the ME problem that motivated the ME analysis.

The quantified baseline mission metrics and gaps provide the basis for assessing the new concepts or capabilities introduced in the alternative architectures. As discussed above, the digital mission architecture provides the context for defining the changes in the architecture for each alternative. The ME team represents each new concept and alternative architecture as changes to the baseline operational analysis model. The team then uses the updated models to compute metrics on the performance of each concept as represented in the scenario, as well as the impact on overall mission outcomes. The impact that an alternative concept has on overall mission outcomes in the operational simulation is what drives recommendations on concept development, acquisition, testing, and/or employment. In some cases, new technology concepts may be prototyped and implemented in exercises or experiments and the data collected may be used to validate the results of the operational simulation.

To assess each new concept, the team develops a run matrix, which comprises a set of cases with varying conditions. Varying the conditions of the scenario allows the team to evaluate the performance of different aspects of the concept, isolate the cause and effect of each change introduced by the concept, and conduct sensitivity analysis around the operating conditions of the concept. An example run matrix for an ME analysis is shown in Figure 16.

Figure 16. Example Run Matrix for Mission Engineering Analysis

The run matrix is driven by the following set of questions:

  • How is the mission executed in the baseline case (mission and supporting metrics)?
  • How is the new concept to be implemented in the scenario (across METs)?
  • What is the objective of the concept (e.g., increased ISR coverage, increased weapons, platform survivability)?  How is this expected to impact the mission and supporting metrics?
  • Under what conditions do we expect the concept to impact mission outcomes (e.g., degraded communications environment)?
  • What are the concept dependencies on baseline systems?
  • What is the performance of each component of the concept?
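A run matrix of this kind can be thought of as the cross product of the varied conditions. The short sketch below illustrates that idea; the condition names and levels are hypothetical placeholders for the factors an ME team might vary.

# Sketch: enumerating a run matrix as the cross product of varied conditions.
# Condition names and levels are hypothetical placeholders.
from itertools import product

conditions = {
    "architecture": ["baseline", "baseline + autonomous UAV"],
    "communications": ["nominal", "degraded"],
    "adversary_posture": ["dispersed", "massed"],
}

run_matrix = [
    dict(zip(conditions, levels))
    for levels in product(*conditions.values())
]

for case_id, case in enumerate(run_matrix, start=1):
    print(f"Case {case_id}: {case}")
# Each case would then be executed as its own set of Monte Carlo replications.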

Data from the operational simulation is first used to compute the overall measure of success and supporting mission metrics to assess the baseline mission outcomes and challenges. In the notional Canadian invasion scenario, the measure of success is the percentage of Canadian troops that reach Seattle, with the US objective being to stop at least 75% of those forces. Supporting mission metrics could include number of Canadian troops detected and identified by US forces, the number of US weapons fired, the number of US weapon hits, the number of Canadian assets destroyed by US assets, and the number of US assets destroyed by Canadian forces. Figure 17, Figure 18, and Figure 19 show visual representations of these supporting mission metrics, which are useful to quickly identify potential strengths and shortfalls in the mission scenario. For example, Figure 17 shows that the US (i.e., “Blue”) assets are not expending all the weapons available in the scenario. There could be several causes of this. One possibility is that the US assets are not receiving enough targeting data to fire all their available weapons. Another possibility is that the US assets have more weapons available than they do targets, so they have no need to use all the available weapons. However, Figure 19 shows that there are more adversary targets than there are successful engagements, so the second possible cause of the limited weapons expenditure can be disproven. Additional mission metrics would need to be examined to understand what the true cause of the limited weapons expenditure is.
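The elimination reasoning described above can be expressed as a simple check. In the sketch below, the counts are hypothetical stand-ins for the values that would be read from outputs like those summarized in Figure 17 and Figure 19.

# Sketch of the elimination reasoning in the text: if adversary targets exceed
# successful engagements, a weapons surplus alone cannot explain low expenditure.
total_adversary_targets = 120     # hypothetical counts in the style of Figure 19
successful_engagements = 70
weapons_available = 160           # hypothetical counts in the style of Figure 17
weapons_fired = 95

weapons_left_over = weapons_fired < weapons_available
targets_remaining = total_adversary_targets > successful_engagements
if weapons_left_over and targets_remaining:
    print("Unused weapons with targets remaining: examine ISR/targeting-data metrics.")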

Figure 17. Mission Metric Example: Weapon Outcomes for US Weapons

Figure 18. Mission Metric Example: Successful US Engagements of Adversary Targets

Figure 19. Summarizing Mission Metrics: Total Adversary Targets, Detections of Adversary Targets, Weapons Quality Tracks of Adversary Targets, Assigned Engagements, Engagements Attempted, and Successful Engagements

The alternative mission architectures are then run in the operational simulation under selected conditions that reflect the operational context for the problem addressed by the ME analysis, and the resulting mission metrics and measure of success are used to assess the impact of new concepts on the mission outcomes. Visual plots of the results, such as the examples shown in Figure 17, Figure 18, and Figure 19, can help to understand the metrics driving the outcomes. Comparing these for the baseline and alternative results can show the impact of the alternatives and begin to identify the root cause of an alternative architecture’s mission impacts. The resulting impacts on mission outcomes provide the basis for recommendations on the proposed concepts.
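As a minimal illustration of this comparison step, the sketch below contrasts hypothetical aggregated metrics for the baseline and alternative architectures; the values are invented for illustration and do not reflect any actual analysis results.

# Sketch: comparing aggregated mission metrics between the baseline and an
# alternative architecture. The numbers are hypothetical placeholders for
# means computed across each architecture's Monte Carlo runs.
baseline = {
    "pct_troops_reaching_seattle": 0.42,
    "troops_detected": 6800,
    "weapon_hits": 85,
}
alternative = {  # baseline + autonomous UAV with advanced sensor
    "pct_troops_reaching_seattle": 0.41,
    "troops_detected": 7050,
    "weapon_hits": 86,
}

MISSION_OBJECTIVE = 0.25  # no more than 25% of troops reach Seattle

for metric, base_value in baseline.items():
    alt_value = alternative[metric]
    delta = alt_value - base_value
    print(f"{metric}: baseline={base_value}, alternative={alt_value}, delta={delta:+}")

for label, result in (("baseline", baseline), ("alternative", alternative)):
    met = result["pct_troops_reaching_seattle"] <= MISSION_OBJECTIVE
    print(f"{label} meets mission objective: {met}")
# Small deltas like these would suggest the added sensor is largely redundant
# with existing ISR coverage, the kind of finding discussed below.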

In the notional Canadian invasion scenario, a possible mission outcome for the alternative mission architecture with the new autonomous UAV could be that the additional imaging capability from the autonomous UAV’s sensor is redundant to the capability that is already provided by the U-2 in the baseline architecture and therefore has no impact on the overall mission success. This kind of finding demonstrates the strength of ME analysis in that it evaluates alternatives in the context of mission impacts in operational scenarios. Instead of focusing on validating that a system is built right, ME focuses on building the right systems to achieve mission success.

Summary

As presented in this paper, the US DoD ME framework provides a disciplined approach to assessing the mission impacts of new technology concepts or capabilities. Digital mission architectures, which comprise MTs and METs, are developed for a selected scenario and vignette of interest and provide a blueprint for an operational simulation model. MTs/METs are used to represent the baseline mission architecture, which provides a mission context and basis for comparison. Alternative mission architectures, which insert proposed concepts or capabilities into the baseline architecture with the goal of improving mission outcomes, are then represented in the same way. The use of DE to represent MTs/METs allows for clarity, transparency, and reusability of mission system models, activities, and architectures. ME operational analysis generates baseline metrics in an operational environment and uses them to assess alternative mission architectures and evaluate the mission impacts of the new concepts they introduce. Based on the mission problem driving the ME activity, ME operational analysis outputs quantitative results in terms of mission metrics to address decisions on technology investments, implementation of new operational concepts, introduction of new technologies, new requirements for systems, or other decisions regarding improvements in mission outcomes.

Mission engineering depends on the use of valid assumptions and authoritative data, and this is addressed throughout implementation. The mission characterization is based on authoritative scenarios and data, including credible data on both the threat and the execution of the mission in the context of that threat, drawing upon intelligence estimates, operational plans, and system performance data from test results and verified contractor specifications. Given the complexity of these missions and of the data collected and used in both the digital models and the operational scenarios, iterative reviews with knowledgeable individuals and stakeholders are conducted to ensure the data is being used appropriately for the purpose of the engineering and analysis.

As mission engineering continues to mature, opportunities for collaboration with the test and evaluation community are being identified. T&E data is currently being used to support mission engineering modeling, mission engineering analysis shares metrics with T&E on mission impacts of concepts, and MTs and METs are useful in structuring test events. This collaboration is expected to grow as the US DoD continues to move toward prioritizing a mission perspective across DoD research, development and acquisition.

References

[1] US Department of Defense. Mission Engineering Guide 2.0. October 2023.

[2] R. Giachetti, “Mission Engineering, Systems Engineering Body of Knowledge,” https://www.sebokwiki.org/wiki/Mission_Engineering. (Accessed 5 9 2024).

[3] US Department of Defense. Joint Publication 3-0 Joint Operations. 11 August 2011.

[4] US Department of Defense. Defense Acquisition Guidebook. 16 September 2013.

[5] US Government. National Defense Authorization Act (NDAA) for Fiscal Year 2017, Section 855.

[6] Hutchison, N.A.C., S. Luna, W.D. Miller, H.Y. See Tao, D. Verma, G. Vesonder, and J. Wade. 2018. “Mission engineering competencies.” Proceedings of the American Society for Engineering Education (ASEE) Annual Conference and Exposition, vol. 2018.

[7] Van Bossuyt, D.L., P. Beery, B.M. O’Halloran, A. Hernandez, E. Paulo. 2019. “The Naval Postgraduate School’s Department of Systems Engineering approach to mission engineering education through capstone projects.” IEEE Systems 7(3): 38.

[8] Beam, D.F. 2015. Systems engineering and integration as a foundation for mission engineering. Monterey, CA, USA: Naval Postgraduate School.

[9] Zimmerman, P and J Dahmann. Digital Engineering Support to Mission Engineering. 21st Annual National Defense Industrial Association Systems and Mission Engineering Conference. October 2018.

[10] Beery, P., E. Paulo. 2019. “Application of Model-Based Systems Engineering Concepts to Support Mission Engineering.” IEEE Systems 7(3): 44; Dahmann, J. Keynote Address: “Mission engineering: System of systems engineering in context.” Proceedings of the IEEE System of Systems Engineering Conference, 19–22 May 2019, Anchorage, AK, USA.

[11] ISO/IEC JTC 1/SC 7 Technical Committee. ISO/IEC/IEEE 15288:2023 Systems and software engineering system life cycle processes. https://www.iso.org/standard/81702.html. (Accessed 5 9 2024).

[12] Pennock, M.J., Driscoll, G.I., Dahmann, J.S., Adams, M. 2022. “Enabling Mission Engineering through a Reusable Digital Engineering Environment.” 2022 IEEE International Systems Conference (SysCon), Montreal, QC, Canada, 2022, pp. 1-8.

Author Biographies

Dr. Judith Dahmann is a MITRE Fellow at the MITRE Corporation and the MITRE project leader for Mission Integration Technical Support activities in the US DoD Office of the Under Secretary of Defense for Research and Engineering. She leads the team supporting mission engineering activities for selected priority Defense missions and the application of digital engineering to mission engineering. She was the technical lead for development of the DoD guide for systems engineering of systems of systems (SoS) and was the project lead for International Standards Organization (ISO) 21839, the first ISO international standard on ‘SoS Considerations for Systems Throughout their Life Cycle’. Prior to this, Dr. Dahmann was the Chief Scientist for the Defense Modeling and Simulation Office for the US Director of Defense Research and Engineering (1995-2000), where she led the development of the High Level Architecture, a general-purpose distributed software architecture for simulations, now an IEEE Standard (IEEE 1516). Dr. Dahmann is a Fellow of the International Council on Systems Engineering (INCOSE), co-chair of the INCOSE Systems of Systems Working Group, and co-chair of the National Defense Industrial Association SE Division SoS SE Committee.

Gabriela Parasidis is a Lead Systems Engineer in the MITRE Space Warfighting Division. She applies Digital Engineering to Mission Engineering and Systems of Systems (SoS) Engineering to support Department of Defense acquisition decisions. She has led research in hypersonics, including analyses related to flight dynamics, aerodynamics, aerothermodynamics, and structural loading. She holds a B.S. in Mechanical Engineering from Cornell University and an M.S. in Systems Engineering from Johns Hopkins University.
