JUNE 2025 | Volume 46, Issue 2
IN THIS JOURNAL:
- Issue at a Glance
- Chairman’s Message
Workforce of the Future
- Encouraging Diversity in AI Test and Evaluation
Technical Articles
- Model Based Test and Evaluation Master Plan Technical Introduction
- Integrating RAG, HCD, and PD in MBSE for Mission Problem Framing
- Then What? The Need for Iterative Assessments to Achieve Successful Operational Capabilities
- Surpass the Adversary: Enhanced Mission Training through Digital Engineering
- Adaptive Algorithms for LIDAR Semantic Segmentation on Edge Devices
- 2025 AI in T&E Forum
- UC UK ITEA Event Summary
- AI and ML Methods in Verification and Validation
News
- Association News
- Chapter News
- Corporate Member News
Abstract
The global security environment is rapidly degrading, and recent conflicts have reaffirmed the need for the rapid acquisition of cutting-edge weapons systems. The fielding of these systems requires training and simulation technologies that are aligned with the tactical employment of the equipment. Despite this need, current military training remains disconnected from modern strategies, such as the multi-domain battle doctrine. Training delivered to support weapons systems tends to focus on user operation rather than integrated employment. The result is a preparedness gap in which personnel are unable to use their leading-edge equipment effectively in contemporary conflicts. This paper examines the use of conceptual modelling to parameterise training systems and mission requirements as objects represented by attributes of performance and function. By applying digital engineering and mission engineering techniques, we analyse the attributes in the mission thread design and build a set of relationships between the training system characteristics and the mission attributes. These relationships are quantified using probabilities to articulate the likelihood that the training system can fulfil the mission objectives. The training model is then optimised in a way that positively influences the performance and function of the mission thread, and therefore increases mission success. The training model is then transformed from a conceptual framework into an applied trainer or procedure. This training system becomes the basis for validation through supportability test and evaluation, thereby substantiating the original training needs. A case study was used to assess the defence of commercial shipping against an attacking uncrewed surface vessel (USV) swarm. A training model was designed and injected into the USV mission thread to find areas where the likelihood of effectiveness could be increased. The results showed that training focused on improved decision speed, collaborative combat, accurate and rapid interpretation of information, and the application of networked sensors all contributed to increased mission success. The techniques developed in this research will allow training to be more agile and responsive to new mission requirements. The research in this paper was presented at the 2024 I/ITSEC Conference in Florida, USA.
Introduction
The first two decades of the new century have seen an unprecedented shift in both the nature of warfare and the technologies brought to bear. This change in the global strategic environment has driven an arms race of sorts, one that seeks to leverage advantages in cutting-edge technology and capabilities (Sokolski, H.D., 2012). The recent conflicts in Ukraine and Gaza have provided an unofficial testing ground for these new capabilities, particularly in the area of autonomous systems. Although Western democracies are yet to fully appreciate and absorb the lessons learnt from these conflicts and their application of technology, it is clear that the nature of warfare, and therefore the doctrine and perspective of future war-fighters, need to change (Thompson, K.D., 2024).
Shifting Strategies
Within the Australian Defence Force (ADF), for example, the Government and Defence leadership are actively moving towards a strategy that focuses on delivering capabilities in a rapid timeframe (Department of Defence, 2023). The delivery of such capabilities, regardless of the domain, often places a greater emphasis on the platforms than on the elements which support them (Department of Defence, 2024). Of these supporting elements, training remains pivotal to ensuring the operator is best placed to utilise the capability to the greatest effect. This is particularly relevant for a joint or multi-faceted force (Australian Army, 2020). Early capability requirements may specify system designs as they pertain to function and performance; however, it is crucial that training system requirements are also developed in parallel, to ensure that training is driven by the strategic need rather than by a platform or system technical function (Australian Defence Force, 2021). This is supported by the firm awareness within the Defence capability acquisition profession that training is a fundamental input to capability (FIC) (or doctrine, organization, training, materiel, leadership and education, personnel, and facilities (DOTMLPF) in the US DoD). It therefore plays a complementary role in the success of a capability’s mission and strategic outcomes.
Training Agility
The requirement to deliver capabilities rapidly has been driven by what the Australian Defence establishment refers to as “a reduced warning time” in which to react to a potential attack by an adversary, either directly on Australian soil or against Australian interests in the region (Dibb, P. and Brabin-Smith, R., 2021). Capabilities and their associated systems therefore need to be delivered in a timely way that satisfies the minimum viable product required to meet the strategic need. This concept is known as the Minimum Viable Capability (MVC) (Defence Acquisition University, 2024), and it refers to the “…rapid development and validation of only essential features for systems that are too large, too complex, or too critical…” (Binder, R.V., 2018) to be developed in full. In applying this essentially agile (Atlassian, 2024) thinking to our systems engineering processes, we also need to consider how we can concurrently ensure the delivery of suitable and relevant training systems to the operators utilising these MVCs.
Digital Mission Engineering
The capability development phase model (Department of Defence, 2022) presents a standard framework by which capabilities, be they weapons systems or support systems, can be delivered in a manner that meets the needs of the end-user and delivers a desired effect safely. The model presented in Figure 1 provides a process which guides the organisation towards capability delivery, but does not necessarily specify how this is implemented. For complex, multi-disciplinary platforms, such as weapons or vehicular technologies, a systems engineering (SE) approach can be used to drive the implementation of the phases in Figure 1.
Figure 1. Capability Development Phases
Systems Engineering
Systems engineering is the “…transdisciplinary and integrative approach to enable the successful realisation, use, and retirement of engineered systems, using systems principles and concepts.” (INCOSE, 2024) It can be broken down into a range of phases that step through a ‘V-shape’ model (Figure 2), with the ultimate goal being the delivery of a system (complex or simple) that meets a need or purpose. Current methods of applying these SE principles rely on a very document-centric approach. These documents remain an electronic replica of a legacy mindset: in terms of efficiency and technological advantage, digital documentation provides no significant benefit over the SE methods of the 1960s (Harvey et al., 2012).
Figure 2. Systems Engineering V-Diagram (Wikipedia, 2025)
Digital Engineering
Digital engineering (DE), on the other hand, is a framework of methods and processes that allows key systems to be represented as data models, each of which encapsulates the characteristics of the system (Department of Infrastructure, 2016). DE within the context of SE provides the opportunity for the information generated through the aforementioned document-centric approach to be better represented through a common source of truth. This allows greater traceability across all stages of the SE process and lets that data be leveraged by other digital analytical tools, such as Computer Aided Design (CAD) or simulation. The nature of the DE representation depends on the perspective required by the designer (Figure 3), in this case the systems engineer, and the phase of SE they are progressing.
Figure 3 – Model Perspectives (Shevchenko, 2020)
Application to Mission Engineering
If systems engineering provides the fundamental discipline by which capabilities can be conceived from need to delivery, then digital engineering enhances the discipline by providing a consistent, efficient, and traceable means of application. Mission Engineering (ME), in turn, can be seen as a valuable extension to the SE discipline, one that seeks to generate direct benefits for the operational context. Inasmuch as it is an extension of SE, mission engineering can just as readily adopt the benefits that DE and its associated methods provide.
Mission engineering is a “…process that helps (Defence) better understand and assess impacts to mission outcomes based on changes to systems, threats, operational concepts, environments, and mission architectures.” (Department of Defense, 2023). In other words, ME can be used to better understand how systems and capabilities can best be adapted to meet contemporary or rapidly changing operational scenarios. This is becoming more important as we move into a new generation of warfare. The adaptation of systems using the principles of ME applies as much to training systems as to any other system.
Figure 4 – Mission Engineering Process (Department of Defense, 2023)
The ME process (Figure 4) starts with two foundational steps: the first is scoping the mission purpose and problem area, and the second is defining the mission context (i.e. the scenario, epoch, operational, and environmental conditions, etc.). The end-product of these initial steps amounts to telling a story about the mission and the environment in which mission success is to be achieved.
The next stage is mission architecture definition. Mission architecture starts with the mission objective defined in the foundational stages, and breaks this goal down into the actions/tasks that need to occur to fulfil the objective. This sequence is known as a mission thread (MT), essentially a ‘task sequence in a chain of events’, with no description of how or by whom each activity is to be accomplished. Mission threads are further broken down into mission engineering threads (METs), which assign the responsibilities necessary to accomplish each mission task, i.e. the actors, systems, technologies, organisations, and personnel. A breakdown of how the MT and MET are hierarchically laid out is provided in Figure 5. The MET also forms the basis by which validation T&E can be conducted in support of the mission measures, generally through an Operational Test & Evaluation (OT&E) process. This provides the necessary evidence to qualify the design of the system in the context of the mission it purports to achieve.
Figure 5 – Mission Engineering Hierarchy Relationship (Department of Defense, 2023)
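To make the hierarchy in Figure 5 concrete, the following is a minimal sketch (in Python, with hypothetical task and actor names that are not drawn from the paper’s models) of how a mission thread and its mission engineering thread might be captured as simple data structures: the MT is an ordered task sequence, and the MET allocates responsible elements to each task.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class MissionThread:
    """An ordered task sequence with no allocation of responsibility (the MT)."""
    objective: str
    tasks: List[str] = field(default_factory=list)


@dataclass
class MissionEngineeringThread:
    """A mission thread plus the actors/systems responsible for each task (the MET)."""
    thread: MissionThread
    allocations: Dict[str, List[str]] = field(default_factory=dict)  # task -> responsible elements


# Hypothetical example of a simplified maritime objective
mt = MissionThread(
    objective="Detect and report hostile surface contacts in the patrol area",
    tasks=["Search area", "Detect contact", "Classify contact", "Report contact"],
)
met = MissionEngineeringThread(
    thread=mt,
    allocations={
        "Search area": ["UAS", "Surface combatant"],
        "Detect contact": ["UAS sensor operator"],
        "Classify contact": ["Combat system operator"],
        "Report contact": ["Combat system operator"],
    },
)
```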
Training System Development
Digital mission engineering can be leveraged to enable a training systems design process that is mission-focused. The process (Figure 6) commences with the appropriate Mission Engineering Thread (MET) and its activities. The data generated by this sequence informs the training need, allowing a Training Needs Analysis (TNA) to be developed. These requirements, through a process of experiential analysis, are transformed into system Training Attributes specific to the training domain. The attributes are then composed into a conceptualised system, defined by the values of those attributes. These attribute values form the metric by which a comparison analysis determines how well training system Product Templates match the needs. A Product Alignment Analysis is then conducted to compare the requirements and available products through their attribute values, and to generate a solution for the original training needs. The solution may take the form of a probability analysis stating which product template is the best match for the mission training need, or it may present data that a system designer can use to justify the rationale for a pre-defined training system architecture. The following content details how this overarching process is executed.
Figure 6 – Training Systems Development Steps
Building TNAs through METs
The development of any training system starts with a set of requirements (Sorensen, H. and Benjamin, W., 1990). If the system is viewed from a purely functional and performance perspective, there may be greater focus on the technical design of the simulator or trainer than on purposeful thought about the ‘why’ of the system. The concept of the Training Needs Analysis (TNA) (Williams, 2023) should guide our thinking on how to seed the development of a training system that is directly related to training objectives.
The TNA needs to be contextualised to the mission objectives, and should therefore draw its data directly from the MET produced by a prior ME analysis. The ME analysis is paramount to developing our understanding of the mission objectives, and thus to detailing the mission parameters and flow. The MET is an end-product of this analysis, and provides valuable information that feeds our training needs, ensuring their relevance to the broader operational context. The MET provides the TNA with an appreciation of the skills, systems, experience, and technologies required to train operatives for the desired mission scenario and environment (Goldenberg, M., 2022).
The process commences by representing the MET as a sequence of events allocated to systems, organisations, or resources. This is represented in the Systems Modelling Language (SysML) as a sequence diagram, with allocated blocks representing the elements responsible for discrete activities (see Figure 7). The TNA then takes the allocated activities of the MET and uses this information to form benchmark needs and requirements. For the example MET, the corresponding TNA requirements pull key tasks from the MET that are specifically assigned to resources for the training system’s end-user. The end product is a model that articulates the training needs, how they tie back to the mission sequence, and which entities are responsible for executing the discrete sequence stages. The TNA digital model takes the form of a SysML requirements diagram (Figure 8). By digitally modelling both of these and establishing a known relationship between them, the core of the training systems model can be seeded. It is also pertinent to note that these TNA requirements form the initial basis for validation during the supportability T&E phase, in which verification and validation of training systems are conducted against the TNA elements.
Figure 7 – Example Mission Engineering Thread (MET) in SysML
Figure 8 – Example TNA ‘requirements’ being traced back to MET activities
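The traceability shown in Figures 7 and 8 is built in SysML; the short sketch below (Python, with hypothetical requirement and activity names) illustrates the same relationship outside a modelling tool, where each TNA requirement records the MET activity it traces back to, so that a change to a mission activity can be followed downstream to the training needs it drives.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class TNARequirement:
    """A training need traced back to the MET activity that motivates it."""
    identifier: str
    text: str
    traces_to: str  # name of the MET activity


# Hypothetical TNA requirements derived from MET activities
tna_requirements = [
    TNARequirement("TNA-01", "Train the operator to detect a surface contact", traces_to="Detect contact"),
    TNARequirement("TNA-02", "Train the operator to classify a surface contact", traces_to="Classify contact"),
    TNARequirement("TNA-03", "Train the operator to compile and pass a contact report", traces_to="Report contact"),
]


def requirements_for_activity(reqs: List[TNARequirement], activity: str) -> List[TNARequirement]:
    """Return the training requirements affected when a given MET activity changes."""
    return [r for r in reqs if r.traces_to == activity]


impacted = requirements_for_activity(tna_requirements, "Classify contact")  # -> [TNA-02]
```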
With the TNA requirements defined, the system design can be progressed. The next step is the generation of key training system attributes, extracted by assessing the properties of the TNA requirements model.
Training Attribute Generation
In the design of a training system, be it a simulator, procedural trainer, or other device, consideration must be given to the particular attributes that define the characteristics of the system (González N.V., 2013). These attributes are developed by the designer or system analyst, and generally relate to the properties of training devices or systems. An example of possible attributes is presented in Table 1.
Table 1. Training System Attributes
| Attribute | Description |
| --- | --- |
| System modularity | This attribute quantifies the number of modules that make up the system, detailing the likely customisation options and multi-functional capabilities. |
| Networked capability | This attribute details the quantity of nodes within the system, ranging from standalone through to fully networked systems. |
| Motor skills | This attribute lists the potential sensory inputs which may provide stimulus or be stimulated by the training, notably visual, aural, verbal, and/or muscular. |
| Decision-making (student) | The degree to which the student makes decisions within the training system, as opposed to the system presenting information that allows them to make a decision or making the decision on their behalf. |
| Decision-making (platform) | The degree to which the target training platform makes decisions autonomously, versus awaiting feedback or input from the student to act. |
| Role-play composition | The attribute which quantifies the number of roles within the training scenario (i.e. single operator versus multi-crew operations). |
| Student-machine teaming | The attribute which quantifies the ratio of student to machine within the operational environment and replicated in training. |
| Content complexity | This attribute represents the level of experience that a student has prior to training, ranging from ab initio through to experienced. |
| Simulator fidelity | The attribute that represents the fidelity of simulation, i.e. replication of physical, behavioural, software, or hardware elements of the represented capability. |
| Operational context | The attribute that quantifies the operational level of the system, i.e. whether it caters more towards fundamental training or fully operational employment. |
| System technology gap | The time attribute which represents the technology gap between the capability represented and the training technology. |
| Facilitation | The attribute which measures to what level the training is self-guided versus instructor-led. |
The key focus of this stage is building an understanding of how the TNA requirements model generates the training system attribute properties that trace back to the MET model objectives. Figure 9 follows on from the TNA requirements of Figure 8, extending them to demonstrate the connections made to a sample of system attributes from one TNA requirement (“Target Search”). The model, to this point, has conceptually described the training system in accordance with our mission need. This stage builds the key properties of the training solution by generating the attributes that will influence the delivered product and define its character. For example, attributes that relate specifically to student/instructor experience in the required field of expertise, or to crew composition, show that the training system needs to cater for a certain complexity of training while also catering for multiple crew roles in training scenarios. The next stage, in which these attributes are aligned with physical products or known designs, is where the conceptual model is translated into a tangible product and the true benefit of this activity is realised.
Figure 9 – Example TNA ‘requirements’ traced to training system attributes
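As an illustration of this attribute-generation step, the sketch below (Python) shows one possible way a set of TNA requirements could contribute values to a shared training-attribute profile. The attribute names echo Table 1, but the contribution values and the aggregation rule (keeping the strongest demand placed on each attribute) are illustrative assumptions only; in practice these would be set by the designer’s experiential analysis.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# Each TNA requirement contributes (attribute, value) pairs on a notional 0-1 scale.
# The specific contributions shown here are illustrative assumptions.
tna_contributions: Dict[str, List[Tuple[str, float]]] = {
    "Target Search":     [("NetworkedCapability", 0.6), ("SimulatorFidelity", 0.5)],
    "Classify Contact":  [("NetworkedCapability", 0.8), ("DecisionMaking", 0.7)],
    "Prosecute Contact": [("SimulatorFidelity", 0.9), ("ContentComplexity", 0.8)],
}


def derive_attribute_profile(contributions: Dict[str, List[Tuple[str, float]]]) -> Dict[str, float]:
    """Aggregate per-requirement contributions into one attribute profile,
    keeping the strongest demand placed on each attribute."""
    profile: Dict[str, float] = defaultdict(float)
    for pairs in contributions.values():
        for attribute, value in pairs:
            profile[attribute] = max(profile[attribute], value)
    return dict(profile)


mission_attributes = derive_attribute_profile(tna_contributions)
# e.g. {'NetworkedCapability': 0.8, 'SimulatorFidelity': 0.9, 'DecisionMaking': 0.7, 'ContentComplexity': 0.8}
```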
Matching Attributes to Products
A tangible product may be an existing system, such as a ‘commercial off-the-shelf’ (COTS) simulator or training device, or a package that includes software and hardware systems in the form of a procedural trainer. Tangible products may be designed from the information provided by the system attributes model already discussed, or they may be COTS solutions selected to match the attributes model. Attribute matching may occur through the use of ‘digital product templates’, where the digital attribute representations of known systems provide benchmarks against which the training attributes can be assessed. It is through the use of these Product Templates that the final stages of the data assessments can be made, in order to establish likely design solutions or inform decision-makers that there are no viable options.
The final step, herein referred to as a Product Alignment Analysis, provides the logic behind the generation of the training system solution; this paper does not prescribe exactly how the analysis is executed. A suitable mathematical approach would statistically examine the delta between the attribute values of each product template and the ‘mission’ training attributes; the product template with the least variation would be selected as the likely training system candidate. Regardless of the detail, the outcome is clear: the framework provides a uniform means by which conceptually represented systems can be compared to a training requirements set, allowing a design to be selected. It is the physical implementation of this design into a hardware/software-based system that is validated through an OT&E or similar program. This process both informs on the success of the digital alignment process and provides objective evidence that substantiates the training solution.
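A minimal sketch of the ‘least delta’ selection described above is given below (Python, with hypothetical attribute values on a common 0-1 scale). Each product template is scored by its Euclidean distance from the mission attribute profile and the closest template is proposed as the candidate; other distance measures or probabilistic formulations could equally be substituted.

```python
from math import sqrt
from typing import Dict


def alignment_delta(mission: Dict[str, float], template: Dict[str, float]) -> float:
    """Euclidean distance between the mission attribute profile and a product template.
    Attributes absent from a template are treated as not provided (zero)."""
    return sqrt(sum((value - template.get(name, 0.0)) ** 2 for name, value in mission.items()))


def select_product(mission: Dict[str, float], templates: Dict[str, Dict[str, float]]) -> str:
    """Return the name of the product template with the smallest delta."""
    return min(templates, key=lambda name: alignment_delta(mission, templates[name]))


# Hypothetical digital product templates
templates = {
    "Desktop trainer":          {"NetworkedCapability": 0.2, "SimulatorFidelity": 0.3, "DecisionMaking": 0.4},
    "Procedural trainer":       {"NetworkedCapability": 0.4, "SimulatorFidelity": 0.6, "DecisionMaking": 0.5},
    "Combat systems simulator": {"NetworkedCapability": 0.9, "SimulatorFidelity": 0.9, "DecisionMaking": 0.8},
}
mission = {"NetworkedCapability": 0.8, "SimulatorFidelity": 0.9, "DecisionMaking": 0.7}
best = select_product(mission, templates)  # -> "Combat systems simulator"
```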
Uncrewed System Anti-Access/Area Denial (A2/AD) Example
One area currently emerging as a new domain of warfare for both technology and operators is the use of un-crewed systems in the maritime domain and their role in anti-access/area denial (A2/AD) by an adversary. This mission scenario has been realised in both the Ukraine and Middle-Eastern (Red Sea) conflicts, where smaller State-based or issue-motivated groups have brought their more agile autonomous systems to bear on the larger surface vessels of major State-based actors (Dunley, R., 2024). The skills and preparation of operators to combat these novel threats can be rapidly enhanced by designing training systems that target counter-drone warfare techniques and tactics. This example will be used to demonstrate how the aforementioned techniques can assist in the design of such training systems.
Develop the MET
The find, fix, track, target, engage, assess (F2T2EA) process, colloquially known as the ‘kill chain’ (United States Air Force, 2021), is used as the basis for a mission thread relating to the prosecution of an autonomous surface vessel target by friendly forces. In this scenario, it is the multi-domain and integrated platform coordination across both un-crewed and crewed friendly assets that generates the desired effect against hostile un-crewed surface vessels. The use of friendly force elements in executing the various stages of the ‘kill chain’ within a mission engineering thread context is displayed in Figure 10. The MET draws in the uncrewed aerial system (UAS), combat systems, and helicopter mission system actors, and articulates their responsibilities within the thread.
Figure 10 – A2/AD USV Mission Engineering Thread
Building TNA from the MET
In the next stage, a set of TNA requirements can be elicited from the actions of the MET sequence, based on the behaviours and functions required to execute each step of the F2T2EA process. The experiential knowledge and discernment of the systems designer informs the requirements content and its traceability back to the MET. Figure 11 presents a high-level perspective of the training requirements that may be generated from each ‘kill chain’ process step. It is important to note that these requirements may be further broken down to the sub-level, thereby increasing the fidelity of analysis; however, this is not covered within the scope of this case study. This high-level TNA forms the baseline skills expectation for an operator required to conduct one or many of the F2T2EA ‘kill chain’ steps (e.g. to ‘Find’ (MET step) a contact, the operator needs to be trained to ‘Detect’ (TNA requirement) the contact).
Figure 11 – A2/AD USV MET to TNA Requirements
Using Training Attributes to select the Training Product
Using the database of training attributes pre-set by the system designers or their respective organisation (such as that developed in Table 1), the critical step of defining the training product characteristics is undertaken. Although the expertise of the system designer would generally be drawn upon to assemble a set of abstract attribute representations, for this example the attributes already detailed in Table 1 will be utilised. After attributes are identified, their values are determined under the influence of the TNA requirements generated in the previous stage. For instance, the TNA requirement Classify Contact impacts the value for the attribute NetworkedCapability, as the number of networked sensor nodes will enable classification information to be consolidated across disparate platforms. It also influences the DecisionMaking attribute, emphasising the need for enhanced decision speed in the classification process, particularly when leveraging automated means to do so. In terms of accurately reflecting operational sensors and feedback, the TNA requirements Analyse Effects and Prosecute Contact draw on the attributes ContentComplexity and SimulatorFidelity. Such attributes allow the representation of realistic and complex behaviours to ensure the operator can be trained to rapidly and accurately interpret representative data. This would not be appropriate if this were more of an ab initio system; however, our TNA requirements establish that this is not a simple theory-based learning tool, but rather an operational representation of an F2T2EA ‘kill chain’ mission scenario.
The aforementioned TNA-to-training-attribute relationships are detailed in Figure 12, along with the remaining TNAs and their influence on attribute traceability and weighting. The functional representation of Figure 12 may be suitable to commence development of a training system functional baseline, but in order to define a product baseline, a further Product Alignment Analysis needs to occur, so that the attribute data can be collated with ‘digital template’ representations of known trainer designs and comparisons conducted. Figure 13 demonstrates this comparison occurring between our attribute model and known digital representations of procedural trainers and OEM simulators. The end-result of the analysis is a metric of the plausibility of using a particular design to meet the training need of the MET. Given the heavy weighting placed on the attributes of networking, decision making, fidelity, and complexity, we would determine that the simulation device needs to be on the scale of at least a major combat systems simulator. The ‘procedural’ and ‘desktop’ trainers would likely not satisfy the scale required by the training attributes that have been developed. An ‘attribute parsing’ analytical technique can be used to determine the appropriate simulation device; however, for this example, the analysis has been limited to an experiential assessment rather than a mathematical comparison.
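Although the case study stops at an experiential assessment, a mathematical comparison would follow naturally from the earlier alignment sketch by weighting the attributes the case study emphasises (networking, decision making, fidelity, and complexity). The brief sketch below (Python, with illustrative weights only) shows one way such a weighted delta could be computed.

```python
from math import sqrt
from typing import Dict


def weighted_delta(mission: Dict[str, float],
                   template: Dict[str, float],
                   weights: Dict[str, float]) -> float:
    """Weighted Euclidean distance: heavily weighted attributes dominate the comparison.
    Attributes not listed in the weights default to a weight of 1.0."""
    return sqrt(sum(weights.get(name, 1.0) * (value - template.get(name, 0.0)) ** 2
                    for name, value in mission.items()))


# Illustrative weights emphasising the attributes highlighted in this case study
case_study_weights = {"NetworkedCapability": 2.0, "DecisionMaking": 2.0,
                      "SimulatorFidelity": 2.0, "ContentComplexity": 2.0}
```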
In concluding this example, it can be seen that the functional architecture of the end-product training system is traceable to the mission objectives that gave rise to it. These traceable and adaptable steps accentuate the agile nature of our un-crewed A2/AD training system. The traceability across the design gives the designer speed in building a justifiable and suitable solution, while also allowing changes in mission requirements to be easily related to downstream design dependencies. The option of selecting product designs based on easily generated or readily available templates both promotes the speed advantage and provides the designer with ‘commercial off-the-shelf’ assessment tools.
Figure 12 – A2/AD USV Requirements to Attribute Design (Product Design Characteristics)
Figure 13 – A2/AD USV Product Template Analysis
Discussions and Implications
The methods in this research, notably the output of the product analysis, produce a conceptualised system (described by attributes) which can be translated into a real-world system (based on the product templates that provide input to the analysis). The product templates can be adapted and expanded to include new technologies, new concepts, and new capabilities, so long as the attributes that define them have a means of relating back to training requirements. In practice, this means the methods discussed align well with an ‘agile’ approach, and would therefore also suit practices that adopt this approach (e.g. Scrum or Kanban project management techniques). Referring to the earlier discussion of the rationale for this research, agility was raised as a key advantage of the method. The DE approach, which provides a single source of truth for data and design collaboration, combined with traceability between requirements and end-product, means that a design can be easily adapted and stakeholders regularly informed when requirements change. The traceability also enables the automatic development of test cases for OT&E phases, in which validation across all FIC elements, including training, can be integrated with the broader system-of-systems digital model.
The application of this research has the potential to extend beyond the scope of training. Given the broader usage of DE methods across multiple domains, notably aerospace, systems, and Defence, there is no reason why a weapons system or platform could not apply the principles of mission-requirements traceability through to capability design in accordance with the research in this paper. One advantage of the method presented here relates to the strength of the mission engineering encapsulation, which draws capability development directly back to the relevant operational need. The other advantage centres on the functional elements of the ‘system design’, where the attributes endeavour to characterise behaviour and function rather than hardware or software system properties. There remains strong potential for further research and investigation to fill in the detail surrounding the analysis presented in this paper. This includes greater fidelity around the application of mathematical logic, or pursuing more complex methods, such as the use of artificial intelligence to quickly parse larger ‘product template’ datasets with a multitude of attributes. The possibilities are numerous, and speak to the method’s potential for scalability.
Conclusions
The technique developed in this paper has attempted to demonstrate how mission-based digital modelling can create more effective and targeted training systems that hone in on the outcomes desired for the student in the operational environment. The research has presented a traceable and justifiable approach that strives to deliver training systems that align with mission objectives and, in turn, draw back to the capabilities that end-users will ultimately employ to execute these missions within operationally contested environments. The incorporation of validation methods, such as supportability testing of training as a FIC element, only bolsters the suitability of the process and the outcomes it generates. The agility and adaptability of this method, and its alignment to the strategic need, demonstrate its growing importance in a global order that is becoming more uncertain and unpredictable by the day. The ultimate goal is to ensure that training does not become the bottleneck that prevents the capability and the end-user from delivering an effect.
Acknowledgements
The author would like to thank Phil Swadling and Michael York for their ongoing support and constructive feedback throughout every step of this research effort and journey. This paper would not have been possible without their expertise and knowledge, particularly in the domain of training and simulation.
References
Atlassian (2024, May 23). What is the Agile methodology?, https://www.atlassian.com/agile
Australian Army (2020). Army Training Transformation Program Strategy – March 2020, Victoria Barracks: Sydney.
Australian Defence Force (2021). ADF Philosophical Doctrine – Learning, Canberra: Commonwealth of Australia.
Binder, R.V. (2018, Aug 06). Introducing the Minimum Viable Capability Strategy, https://insights.sei.cmu.edu/blog/introducing-the-minimum-viable-capability-strategy/
Defence Acquisition University. (2024, May 23). MVP, MVCR, and Deployment Frequency, https://aaf.dau.edu/aaf/software/mvp-mvcr/
Department of Defence (2022). Defence Capability Manual, Canberra: Commonwealth of Australia.
Department of Defence (2023). National Defence: Defence Strategic Review 2023, Canberra: Commonwealth of Australia.
Department of Defence (2024). National Defence Strategy 2024, Canberra: Commonwealth of Australia.
Department of Defense (2023). Department of Defense Mission Engineering Guide (Version 2.0), Washington, DC: Office of the Under Secretary of Defense for Research and Engineering.
Department of Infrastructure (2016). National Digital Engineering Policy Principles, Canberra: Commonwealth of Australia.
Dibb, P. and Brabin-Smith, R. (2021) Deterrence through denial: A strategy for an era of reduced warning time. ASPI Strategy, May 2021. https://www.aspi.org.au/report/deterrence-through-denial-strategy-era-reduced-warning-time
Dunley, R. (2024). Ukraine-style naval attack drones present challenges, but they are not revolutionary, 21 March 2024. https://www.aspistrategist.org.au/ukraine-style-naval-attack-drones-present-challenges-but-they-are-not-revolutionary/
Goldenberg, M. (2022). Mission Engineering Methodology: Studies and Analysis, Washington, DC: Office of the Under Secretary of Defense for Research and Engineering.
González N.V. (2013). Factors affecting simulator-training effectiveness. University of Jyväskylä, Jyväskylä, Finland.
Harvey, D., Waite, M., Logan, P., and Liddy, T. (2012) ‘Document the Model, Don’t Model the Document’, Systems Engineering/Test and Evaluation Conference and 6th Asia Pacific Conference on Systems Engineering. Brisbane, Australia. May 1-2.
INCOSE (2024, May 23). Systems Engineering, https://www.incose.org/about-systems-engineering/system-and-se-definitions/systems-engineering-definition
Shevchenko, N. (2020, Dec 22). An Introduction to Model-Based Systems Engineering (MBSE), https://insights.sei.cmu.edu/blog/introduction-model-based-systems-engineering-mbse/
Sokolski, H.D. (2012). The Next Arms Race, Army War College (U.S.) Strategic Studies Institute, Carlisle, PA.
Sorensen, H. and Benjamin, W. (1990). ‘Implementing Front-End Training Design Through the Instructional Systems Development Process’, SAE Technical Paper 901944. https://doi.org/10.4271/901944
Thompson, K.D. (2024). How the Drone War in Ukraine Is Transforming Conflict. Council on Foreign Relations, 16 January 2024. https://www.cfr.org/article/how-drone-war-ukraine-transforming-conflict
United States Air Force (2021). Air Force Doctrine Publication 3-60 – Targeting, Washington DC: USAF Headquarters.
Williams, T. (2023, May 10). A guide to conducting a training needs analysis, https://open.uts.edu.au/insights/corporate-training/a-guide-to-conducting-a-training-needs-analysis/
Author Biographies
Viruben Watson is a Senior Defence Test & Evaluation and Systems Engineer with over 15 years of software and aerospace experience and a proven track record of managing and trialling the integration of complex technical systems. He has extensive experience across the Defence Aerospace sector, most recently as the Senior Systems Engineer within the Helicopter Aircrew Training System (HATS) simulation environment. Prior to this position, Viruben held senior positions within the Commonwealth as a consultant to Navy for the Destroyer Program, and served with distinction in the Royal Australian Navy as a Flight Test Engineer and Avionics Engineer for over 13 years. Throughout his career he has often led initiatives and programs that required the use of his specialist digital engineering expertise, as well as his strong systems engineering mindset. He has a Masters in Systems Engineering from UNSW Canberra, a Masters in Flight Test and Flight Dynamics from Cranfield University, and is a Graduate of the Empire Test Pilots’ School at Boscombe Down.
Dewey Classification: L 681 12


