DECEMBER 2025 | Volume 46, Issue 4
IN THIS JOURNAL:
- Issue at a Glance
- Chairman’s Message
Technical Articles
- Resource Implications and Benefits of Model-Based Acquisition Planning
- Advancing DOD Test & Evaluation Through a System Profile
- Digital Representations in Acquisition Lifecycle Phases
- Predicting Cyber Attack Probability using Probabilistic Attack Trees
- Information Technology (IT) System Reliability and Availability Testing
- Blast Test Standard Adaptation for Hazard Assessment of Evolving Construction Techniques
- Modern Beyond Line of Sight T&E with Autonomous Systems
- Book Review of Verification, Validation, and Testing of Engineered Systems
News
- Association News
- Chapter News
- Corporate Member News
Predicting Cyber Attack Probability using Probabilistic Attack Trees

William D. Bryant, Ph.D., CISSP, C|EH, Security+
Modern Technology Solutions Inc. (MTSI)
Introduction
Understanding how weapon systems and platforms will perform in cyber-contested environments is crucial for making rational programmatic and engineering decisions. If systems are not going to be effectively attacked from cyberspace, then little effort should go into cyber survivability and all resources should be focused on system performance or survivability in other threat domains. However, if systems are going to be effectively attacked from cyberspace, then we need a way to determine how best to spend our available resources to maximize the overall performance of the system to include an appropriate level of cyber survivability.
The appropriate level of cyber survivability should be determined by considering the mission risk posed by potential cyber attacks. Risk is defined by the Committee on National Security Systems (CNSS) Glossary as: “A measure of the extent to which an entity is threatened by a potential circumstance or event, and typically a function of: (i) the adverse impacts that would arise if the circumstance or event occurs; and (ii) the likelihood of occurrence.”1 Those adverse impacts and their probability are in turn typically seen as a combination of impact, vulnerability, and threat.2 Risk scoring can then be accomplished by multiplying the probability of a successful attack by how much mission capability would be lost, a value called Expected Mission Loss (EML).3 For example, if a system has a 10% chance of losing 50% of its mission capability, the EML is 5%.
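The EML arithmetic can be written out directly; the function name below is illustrative, not from the CNSS or URAMS definitions:

```python
def expected_mission_loss(p_attack_success, capability_lost):
    """Expected Mission Loss (EML): the probability of a successful
    attack multiplied by the fraction of mission capability lost."""
    return p_attack_success * capability_lost

# The example from the text: a 10% chance of losing 50% mission capability.
print(expected_mission_loss(0.10, 0.50))  # 0.05, i.e. a 5% EML
```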
Mission impact can be assessed by Subject Matter Experts (SMEs), who provide insights based on their domain expertise, or through modeling and simulation, which allows for testing scenarios and quantifying potential impacts in a controlled, repeatable manner.4 The probability of an adversary choosing to attempt an attack is a question best addressed by intelligence experts. The focus of this paper is on how to determine the probability that a particular cyber attack will be successful at creating a mission effect given an adversary has decided to attack using that approach, which is the equivalent of kinetic survivability’s probability of a weapon hitting the platform (PH).5
These probabilities are challenging to determine with acceptable accuracy, but fortunately, there are potential solutions to the challenge of usefully determining probability that can be pulled from research across a wide range of fields from psychology to engineering. In this paper I will propose that probabilistic attack trees scored using available data, simple linear models, and direct assessment provide a transparent and repeatable way to score the probability of success for a particular cyber attack and can provide more information than ordinal scores such as 1-5 or low, medium, and high.
Attack Trees
Attack trees were popularized in cybersecurity by Bruce Schneier in 1999,6 although the concept has roots in safety fault trees, and numerous variations in method and calculation have been utilized across a range of fields.7 An attack tree is a graphical hierarchical model that can be used to analyze the various ways an attacker can create a loss. The loss is the “root” at the top of the tree, and the various actions an attacker can take are “branches” and “leaves,” as the trees are typically shown “upside down” with the root at the top. An example attack tree for a hypothetical Unmanned Aerial System (UAS) is shown in Figure 1.

Figure 1—Notional Attack Tree Example
Notice that the various nodes can be connected with either an “or” gate, where only one branch must be true, or an “and” gate, where all branches must be true.
This type of attack tree provides a simple and intuitive way to combine many different possible attack pathways or loss scenarios into a single structure. This ability to logically connect attack pathways into an overall picture of risk focused on a potential loss can help to mitigate some of the challenges caused by a complex system’s large attack surface and many hundreds of potential attack paths.
Probabilistic Attack Trees
If the attack tree has probabilities assigned to the leaves at the bottom of the tree, the overall attack tree’s probability can be calculated to produce a probabilistic attack tree. Unfortunately, determining the probabilities of the initial leaves is perhaps the most difficult challenge of the entire process.
Historical and Design-Based Data
The first step is to leverage any reliable and relevant quantitative data. Douglas Hubbard famously maintains that you both “have more data than you think” and “need less than you think” to make meaningful quantitative measurements.8 Following our UAS example, consider that public data on the prevalence of insider threats is available from arrests and court cases. For example, one study by the Defense Personnel and Security Research Center found 83 confirmed cases of insider information exfiltration from 1985-2017.9 With an estimate of the total military and civilian population by year (averaging from about 3.1M in 1985 to 2.1M in 2017) and an assumption that the threat is uniform, the probability that any particular individual is an insider looking to exfiltrate information can be calculated at approximately 1:1,000,000. Of course, some will immediately argue that those are only the spies who were caught; what about all those who weren’t? While reliable public data is not available on the ratio of caught to uncaught spies, even if we assume a worst-case scenario in which 9 out of 10 spies get away with it and are never caught, the probability that a particular individual is an insider remains very low at 1:100,000. We can then apply this data to several of the leaves. For example, the probability of an operator insider depends on the number of operators who have access to the system and the probability that any one of them is an insider threat; if we assume 200 operators across the system, the overall probability becomes 0.002.
For the probability that an adversary gets an insider to give them the encryption key, we can use the same individual insider probability, but now there is a smaller population of potential insiders with access to encryption keys. If we assume 50 people, the probability of an insider giving an adversary the key is 0.0005. These simple calculations using historical data and system information (e.g., how many operators there are) can be accomplished for many different types of leaves in attack trees, and they are also reusable, as the same historical data can often be applied across multiple attack trees.
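The insider arithmetic above can be reproduced in a few lines. For small per-person rates the exact complement rule, 1 - (1 - p)^n, is effectively identical to the simple n × p scaling used in the text:

```python
def p_at_least_one_insider(p_individual, n_people):
    """Probability that at least one of n people is an insider,
    assuming independence: the complement of nobody being one."""
    return 1 - (1 - p_individual) ** n_people

p_insider = 1 / 100_000  # worst-case individual rate from the text

# 200 operators -> roughly the 0.002 used in the attack tree
print(round(p_at_least_one_insider(p_insider, 200), 4))  # 0.002
# 50 key holders -> roughly the 0.0005 used in the attack tree
print(round(p_at_least_one_insider(p_insider, 50), 4))   # 0.0005
```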
As another example of a data-driven probability, consider the probability that an adversary can get the communications link to shift into an unencrypted mode. Our notional system has been designed to accept only encrypted commands; if it loses its ability to receive encrypted commands, it defaults to a lost-communications profile and returns to base. Thus, we have assigned a probability of 0.0 to this leaf. Note that we did not remove the leaf, so it is still considered for testing, and an argument can be made that a low probability should be used instead of zero in case an adversary finds some clever way around the design. For this simple example, however, we set the probability to 0.0.
The attack tree with these data-driven scores is shown below in Figure 2.

Figure 2—Notional Attack Tree with Data Driven Scoring
The bottom leaves that still need scoring are outlined in green; a number of additional elements remain to be scored.
Simple Linear Models
There is a large body of research showing that simple models, even without weighting, generally outperform human-based direct assessment.10 These simple linear models can be built using several different methods, including historical data, human assessment, or even artificial intelligence trained on large data sets. Historical data is of course greatly preferred, but it will often not be available, so the next best option is usually a human informed model.
Human informed models rely on human-provided inputs instead of directly measured objective data, but still provide probabilities based on a mathematical model derived from human assessment instead of a direct measurement from the humans. It may seem odd that a model based solely on human expert input could be more accurate than direct assessment by those same experts, but that is exactly what studies show.11
Human informed models trace their lineage back to Egon Brunswik in the 1950s, who postulated that human decision makers utilize a discrete series of factors, or “cues,” that can be pulled out and considered separately to understand their decision making in a lens model.12
We can build targeted human informed models that apply to specific leaves on the attack tree. One example is below in Figure 3.

Figure 3—Example Atomic Component Model
This is called an atomic level model as it does not need to be broken down further and, in this case, represents an attacker who has privileged access to a system component, which is connected through some communication medium to a component that is going to be attacked.
For each of these models, criteria must be developed; in this case, they were broken into three main areas: attacker goal,13 component cyber robustness, and Secure Systems Engineering (SSE) process maturity, with five, three, and three options respectively. It is very important when constructing these models to keep the number of factors and options as low as possible while retaining the key factors, as the models otherwise become extremely difficult to build and maintain. In this particular case, building the model required a 45-question survey given to cyber test experts. An earlier version of the model with more factors required 720 questions, which was impractical. An example survey, which randomized the order of the questions, is shown in Figure 4.

Figure 4—Example Atomic Component Model Survey
The survey asks experts to provide the probability of successful attack given every combination of the factors, with a given set of assumptions and clear definitions for all of the terms. The challenges of using human assessors to estimate probabilities still apply, and are discussed more fully below, but a significant body of research shows this approach will outperform direct human assessment.14 With the survey results, a multiple linear regression can be performed to provide the coefficients for each of the terms, with the results shown in Figure 5.

Figure 5—Component Model Linear Regression Results
With this model, any component with any combination of the criteria can be assigned a probability. Additionally, simple statistical tests can be run to check whether the selected factors sufficiently predict the outcome and whether the results are statistically significant.
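The regression step can be sketched as follows. The survey data here is a synthetic stand-in and the dummy coding of the three factors is an illustrative assumption, not the article's actual survey results or coefficients:

```python
import numpy as np

# Synthetic stand-in for the 45-question expert survey: each row is one
# factor combination, dummy-coded (attacker goal: 5 levels -> 4 columns,
# robustness: 3 levels -> 2, SSE maturity: 3 levels -> 2; 8 columns total).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(45, 8)).astype(float)
y = rng.uniform(0.05, 0.60, size=45)  # elicited success probabilities

# Multiple linear regression via ordinary least squares with an intercept.
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(factors):
    """Predicted attack-success probability for one factor combination,
    clipped to [0, 1] since OLS does not enforce probability bounds."""
    return float(np.clip(coeffs[0] + coeffs[1:] @ factors, 0.0, 1.0))

print(0.0 <= predict(X[0]) <= 1.0)  # True
```

In practice the goodness-of-fit checks mentioned above (e.g., R-squared and coefficient significance) would be run on the real survey responses before using the model to score leaves.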
In the UAS example, the atomic model above applies to one of the leaves, so the probability of 0.30 was filled in.

Figure 6—Notional Attack Tree with Human Informed Model Scoring
In this particular example, the adversary was attempting to write malicious data to a component with average component cyber robustness that was developed using a compliance focused SSE process. With those inputs the probability of an adversary launching a successful attack given that they already have control of another device on the 1553 bus is 0.30 from the model. The probability of an adversary being on another component via a supply chain attack is estimated in the next step of direct SME assessments.
Direct Assessment
If there is no applicable historical or design data available, and no models built to predict the value for a particular leaf, the last option is to use direct human assessment. There are several known problems with using human SMEs to determine probabilities. One is that they tend to be overconfident in their assessment abilities, although that can be addressed with calibration training.15 A second is that SME assessments are typically given as point values with no way to express uncertainty, although methods such as 90% confidence intervals can be used to address this concern. A third major known issue is that humans tend to be inconsistent and will provide different results for the same problem at different times.16
Despite these challenges, there will be times when SME judgement is the most appropriate way to determine a probability. In our example, six values were filled in using SME assessments in Figure 7.

Figure 7—Notional Attack Tree with SME Assessment Scoring
It is worth noting that even SME-based probabilistic attack tree scoring has significant advantages over typical SME scoring, where an entire attack is scored on a 1-5 scale for probability. In that case the scoring is very broad, and discussions of “3” versus “4,” or “medium” versus “high,” do not have much specificity.17 In this attack tree example, the SME has stated that there is a 7% chance that an adversary could gain control of the ground station via a traditional-IT cyber attack over SIPR. That is a very specific prediction that can be discussed, argued about, and tested, versus an overall risk of “medium” or “high.”
Probabilistic Attack Tree Calculations
Once the bottom leaves all have probabilities assigned, calculating up to the root of the tree is a simple matter with the example results shown in Figure 8.

Figure 8—Notional Attack Tree with Calculated Probabilities
The calculated values at “and” connections simply require multiplying the various branches. For example, the probability of an adversary sending a spoofed navigation message on the 1553 bus in the middle of the attack tree requires both the adversary being in control of a device and successfully sending a message, so that probability is 0.01 x 0.45 = 0.0045. Assuming independence of the terms, “or” probabilities are calculated by:

P(or) = 1 - (1 - P1)(1 - P2) ... (1 - Pn)
So, the probability of an adversary gaining control of the ground station on the left side of the attack tree is calculated by applying this same formula to the probabilities of that node’s branches.
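The propagation rules described above can be captured in a small recursive evaluator. The node encoding below is an illustrative sketch, not a format from the article:

```python
def evaluate(node):
    """Recursively compute the success probability of an attack-tree node.
    Leaves carry a probability; "and" gates multiply their children;
    "or" gates (assuming independence) use the complement rule."""
    if "p" in node:  # leaf node
        return node["p"]
    child_ps = [evaluate(c) for c in node["children"]]
    if node["gate"] == "and":
        out = 1.0
        for p in child_ps:
            out *= p
        return out
    # "or" gate: 1 - product of (1 - p_i)
    out = 1.0
    for p in child_ps:
        out *= (1.0 - p)
    return 1.0 - out

# The "and" example from the text: control of a bus device (0.01) AND a
# successful spoofed message (0.45) -> 0.0045.
spoof = {"gate": "and", "children": [{"p": 0.01}, {"p": 0.45}]}
print(round(evaluate(spoof), 6))  # 0.0045
```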
Look for Opportunities to Improve Scoring
Even with a complete set of calculations in the attack tree and an overall probability of successful attack, there may be opportunities to improve upon the initial probability measurement. The first step is to look at the elements scored with human informed models or SME assessment and see if historical or system-level data exists that could inform those scores. Second, analysts can look at the SME-assessed leaves for additional opportunities to build human informed models instead of relying on direct SME assessment. Finally, nodes with high probabilities or high uncertainty may be areas in which to expend scarce test resources to either validate or update the scores. Some of the nodes that could be considered for improvement in the example are shown in Figure 9.

Figure 9—Potential Opportunities to Improve the Notional Attack Tree
Look for Opportunities to Improve the System
With the ability to understand not just the overall probability of a particular cyber attack but also the probabilities of the various pathways to it, the Return on Investment (ROI) of potential mitigation options can be calculated as an aid to better decision making. In the example above, two of the highest probabilities contributing to various attack paths involve adversaries getting on the system’s ground station through a traditional-IT cyber attack, so this may be a high-priority area to examine. Different potential mitigations, including hardening, active cyber defense, or architectural changes, could be considered and the overall risk to the system recalculated for each proposed mitigation. If resources are limited, as they always are, then a mitigation can be selected that is within budget and provides the greatest reduction in risk for the available resources.
Sensitivity Analysis
One way to identify which leaves are most significant to the overall probability of adversary success is through a sensitivity analysis. Elasticity, the relative amount the root value changes for a relative change in a leaf score, can show which leaf nodes should be focused on. The equation to calculate elasticity is:

Elasticity = (ΔP_root / P_root) / (ΔP_leaf / P_leaf)
This gives an idea of how much the probability of the overall risk changes with a change in the probability of an individual leaf node.

Figure 10—Notional Attack Tree Sensitivity Analysis with Elasticity
In this notional example, looking at hardening the ground station to attacks over SIPR seems like the area with the greatest potential payoff.
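Elasticity can also be estimated numerically by perturbing one leaf and measuring the relative change at the root. The finite-difference helper below is a sketch; the root function used is an illustrative stand-in, not the article's tree:

```python
def elasticity(root_fn, leaf_p, delta=1e-6):
    """Numerical elasticity: the relative change in the root probability
    per relative change in one leaf probability, i.e.
    (dP_root / P_root) / (dP_leaf / P_leaf)."""
    p0 = root_fn(leaf_p)
    p1 = root_fn(leaf_p + delta)
    return ((p1 - p0) / p0) / (delta / leaf_p)

# Illustrative root: the leaf "or"-ed (independently) with a fixed 0.05 branch.
root = lambda p: 1.0 - (1.0 - p) * (1.0 - 0.05)

print(round(elasticity(root, 0.07), 3))  # about 0.571
```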
Uncertainty
Up to this point, I have been utilizing simple point probability measurements, such as the 7% in the example, to keep things as simple as possible. This might seem to imply that when a human assessor says 7%, the correct answer is exactly 7%, not 6% and not 8%. That is clearly not the case, as the research shows that humans are generally quite bad at assessing probabilities and tend to be overconfident, inconsistent, and subject to a number of psychological biases.18 My claim instead is that 7% is more useful than “low” because it both contains more information, by providing a ratio score of how “low,” and enables the use of quantitative tools such as probabilistic attack trees to combine risks, as well as simulations to determine mission impact.
Unfortunately, that still leaves the issue of the large uncertainty around human generated probabilities. Ideally, that uncertainty should be captured in either a qualitative or quantitative way. A simple qualitative approach is to ask the assessors to provide a “confidence level” from 0-100% to accompany each input. Those confidence scores can then be averaged across assessors and through the attack tree to highlight risks that appear to have lower or higher confidence.
Another method, and one that I typically prefer, is the use of 90% confidence intervals, where an assessor provides as the input the range within which they think there is a 90% chance that the true value lies, instead of a point value. Thus, instead of providing an input of 7%, they would provide something like 1-15%. The width of this confidence interval provides information on how much uncertainty the assessor is aware of in the measurement. While untrained human assessors tend to be overconfident, with ranges that are significantly too small, calibration training can bring about 85% of people to a point where their 90% confidence intervals are right about 90% of the time.19
Using 90% confidence intervals does complicate calculation in an attack tree, as Monte Carlo simulations across the probability distributions are needed instead of simple mathematical functions, but fortunately tools like R or even Excel can easily accomplish these calculations. The same notional attack tree example using 90% confidence intervals is shown in Figure 11.

Figure 11—Notional Attack Tree with 90% Confidence Intervals
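A Monte Carlo propagation of 90% confidence intervals can be sketched as follows. Treating each interval as a lognormal distribution whose 5th and 95th percentiles match the assessor's bounds is one common modeling choice, an assumption here rather than the article's prescription, and the example ranges are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

def sample_ci(low, high):
    """Draw N samples for a leaf scored as a 90% confidence interval,
    modeled as a lognormal whose 5th/95th percentiles hit the bounds."""
    z95 = 1.6449  # standard-normal 95th percentile
    mu = (np.log(low) + np.log(high)) / 2.0
    sigma = (np.log(high) - np.log(low)) / (2.0 * z95)
    return np.clip(rng.lognormal(mu, sigma, N), 0.0, 1.0)

# Two "or"-connected leaves scored as ranges, e.g. 1-15% and 2-10%.
a = sample_ci(0.01, 0.15)
b = sample_ci(0.02, 0.10)
root = 1.0 - (1.0 - a) * (1.0 - b)  # independence-assuming "or" rule

# Summarize the root as its own 90% interval.
print(np.percentile(root, [5, 95]).round(3))
```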
Validation
The theory behind probabilistic attack trees might be sound, but that is less relevant to DoD stakeholders than the more pragmatic question of whether they lead to better decisions than competing methods such as 5×5 ordinal risk matrices. The decisions we are looking to influence are generally programmatic and operational ones that result in a system being built or used differently.
Probabilistic attack trees do have several advantages over more common methods, to include that they force the focus of risk assessment onto the analysis of mission risk and mission loss versus the more common focus on vulnerabilities, and that their quantitative nature makes it possible to validate whether their predictions are congruent with test results. A broader discussion of validating quantitative cyber risk analysis using test can be found in the December 2022 ITEA Journal article on “Measuring the Measurers: Using Test to Validate Cyber Risk Assessments”.20
Essentially, if a quantitative cyber risk assessment using probabilistic attack trees is completed for a system before it is tested, it will provide a concrete probability, or range of expected probabilities, that a particular attack can be accomplished. The cyber test then provides a set of data on what was successful and what was not that can be compared against the predictions. A single prediction and test result will not be enough, but when multiple predictions and test points are compared, it should be possible to use them as evidence for, or against, the plausibility of the predictions. This is only possible if the attack trees are explicitly quantitative; if the probability of a successful attack is determined to be “moderate,” nothing useful can be said about the prediction’s accuracy, regardless of whether the actual test was successful or not.
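Because the predictions are numeric, one standard way to score them against pass/fail test outcomes is the Brier score, the mean squared difference between predicted probabilities and observed results. This is a suggestion here, not a metric prescribed by the article, and the data below is hypothetical:

```python
def brier_score(predictions, outcomes):
    """Mean squared error between predicted probabilities and binary
    test outcomes (1 = attack succeeded, 0 = it failed). Lower is
    better; always guessing 0.5 would score 0.25."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Hypothetical predicted attack-success probabilities vs. test results.
preds = [0.07, 0.30, 0.45, 0.002]
seen  = [0,    0,    1,    0]
print(round(brier_score(preds, seen), 4))  # 0.0994
```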
Conclusions
Determining a usefully accurate measurement for the probability that an adversary will be able to successfully launch a particular cyber attack continues to be a challenging problem for assessing cyber survivability. Probabilistic attack trees with probability values derived from available historical data, simple linear models, and direct assessment provide a solution that also works well within modern model-based systems engineering environments.
This manuscript introduced a quantitative method for employing attack trees anchored in quantified probability values. Since these probabilities are approximated with varying degrees of accuracy, uncertainty quantification and model validation serve to characterize the accuracy of the approach for a particular system. Future research should focus on empirical validation of the approach on real complex systems to test how well the method can be implemented at that scale. Additional case studies will involve finding applicable historical data and building more human informed models, which can be reused and can potentially help with scaling the approach to more complex systems with more attack trees.
Adoption of probabilistic attack trees by a program will not be overly complex, but it will involve finding appropriate historical data, building simple linear models, and creating the initial set of probabilistic attack trees. Once those initial steps are complete, the reusability of the components and models will make further iterations on the design and on cyber survivability measurements both faster and easier. When this improved measurement of cyber attack probability is coupled with a meaningful measurement of mission impact, whether derived from mission experts or modeling and simulation, a true understanding of cyber survivability risk emerges that can inform mitigation design, test planning, and resource allocation.
Probabilistic attack trees have the potential to become a cornerstone of modern cyber risk assessment, with quantitative results that are transparent, repeatable, and easily understood, enabling defense programs and operators to make better decisions. This approach offers a reliable and scalable way to safeguard mission-critical platforms and weapon systems so that they can continue to function as intended despite whatever an adversary may throw at them.
References
1Committee on National Security Systems. (2015). Committee on National Security Systems (CNSS) Glossary: CNSSI No. 4009. Committee on National Security Systems, Washington DC, p 169.
2 Andrew Brown, Bill “Data” Bryant, Erik Moro, and Matt Standard. (2023). The Unified Risk Assessment and Measurement System (URAMS) Guidebook, edited by Dr. Bill “Data” Bryant, Version 3.0. www.mtsi-va.com/weapon-systems-cybersecurity/, p 4.
3 Andrew Brown, Bill “Data” Bryant, Erik Moro, and Matt Standard. (2023). The Unified Risk Assessment and Measurement System (URAMS) Guidebook, edited by Dr. Bill “Data” Bryant, Version 3.0. www.mtsi-va.com/weapon-systems-cybersecurity/, p 55.
4 Bryant, William “Data.” (2025). Using M&S to Determine Cyber Survivability: Score Small and Let the Machines Do the Math. Aircraft Survivability Journal, Spring 2025. https://jasp-online.org.
5 R. E. Ball. (2003). The Fundamentals of Aircraft Combat Survivability Analysis and Design, 2nd ed. American Institute of Aeronautics and Astronautics, Reston, VA, pp 10-20.
6 Bruce Schneier. (1999). Attack Trees: Modeling Security Threats. Dr. Dobb’s Journal, December 1999, https://www.schneier.com/academic/archives/1999/12/attack_trees.html.
7 Barbara Kordy, Ludovic Piètre-Cambacédès, Patrick Schweitzer, DAG-based attack and defense modeling: Don’t miss the forest for the attack trees, Computer Science Review, Volumes 13–14, 2014, pp 1-38, ISSN 1574-0137, https://doi.org/10.1016/j.cosrev.2014.07.001.
8 Douglas W. Hubbard and Richard Seiersen. (2023). How to Measure Anything in Cybersecurity Risk, 2nd ed. Wiley, Hoboken, NJ, p 101.
9 Stephanie L. Jaros, Katlin J. Rhyner, Shannen M. McGrath, and Erik R. Gregory. (2019). The Resource Exfiltration Project: Findings from DoD Cases, 1985-2017. Defense Personnel and Security Research Center, Office of People Analytics, March 2019.
10 Robyn M. Dawes. (1979). The Robust Beauty of Improper Linear Models in Decision Making. American Psychologist, Vol. 34, No. 7, July 1979, pp 571-582.
11 Douglas W. Hubbard and Richard Seiersen. (2023). How to Measure Anything in Cybersecurity Risk, 2nd ed. Wiley, Hoboken, NJ, p 185.
12 Brunswik, Egon. (1956). Perception and the Representative Design of Psychological Experiments. University of California Press, Berkeley, CA.
13 This list of five attacker goals was developed by the Johns Hopkins University Applied Physics Laboratory (JHU/APL) as part of their ongoing High Adversary Tier Threat Response Interdicting Cyberspace Kill-chain (HAT TRICK) work, see U.S. Department of Defense, Office of the Director, Developmental Test, Evaluation, and Assessments, Office of the Under Secretary of Defense for Research and Engineering. Cyber DTE Guidebook V3. June 2025. pp. 125–127.
14 Douglas W. Hubbard and Richard Seiersen. (2023). How to Measure Anything in Cybersecurity Risk, 2nd ed. Wiley, Hoboken, NJ, p 185.
15 Douglas W. Hubbard. (2014). How to Measure Anything: Finding the Value of Intangibles in Business, 3rd ed. Wiley, Hoboken, NJ, p 98.
16 Douglas W. Hubbard and Richard Seiersen. (2016). How to Measure Anything in Cybersecurity Risk. Wiley, Hoboken, NJ, p 70.
17 Douglas Hubbard and Dylan Evans. (2010). Problems with Scoring Methods and Ordinal Scales in Risk Assessment. IBM Journal of Research and Development, Vol. 54, No. 3, May/June 2010, p 2:4.
18 For an overview of psychological biases, see Daniel Kahneman. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux, New York, NY.
19 Douglas W. Hubbard. (2014). How to Measure Anything: Finding the Value of Intangibles in Business, 3rd ed. Wiley, Hoboken, NJ, p 113.
20 Bryant, William D. “Data.” (2022). Measuring the Measurers: Using Test to Validate Cyber Risk Assessments. The ITEA Journal of Test and Evaluation, 43(4), pp 224-230.
Author Biographies
Dr. Bill “Data” Bryant is a cyberspace defense and risk leader with a diverse background in operations, engineering, planning, and strategy. His research interests include quantitative and probabilistic risk assessment, cyber survivability, full-spectrum survivability, and utilizing MBSE and M&S in secure systems engineering.
In his current role at Modern Technology Solutions Incorporated, Dr. Bryant created the Unified Risk Assessment and Measurement Process (URAMS). With a focus on assessing the cyber risk to aviation platforms and weapon systems, Dr. Bryant has supported numerous strategic and operational efforts for cyber resiliency, survivability of weapon systems, and cybersecurity risk assessments on various critical cyber-physical systems across multiple agencies. Dr. Bryant also co-developed Aircraft Cyber Combat Survivability (ACCS) with Dr. Bob Ball and has been working to apply kinetic survivability concepts to the new realm of cyber weapons.
With over 25 years in the Air Force—including serving as the Deputy Chief Information Security Officer—Dr. Bryant has extensive experience successfully implementing proposals and policies to improve the cyber defense of weapon systems. He holds a wide range of academic degrees, in addition to his PhD, including Aeronautical Engineering, Space Systems, Military Strategy, and Organizational Management. He also holds CISSP, C|EH, and Security+ certifications.
Dewey Classification: L 681 12

