MARCH 2025 | Volume 46, Issue 1
IN THIS JOURNAL:
- Issue at a Glance
- Chairman’s Message
Conversations with Experts
- Testing Without Being a Tester: A Conversation with Dr. Bill D'Amico
Technical Articles
- INNOVATION Independent Automated Verification and Validation Testbed for Test and Evaluation
- Defect Characterisation in ICT Scanned Energetic Materials Using Machine Learning
- SCOUT - Pushing High Performance Computing to the Data
- Real-Time Inference for Unmanned Ground Vehicles Using Lossy Compression and Deep Learning
- An AI Model Performance Benchmarking Harness for Reproducible Performance Evaluation
News
- Association News
- Chapter News
- Corporate Member News
Defect Characterisation in ICT Scanned Energetic Materials Using Machine Learning
Lucy Green
QinetiQ, Cody Technology Park,
Farnborough, UK, GU14 0LS
Rebecca Jones
QinetiQ, Cody Technology Park,
Farnborough, UK, GU14 0LS
Alexander Milroy
QinetiQ, Cody Technology Park,
Farnborough, UK, GU14 0LS
Abstract
Solid-propellant rocket motors (SRMs) are commonly used in spaceflight and in military missile defence systems. Reliability and durability are critical traits for an SRM; however, both the ageing process and environmental exposure can lead to the formation of defects. Following non-destructive testing of an SRM, Industrial Computed Tomography (ICT) imagery is used to support manual defect analysis. This paper proposes a two-step machine learning solution for the automatic detection and characterisation of selected defects within an SRM using ICT imagery. From the output of the machine learning algorithm, an interactive three-dimensional visualisation of the defects within an SRM is generated. Initial results reveal that a fine-tuned Faster R-CNN model was able to achieve an accuracy of 84% in identifying and detecting defects. Following this, binary thresholding was able to calculate exact defect area with 87% accuracy. Results indicate that a machine learning process can provide significant speed-up for SRM defect analysis, with processing completed within 30 minutes for an example SRM ICT scan volume. A digital map of the defects within an SRM will provide a framework to support simulation and predictive modelling of defect propagation throughout the lifetime of an SRM.
Keywords: Machine Learning, Computer Vision, Energetics, Solid Propellants (SPs), Solid Propellant Rocket Motor (SRM)
Introduction
Historically developed for military use, solid-propellant rocket motors (SRMs) have a range of applications, from spaceflight (Hendel 1965) to defence missile systems. Owing to their usage in such critical systems, reliability and durability are essential traits for an SRM. Failure modes of the SRM have been rigorously studied (McDonald 2010) (Sojourner 2015), with the structural integrity of the solid-propellant fill highlighted as a crucial issue (Martin 1972). Throughout its lifetime, an SRM is exposed to temperature fluctuations, varying load conditions, and vibration and acceleration during transportation. These factors can cause stresses within the SRM and, alongside ageing, lead to a variety of deformities (Yildirim and Ozupek 2011). The defects encountered in SRMs are voids, cracks, uncured material, porosity in the propellant grain and debonding (of various forms). If any of these defects become critical, they invariably lead to catastrophic case overheating and overpressures caused by burn surface abnormalities (Harris 1963).
SRMs regularly undergo non-destructive testing (NDT) in order to detect the various types and extents of anomalies in the inert load of the motor, and around its peripheral liner and casing (Harris 1963). Following testing, the SRM will be X-rayed and/or scanned using an industrial computed tomography (ICT) scanner. Although X-ray radiography is an efficient method to check for large defects (Remakanthan 2014), an industrial CT scanner provides high density resolution, making it suitable for detecting the entire range of defects in an SRM, with additional spatial information (Fan and Tan 2013). Once the SRM has been ICT scanned, an energetics subject matter expert (SME) analyses the entire ICT scan volume to look for defects in the SRM. This is done manually by the naked eye, and is an extremely time-consuming activity for the SME. A report published by the Hudson Institute in 2022 highlights the supply chain challenges posed by a dwindling energetics SME pool (Schadlow, et al. 2022). Thus, the requirement for automated solutions which create time-savings for energetics SMEs is becoming increasingly relevant.
In recent years, there has been rapid advancement in the use of machine learning and computer vision for quality control and defect detection in the manufacturing industry (Jha and Babiceanu 2023). Deep learning and convolutional neural network (CNN) based approaches have become standard in image object detection and classification tasks; this is due to their efficiency over traditional computer vision approaches (O’ Mahony, et al. 2019), coupled with their ability to achieve high accuracies (Krizhevsky, Sutskever and Hinton 2017). A popular approach for image segmentation tasks is the identification of a region of interest (ROI) containing the feature of interest, followed by a computer vision method to accurately characterise it. This approach is employed frequently for the analysis of medical imagery owing to the efficiency and accuracy of this technique, as demonstrated by (Zanddizari, et al. 2021) and (Moirangthem, et al. 2021), amongst other studies.
Detection of typical SRM defects (voids and cracks) using deep learning has been explored previously in a range of materials. Wei et al. demonstrated an automated approach for detecting and segmenting air voids in concrete (Wei, et al. 2021), while Machado et al. have completed similar work for voids in composite laminates (Machado, et al. 2022). Over the past decade, several CNN-based approaches have been proposed for crack detection in concrete (Ali, et al. 2022) (Silva and Lucena 2018) (Ma, Fan and Xie 2024). A study by Alnajjar et al. comparing popular CNN architectures for this task found that most CNNs achieved similar accuracy results over their dataset, with a custom CNN leading slightly when the dataset is scaled (Alnajjar, et al. 2021).
Recently, machine learning has been increasingly applied to the field of energetic systems for defect diagnosis. Liu and Sun applied a deep learning network over sensor outputs in order to estimate bore crack length and delamination (Liu and Sun 2021). Through the comparison of popular CNN architectures, Yan et al. show that a multi-task learning approach is superior for defect segmentation and classification in solid propellant (Yan, et al. 2022). More recently, Li et al. propose a lightweight, pixel-level segmentation method to detect common defects in X-ray images of SRMs (Li, et al. 2023). Their work shows that deep learning is well suited for the task of accurately characterising a range of defects in a radiographic image of an SRM.
In this study, a two-step approach for the detection and characterisation of a range of common SRM defects in ICT imagery is proposed. The architecture is built on a sequential structure. A CNN will classify and localise each defect in the imagery, after which a traditional computer vision approach will be applied to each region of interest (ROI) in order to segment and characterise the defect. The output of the machine learning model will be used to create an interactive, three-dimensional model of the defects present in an ICT scanned SRM.
Figure 1: Labelled diagram of a solid-propellant rocket motor (SRM).
Materials and Method
Data formulation
The dataset consists entirely of image data, representing industrial CT scans of a range of energetic samples. Included are 16 samples of solid propellant from two separate mixes, machined into cuboids. Also included in the dataset was an ICT scan of an SRM, containing anomalies in the main propellant of the motor. Defects present in both the machined samples and the SRM included voids, cracks and an additional class to remain unnamed (Class C), Figure 2. Owing to their frequency in the dataset, these are the defects which will be classified and quantified by the machine learning algorithm.

Figure 2: Examples of defects in propellant grain. Left-to-right: a) crack, b) well defined (large) voids, c) poorly defined (small) voids.
The entire dataset contains 11,974 images, with 80% used for training the model and the remaining 20% held back for validation. The 80:20 ratio provided a balance between maximising the training data available and minimising variance in performance testing; this split is widely acknowledged as optimum (Szegedy, et al. 2015). Images were split randomly between training and validation datasets. The validation dataset was verified to ensure it included data containing defects representative of the entire sample. Raw images were 2048×2048 16-bit greyscale TIFF (Tag Image File Format), but were scaled down to 512×512 JPEGs for model training. Defects (voids, cracks and unnamed Class C) in images were manually labelled with bounding boxes as ground truth. Labelling was carried out by a non-SME team that had received prior training in defect detection from an SME. SME validation was provided on a small representative sample of the labelled dataset. Images were augmented by flips along the x and y axes in order to create spatial invariance in the data, and by random manipulation of the greyscale intensity prior to conversion to JPEG for intensity invariance. The resultant dataset contained 78,247 examples of voids, 790 cracks and 973 Class C.
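The split and augmentation steps described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' pipeline: the function names and the greyscale-jitter range (0.8–1.2) are assumptions.

```python
import random

import numpy as np

def split_dataset(items, train_frac=0.8, seed=0):
    """Randomly partition items into training and validation sets (80:20 split)."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def augment(image, rng):
    """Flips along the x/y axes plus greyscale intensity jitter, as in the paper."""
    out = [image, np.flipud(image), np.fliplr(image)]
    # Random intensity manipulation prior to JPEG conversion (factor range assumed).
    factor = rng.uniform(0.8, 1.2)
    out.append(np.clip(image.astype(float) * factor, 0, 65535).astype(image.dtype))
    return out
```

On the paper's 11,974 images, the 80:20 split yields 9,579 training and 2,395 validation images.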
Deep learning for ROI detection
Convolutional neural networks (CNNs) are a type of machine learning primarily used for image classification and object recognition tasks. They are a branch of deep learning, containing multiple layers between the input and output layers. The CNN architecture takes inspiration from neural connections in the human brain; in a CNN the neurons are arranged in layers, where each node connects to another and has an associated weight and threshold. If the output of a node is above a specified threshold, the node is activated and data flows through to the next layer of the network.
CNNs are made up of a variety of layer types, each of which leverages the data differently to aid the feature extraction pipeline. The convolution layer is essential for feature extraction, transforming the input image into a set of feature maps. Each feature map represents the presence and intensity of certain features at points in the image. Shallow layers identify basic features such as edges and colours, while deeper layers receive input from previous layers and detect more complex features. A series of non-linear transformations is conducted over the feature maps such that the network learns the relationships between the input and output. Finally, a fully-connected layer is responsible for classifying the image. Each feature map contributes to an output score for a class based on the features considered.
A wide range of convolutional neural network (CNN) architectures exists today; each architecture has its own strengths and weaknesses, and each is better suited to different classification problems. The PyTorch implementation of three CNNs was considered: RetinaNet (Lin, et al. 2017), VGG16 (Liu, et al. 2015) and Faster R-CNN (Ren, et al. 2015). These networks were compared, with training loss and accuracy metrics measured for each architecture, Table 1. The metrics considered were Intersection over Union (IoU), mean Average Precision (mAP), inference speed (inference frames per second), and area under the training loss curve. Based on this comparison, Faster R-CNN was chosen as the most suitable model to perform the classification and ROI-finding step, Figure 3.
The IoU measures the overlap between the model’s predicted bounding box and the ground truth bounding box, evaluating the model’s ability to precisely localise objects within an unseen image. As illustrated in Equation 3, it is the area of overlap of the two boxes divided by the area of their union. A higher IoU score indicates better alignment between model predictions and ground truth. In this study, IoU has been calculated separately for each class.
Equation 3: Intersection over union equation.
IoU = area of overlap / area of union
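For axis-aligned bounding boxes, the IoU of Equation 3 reduces to a few lines of code. The sketch below is a generic implementation, not the authors' code, assuming boxes in (x1, y1, x2, y2) form:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes score 1.0, disjoint boxes score 0.0, and partial overlaps fall in between.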
The mAP takes the mean of the average precision (AP) scores across all classes to quantify how effective a model is at performing a given query, where each class’s AP summarises the area under its precision-recall curve. It is found as follows, Equation 4:
Equation 4: Mean average precision equation.
mAP = (1/N) × Σ AP_i, summed over classes i = 1, …, N
Table 1: Comparison results of different bounding box classification networks. Mean Average Precision (mAP), Intersection over Union (IoU), speed (inference frames per second (FPS)) and area under the training loss curve (AUC) were compared.
| Method | mAP (%) | IoU (%) | Speed (FPS) | Loss (AUC) |
| VGG16 | 64.85 | 63.45 | 3.63 | 180 |
| RetinaNet | 70.92 | 63.71 | 5.80 | 52 |
| Faster R-CNN | 81.24 | 72.37 | 4.35 | 40 |

Figure 3: Faster R-CNN for bounding box prediction and classification.
For the training cycle, model parameters were as follows: learning rate fixed at 0.0005, momentum fixed at 0.9, decay fixed at 0.0005. A learning rate scheduler was trialled for the training cycle; however, on analysis of loss and efficiency, no significant improvement was observed, so the learning rate remained fixed. Momentum and decay were optimised through systematic permutation; the values which achieved the lowest validation loss were selected. The loss function used in the model was the PyTorch implementation of Cross Entropy Loss (Good 1952). The model was trained with an early stopping function implemented to avoid overfitting to noise in the sample data. The early stopping function terminated model training once validation loss had increased five times consecutively. The model achieved optimum loss at ~80 epochs with a training batch size of 12.
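The early stopping rule described above, terminating once validation loss has risen five times in a row, can be sketched as a simple predicate over the per-epoch validation losses. This is a generic sketch, not the authors' code, and interpreting "five times" as five consecutive strictly increasing epochs is an assumption:

```python
def early_stop(val_losses, patience=5):
    """Return True once validation loss has risen for `patience` consecutive epochs."""
    if len(val_losses) <= patience:
        return False
    # The last patience+1 losses must be strictly increasing.
    recent = val_losses[-(patience + 1):]
    return all(recent[i] < recent[i + 1] for i in range(patience))
```

In a training loop this predicate would be checked after each epoch's validation pass, with the best checkpoint saved before the loss began rising.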
The model was trained on a Windows 10 (64-bit) desktop PC with an 11th Gen Intel® Core™ i9-11900F @ 2.50 GHz processor and 23 GB of RAM installed. Model training and validation were undertaken on the machine’s GPU using the CUDA package (Nickolls, et al. 2008). The GPU used for training was an NVIDIA GeForce RTX 3060.
ROI segmentation
Traditional computer vision methods use deterministic, rule-based algorithms to analyse images and extract features that identify objects. In place of further deep learning, a thresholding post-processing step is applied to each bounded ROI output by the CNN, allowing further analysis of relevant features. Each ROI is segmented to accurately characterise the defect by identifying its contours and calculating properties such as area, major and minor axes, location and volume. For efficiency, the ROIs are treated as cropped images and passed through an iterative image processing algorithm developed using OpenCV (Bradski 2000).
A Gaussian blur with a 5×5 kernel is applied over the ROI in order to attenuate noise and standardise intensity variations between pixels, facilitating more effective edge detection. Image thresholding, a technique that converts greyscale images into binary images based on a pixel intensity threshold, is then applied to segment the voids from the background. Otsu’s method (Otsu 1975) automatically calculates this intensity threshold by minimising the intra-class variance, separating the pixels into two classes. Applying Otsu’s method over the ROI binarises the image and, from this, the boundaries of the void are determined and returned as an array of contours. To support the creation of 3D visualisations, a mask is created for each frame, with each defect uniquely coded.
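Otsu's threshold selection can be re-implemented in a few lines of NumPy; the sketch below is a stand-in for the OpenCV call an actual pipeline would likely use (`cv2.threshold` with `cv2.THRESH_OTSU`), relying on the fact that maximising between-class variance is equivalent to minimising intra-class variance:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method for an 8-bit greyscale image: pick the threshold that
    minimises intra-class variance (equivalently, maximises between-class variance)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = gray.size
    cum_w = np.cumsum(hist)                       # pixel counts below each threshold
    cum_mean = np.cumsum(hist * np.arange(256))   # weighted intensity sums
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum_w[t - 1]          # background class: pixels < t
        w1 = total - w0            # foreground class: pixels >= t
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0
        m1 = (cum_mean[-1] - cum_mean[t - 1]) / w1
        between = w0 * w1 * (m0 - m1) ** 2        # between-class variance
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```

Pixels at or above the returned threshold form the binary mask from which contours are then traced.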
The contours generated can subsequently be used to obtain the cross-sectional area of the void in pixels. A conversion value to millimetres squared is given by a variable generated when the CT scan volume is created, Equation 1. Furthermore, the percentage of a given volume in the propellant grain that is made up of defect is a critical measure for analysts. It supports failure modelling, and is a key differentiator for the effects of aging on the propellant. It is computed using the pixel areas of the segmented defects, Equation 2.
Equation 1: Actual size of void.
cross-sectional area (mm²) = pixel area × (2D pixel size (mm))²
Equation 2: Equation to calculate the percentage of volume made up by void.
defect volume (%) = (Σ defect pixel area / Σ propellant grain pixel area) × 100
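The area conversion and defect-percentage calculations translate directly into code. The function and parameter names below are illustrative, and the defect-percentage form is inferred from the surrounding text:

```python
def void_area_mm2(pixel_area, pixel_size_mm):
    """Equation 1: convert a segmented pixel area to mm² using the scan's 2D pixel size."""
    return pixel_area * pixel_size_mm ** 2

def defect_percentage(defect_pixel_areas, grain_pixel_area):
    """Equation 2: percentage of propellant grain area/volume occupied by defects."""
    return 100.0 * sum(defect_pixel_areas) / grain_pixel_area
```

For example, a 100-pixel void at a 0.5 mm pixel size covers 25 mm², and 100 defect pixels inside 10,000 grain pixels give a 1% defect fraction.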

Figure 4: Defect segmentation process.
Three-dimensional visualisation
On collation of the masks and other associated defect metrics for each ‘slice’ of the CT scan, a three-dimensional digital twin of the defects present in the SRM can be generated. The binary image volume masks of defects are converted into polygon meshes, comprising vertices and triangular faces, Figure 5. This is achieved with the marching cubes algorithm (Lorensen and Cline 1987), a popular algorithm for extracting the polygonal mesh of an isosurface from a three-dimensional scalar field.

Figure 5: The marching cubes algorithm is used to transform a series of binary masks into a polygon mesh.
Once all the defects present in the scan volume have been mapped to a polygon mesh, they are collated to form a visualisation of all defects present in the volume, Figure 6. Three-dimensional graphics are visualised with the ‘PyVista’ Python package (Sullivan and Kaszynski 2019). Generated graphics contain interactive features. Thus far, these include object rotation and zoom, ability to select individual defects to view characteristics, and a slider to allow examination of cross-sectional slices in the volume. Users may also toggle on and off optional features, such as defect colour coding, and SRM component overlays.

Figure 6: Three-dimensional visualisation of the voids in an SRM. The visualisation is interactive, facilitating rotation and zoom. Voids are selectable, with a variety of metrics given on selection. This example includes values for surface area, volume, width, height, depth and CT slice location. Additionally, voids can be colour coded based on characteristic, and SRM structural overlays can be toggled on or off.
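Before meshing, the per-slice masks must be collated into a labelled three-dimensional volume so that voxels belonging to the same defect are grouped across slices. A minimal sketch of that collation step, using SciPy's connected-component labelling; the voxel-volume parameter is an assumption, and the marching cubes meshing itself (e.g. via scikit-image or PyVista) is not shown:

```python
import numpy as np
from scipy import ndimage

def collate_masks(masks, voxel_mm3=1.0):
    """Stack per-slice binary masks into a volume and label 3-D connected defects.

    Returns the labelled volume and each defect's volume in mm³."""
    volume = np.stack(masks, axis=0)           # shape: (slices, height, width)
    labels, n_defects = ndimage.label(volume)  # default 6-connectivity links slices
    voxel_counts = ndimage.sum(volume, labels, index=range(1, n_defects + 1))
    return labels, voxel_counts * voxel_mm3
```

Each labelled component can then be meshed independently, which is what makes individual defects selectable in the visualisation.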
Results
Classification accuracy
This section discusses the tests executed to evaluate the performance of the defect detection algorithm. Performance was evaluated for three defect classes, and compares ground truth bounding boxes to those predicted by the CNN. The accuracy metrics used to evaluate the performance of the CNN in this study are Intersection over Union (IoU) and mean Average Precision (mAP).
For the validation dataset, the CNN achieved a weighted average IoU score of 0.74, and a weighted mAP score of 0.84 across all three classes. Figure 7 illustrates the mAP and IoU scores for each class. Although the model performs best when detecting and classifying voids, the training dataset contains roughly 78 times as many void examples as cracks. Thus, given a supplemented and more diverse dataset, it is proposed that non-void instances could potentially be detected with the same accuracy as voids.

Figure 7: Class accuracy for the validation dataset (IoU & mAP).
The confusion matrix, Figure 8, provides further insight into the model’s performance over the validation dataset. Although the dataset was heavily skewed toward voids, the number of void false positives is relatively small. This suggests that the skew did not significantly impact model classification. From Figure 8 it can be seen that there was misclassification of cracks as voids. This result is likely due to human ambiguity between a void and a crack in the labelling process; this is discussed further below, see Labelling agreeability.

Figure 8: Confusion matrix for validation dataset.
Defect segmentation
Following ROI detection and classification, defects are automatically segmented and quantified. For a number of validation frames, the area of propellant grain and area of defect were measured manually using an image analysis software. This was compared with areas estimated by the machine learning process. It was found that there was a difference of ~1% in measurement of propellant grain area, and a difference of ~13% in measurement of defect area. It can be seen that the propellant grain was easily separable, while defects were more challenging. This is likely due to the large variation in densities between propellant and conduit, and propellant, liner and casing. These are reflected as high contrast edges in the ICT imagery. In comparison, some smaller and emergent voids were not as contrasting, Figure 9. Additionally, smaller voids suffered from indistinguishability with noise in the imagery, leading to less accurate contour computation.

Figure 9: Segmentation results. Left: Low contrast voids. Right: High contrast voids.
Labelling agreeability
Labelling sufficient imagery to train a robust machine learning model is a time-consuming and labour-intensive process. Owing to the current shortage of energetics SMEs, their availability is limited. Thus, the labelling for this research was conducted by a non-SME team, with SME validation on a small subset of frames. The labelling team consisted of four separate labellers.
As part of an investigation into the performance of the machine learning model, a study was conducted to investigate the agreeability within the labelling team, between the SMEs, and between the labelling team and the SMEs. Thirty images, containing a representative variety of defects, were selected to be labelled by the team and two energetics SMEs. The images were labelled blind; labellers had no visibility of how the others had labelled the images. To account for differences in drawn bounding box size, agreeability was calculated using the centre of mass of each box drawn. Percent agreeability for each defect class between labelling groups is shown in Table 2.
Table 2: Agreeability (%) for each defect compared between labelling groups.
| | Void | Crack | Class C | Weighted mean |
| Team vs SMEs | 68.20 | 15.00 | 83.33 | 65.85 |
| Team vs Team | 72.53 | 19.44 | 83.33 | 70.79 |
| SME vs SME | 90.57 | 87.50 | 100.00 | 91.60 |
It can be seen that the SMEs were most agreeable with each other, with an average agreeability of ~91%. The labelling team was less agreeable with the SMEs, with an average agreeability of ~66%, although this still represents a good agreeability score. The team had strong agreeability among themselves, with an average agreement of ~70%. All comparator groups were most agreeable in identifying Class C; this is likely due to its consistently high contrast with propellant grain in comparison to the other defects. Furthermore, all comparator groups were least agreeable when identifying cracks. This is likely due to the obscurity of hairline cracks, especially to the untrained eye, Figure 10a. Additionally, there is some ambiguity at the border between a defect being a crack or a void, Figure 10b.
Disagreement within the labelling team on the classification and existence of defects provides the machine learning model with conflicting feature information, and is likely to negatively impact model performance. This is evident in the model’s results. It could be hypothesised that the model would handle cracks more successfully if label agreeability were improved. It can also be seen that while the experts had very strong agreeability in their labelling, their agreement was not perfect. This highlights the subjectivity involved in analysing an SRM for defects, emphasising the need for an objective characterisation of defects.

Figure 10: Examples of poorly labelled defects. Left-to-right: a) Hairline cracks, often missed by the labelling team. b) A defect identified as both a void and a crack by the labelling team.
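The centre-of-mass matching used to score agreeability between labelling groups can be sketched as follows. This is an illustrative reconstruction: the pixel tolerance `tol` is an assumed parameter, not stated in the paper.

```python
def centre(box):
    """Centre of mass of an axis-aligned bounding box (x1, y1, x2, y2)."""
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def agreeability(boxes_a, boxes_b, tol=10.0):
    """Percentage of boxes in A whose centre lies within `tol` pixels of a
    box centre in B, ignoring differences in drawn box size."""
    if not boxes_a:
        return 0.0
    hits = 0
    for a in boxes_a:
        ca = centre(a)
        for b in boxes_b:
            cb = centre(b)
            if ((ca[0] - cb[0]) ** 2 + (ca[1] - cb[1]) ** 2) ** 0.5 <= tol:
                hits += 1
                break
    return 100.0 * hits / len(boxes_a)
```

Using centres rather than IoU means two labellers who mark the same defect with differently sized boxes still count as agreeing.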
Analysis speed-up
In the real-world application of the machine learning process proposed in this report, the time taken to process each ICT scan volume is an essential metric to consider. By reducing the time taken to catalogue defects in a scan volume, major efficiency gains can be realised. The time taken to analyse each frame in the scan volume can be measured using two metrics: the time taken to detect and classify a defect with a bounding box, and the time taken to find the bounding box, then segment and analyse the defect (compute area, location, major and minor axes). The timing analysis was undertaken on a sample with a larger than average number of defects, due to the additional manufactured defects planted within the propellant grain. The machine learning process averaged 0.23 seconds per image to classify and draw a bounding box, and 0.80 seconds per image for segmentation and quantification, Figure 11. Thus, a typical scan volume containing ~2,000 images could be analysed in ~26 minutes using the proposed machine learning tool. By comparison, it would take a human analyst a number of days to analyse the same scan volume.

Figure 11: Time taken for images to be analysed by the machine learning process.
Conclusion
This paper demonstrates the potential of machine learning to enable the detection and characterisation of defects in energetic materials. It highlights the ability of machine learning to effectively automate the detection and characterisation process, with this model reducing the time taken from a human effort of days to a machine effort of minutes. The implementation of this process enables energetics SMEs to focus on more valuable tasks, as the number of images requiring human analysis is reduced. This is particularly beneficial given the current shortage of energetics SMEs.
Implementing machine learning to identify, characterise and visualise defects present in an SRM has achieved very promising results; however, there are limitations to this study that should be considered. Foremost, the training data has a large imbalance, with a skew towards voids. Although this may be representative of the proportions of defects within an SRM, a more diverse dataset could improve sensitivity towards non-void classes. Additionally, this implementation of Faster R-CNN does not maintain temporal information between slices. Due to the organic nature of the formation of cracks and voids, the addition of a memory component to the CNN may increase prediction robustness where serial frames include defects.
An additional limitation is the variation in ground truth labelling. The process of manually labelling the defects is subjective, and although there was very good agreeability between non-SME labellers, only good agreeability was achieved on average with the SMEs. Further time spent with SMEs could increase the labelling team’s knowledge of the distinguishing features of defects, avoiding misclassifications in further labelling tasks. Additionally, ground truth labelling is a labour-intensive task, thus a semi-supervised approach may be more suitable for future development.
A more dynamic approach to adjusting the pixel intensity threshold during binarisation holds the potential to improve the detectability and distinguishability of the smaller and emergent voids. This could improve the model’s performance when detecting more subtle variations in the contrast between the void and propellant grain. Furthermore, this paper does not address the critical debonding defect that can occur in SRMs. Future work in this area will incorporate detection of liner and casing debond into the machine learning process.
This work proposes a machine learning model to detect and visualise defects as a query-able three-dimensional volume. Analysis of the defects in relation to each other, and their location in the sample provide valuable information to SMEs regarding the structural integrity and characteristics of the energetic material. Furthermore, a digital model of the defects in an SRM offers a strong foundation to support future modelling and simulation analysis. This is particularly applicable in support of the quantification and predictive modelling of the effects of ageing, flight, and other exposures that may lead to defect formation in an SRM throughout its lifetime. In future research, it is expected that this machine learning method can be applied to defect detection to aid in the efficient analysis of varying types of solid propellant rocket motors.
Acknowledgements
The authors would like to acknowledge the energetics SMEs Ryan Phillips and Sarah Elliot at QinetiQ’s Environmental Test Centre for their specialist input into this project. The authors would also like to acknowledge QinetiQ’s materials SMEs for their insight into this project and the data science team at QinetiQ Farnborough for data labelling and code development. The surrogate rocket motor was manufactured by the University of Warwick.
References
Ali, R, J Chuah, M Talip, N Mokhtar, and M Shoaib. 2022. “Structural crack detection using deep convolutional neural networks.” Automation in construction.
Alnajjar, Fady, L Ali, Hamad Al Jassmi, Munkhjargal Gochoo, Wasif Khan, and M. Adel Serhani. 2021. “Performance evaluation of deep CNN-based crack detection and localization techniques for concrete structures.” Sensors 1-22.
Bradski, G. 2000. “The OpenCV Library.” Journal of Software Tools.
Fan, J, and F Tan. 2013. “Analysis of major defects and nondestructive testing methods for solid rocket motor.” AMM.
Good, IJ. 1952. “Rational Decisions.” Journal of the Royal Statistical Society 107-114.
Harris, C. 1963. Development of nondestructive testing techniques for large solid-propellant rocket motors. United States Air Force.
Hendel, Frank J. 1965. “Review of Solid Propellants for Space Exploration.” National Aeronautics and Space Administration.
Jha, S, and R Babiceanu. 2023. “Deep CNN-based visual defect detection: Survey of current literature.” Computers in Industry.
Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2017. “ImageNet classification with deep convolutional neural networks.” Communications of the ACM 84-90.
Li, L, J Ren, P Wang, H Gao, M Sun, B Sha, Z Lu, and X Li. 2023. “A pixel-level weak supervision segmentation method for typical defect images in X-ray inspection of solid rocket motors combustion chamber.” Measurement.
Lin, Tsung-Yi, P Goyal, R Girshick, K He, and P Dollár. 2017. “Focal Loss for Dense Object Detection.” IEEE International Conference on Computer Vision (ICCV) 2999-3007.
Liu, D, and L Sun. 2021. “Defect Diagnosis in Solid Rocket Motors Using Sensors and Deep Learning Networks.” AIAA.
Liu, W, D Anguelov, D Erhan, C Szegedy, S Reed, C-Y Fu, and AC Berg. 2015. “SSD: Single Shot MultiBox Detector.” Computer Vision and Pattern Recognition.
Lorensen, William E., and Harvey E Cline. 1987. “Marching cubes: A high resolution 3D surface construction algorithm.” ACM SIGGRAPH Computer Graphics 163-169.
Ma, N, R Fan, and L Xie. 2024. “UP-CrackNet: Unsupervised Pixel-Wise Road Crack Detection via Adversarial Image Restoration.” IEEE Transactions on Intelligent Transportation Systems.
Machado, J, J Tavares, P Camanho, and N Correia. 2022. “Automatic void content assessment of composite laminates using a machine-learning approach.” Composite Structures.
Martin, Patrick J. 1972. Failure Analysis of Solid Rocket Apogee Motors. Stanford Research Institute.
McDonald, Allan J. 2010. “Solid Rocket Motor Failure.” ASCE.
Moirangthem, M, TR Singh, Singh, and Th T. 2021. “Image classification and retrieval framework for Brain Tumour Detection using CNN on ROI segmented MRI images.” IEEE.
Nickolls, J, I Buck, M Garland, and K Skadron. 2008. “Scalable Parallel Programming with CUDA.” Queue 40-53.
O’Mahony, Niall, Sean Campbell, Anderson Carvalho, Suman Harapanahalli, Gustavo Velasco Hernandez, Lenka Krpalkova, Daniel Riordan, and Joseph Walsh. 2019. “Deep Learning vs. Traditional Computer Vision.” Advances in Computer Vision.
Otsu, Nobuyuki. 1975. “A Threshold Selection Method for Gray-Level Histograms.” IEEE Transactions on Systems, Man, and Cybernetics 62-66.
Remakanthan, S. 2014. “Analysis of Defects In Solid Rocket Motors Using X-Ray Radiography.” NDE.
Ren, S, K He, R Girshick, and J Sun. 2015. “Faster R-CNN: towards real-time object detection with region proposal networks.” IEEE Transactions on Pattern Analysis and Machine Intelligence 1137-1149.
Schadlow, Nadia, B Helwig, B Clark, and TA Walton. 2022. Rocket’s Red Glare: Modernizing America’s Energetics Enterprise. Hudson Institute.
Silva, W, and D Lucena. 2018. “Concrete Cracks Detection Based on Deep Learning Image Classification.” MDPI.
Sojourner, Timothy. 2015. “Solid Rocket Motor Reliability and Historical Failure Modes Review.” AIAA.
Sullivan, B, and A Kaszynski. 2019. “PyVista: 3D plotting and mesh analysis through a streamlined interface for the Visualization Toolkit.” The Open Journal 1450.
Wei, Y, Z Wei, W Yao, C Wang, and Y Hony. 2021. “Automated detection and segmentation of concrete air voids using zero-angle light source and deep learning.” Automation in Construction.
Yan, J, J Li, B Luo, and C Zhang. 2022. “A Deep Detection Model based on Multi-task Learning for Appearance Defect of Solid Propellants.” Highlights in Science, Engineering and Technology.
Yildirim, HC, and S Ozupek. 2011. “Structural assessment of a solid propellant rocket motor: Effects of aging and damage.” Aerospace Science and Technology 635-641.
Zanddizari, Hadi, Nam Nguyen, Behnam Zeinali, and J. Morris Chang. 2021. “A new preprocessing approach to improve the performance of CNN-based skin lesion classification.” Medical and Biological Engineering and Computing.
Author Biographies
Christophe Harvey is a Data Science Team Lead at QinetiQ having joined the organisation in 2001. In 2012 he moved into the T&E domain and has been working with colleagues in the organisation to increase adoption of the use of ML and AI techniques to the processing and analysis of data derived from T&E activities.
Lucy Green is a Data Scientist at QinetiQ. She has an interest in image analysis and explainability of deep learning models. Her background is in Computer Science in which she obtained a Bachelor’s degree (BSc) from the University of Bath.
Rebecca Jones is a Data Scientist at QinetiQ. Her background is in Mathematics in which she obtained a Master’s degree (MMath) from Cardiff University. The main areas of her work are in computer vision and natural language processing.
Alexander Milroy is a Data Scientist. He holds a PhD from the University of Bristol (Department of Engineering Mathematics) and an MSc in Nuclear Physics (University of Liverpool). He currently works at QinetiQ but has previously worked at the ONS and UKHSA during the COVID-19 outbreak.
Dewey Classification: L 681 12


