JUNE 2025 | Volume 46, Issue 2

Adaptive Algorithms for LIDAR Semantic Segmentation on Edge Devices

Billy Geerhart III

DEVCOM US Army Research Laboratory
Aberdeen Proving Ground, United States

Shane Muller

DEVCOM US Army Research Laboratory
Aberdeen Proving Ground, United States

Venkateswara Dasari

DEVCOM US Army Research Laboratory
Aberdeen Proving Ground, United States

David Alexander

DEVCOM US Army Research Laboratory
Aberdeen Proving Ground, United States

Peng Wang

DEVCOM US Army Research Laboratory
Aberdeen Proving Ground, United States

DOI: 10.61278/itea.46.2.1006

Abstract

The Label-Diffusion-LIDAR-Segmentation (LDLS) algorithm augments image-based LIDAR segmentation by mapping the segmentation of a 2D RGB image onto LIDAR points. The use of matrix calculations during the mapping process reduces noise, allowing the algorithm to reliably infer environmental categories. Our previous work optimized the algorithm to achieve a 3x speedup, enabling the use of LDLS for real-time inference of LIDAR and image data. Our current work aims to optimize LDLS for dynamic execution environments by implementing adaptive algorithms. The adaptive algorithms were built by creating three new hyperparameters – image size, LIDAR resolution, and k-nearest-neighbors (KNN) – from the existing hard-coded network structure. These parameters can be dynamically customized to reduce the computational load while minimizing the impact on algorithm accuracy. In this work we deploy the adaptive LDLS alongside the original LDLS while measuring the change in performance due to modifications of the newly added hyperparameters.

Keywords: LIDAR segmentation, hyperparameter, optimization, adaptive algorithms

1. Introduction

Computer algorithms based on deep neural networks (DNNs) are widely used for navigation tasks deployed on autonomous vehicles. These algorithms can be computationally complex, which poses a problem for edge devices that are often constrained by size, weight, and power (SWaP) requirements. To run multiple DNNs at the same time, optimizations are needed that reduce the computational complexity of the algorithms while attempting to maintain sufficient accuracy for the tasks. Given this, the goal of our research is to increase the functionality of algorithms deployed on autonomous robots while also augmenting those algorithms to increase their reliability.

Our past work on increasing the functionality of robotic platforms included optimization of DNNs via pruning [1, 2], quantization [3], and knowledge distillation. Our most successful optimization efforts focused on quantization by building a framework around Tensor-RT. The framework was built to circumvent the limitations of Tensor-RT by grouping and quantizing as many supported layers as possible while ignoring unsupported layers. Before the framework was built, we had to hand-optimize the algorithms by first benchmarking to find the most compute-intensive operations and then quantizing those operations using Tensor-RT. This approach produced the greatest reduction in compute cost for a given amount of work hours, but the work was tedious to such an extent that we built a framework to perform the same optimization. Optimizing a single operation by hand took around a day’s worth of work, as it requires benchmarking, quantization, and then validation; all of this had to be done manually by modifying source code directly. With the framework, the same optimization of a single operation takes only a couple of minutes, and it leaves the source code on disk untouched, since it modifies only the code stored in RAM. We have applied the framework to a DNN containing as many as 590 operations, which would have taken a human well over a month to optimize manually. By comparison, the framework took just a single night to quantize the network, after which the quantized algorithm could be loaded in less than a minute.

Our past work on increasing the reliability of robotic platforms included deploying the Label-Diffusion-LIDAR-Segmentation (LDLS) algorithm. LDLS performs LIDAR segmentation by mapping the output of an instance segmentation algorithm – in this case Mask-RCNN – onto a point cloud, and reduces noise via label diffusion. The label diffusion process includes both a pixel-to-point interaction and a point-to-point interaction, represented by an iterative matrix multiplication. Initially the pixel-to-point interaction allows labels to flow from the pixels into the LIDAR point cloud, but eventually the point-to-point interaction dominates over the pixel-to-point interaction. Although LDLS uses both interactions to create a LIDAR segmentation, any LIDAR segmentation algorithm can replace the pixel-to-point interaction while the label diffusion from LDLS is retained for the point-to-point interaction. The reason for retaining the label diffusion is to reduce high-frequency noise in the original LIDAR segmentation; this is useful because, in practice, we have found that LIDAR segmentations will generally produce at least one mislabeled point that is considered impassable. By using label diffusion we can smooth out those mislabeled points, making the robot more stable when choosing a path to travel.

Distribution Statement A. Approved for public release: unlimited distribution

Our current work on increasing the functionality and reliability of robots includes optimizing LDLS using the Virtuoso method. Virtuoso is an object detection and tracking algorithm that introduced hyperparameters that can be adjusted based on accuracy or latency requirements. For every set of hyperparameters there is a corresponding latency and accuracy, and one would generally choose a particular latency requirement as a filter and then select the configuration that gives the best accuracy. The mapping from hyperparameters to accuracy and latency is usually gathered before deployment through a benchmark of every configuration, but the data can be scaled depending on the situation. For example, during high-contention scenarios the expected latency can be scaled based on the actual latency of the chosen algorithm, and when deployed on different hardware than the original benchmark, the expected latency from the original dataset can be scaled until new data is gathered on the new hardware. Following the Virtuoso method, we want to add hyperparameters to the deployment of LDLS to allow for deployment in either high-contention scenarios or on heterogeneous hardware while maintaining a reasonable latency requirement for every platform.

2. Related Work

In machine learning, a key distinction exists between parameter optimization and hyperparameter optimization. Parameter optimization refers to adjusting internal model parameters, such as the weights of a neural network, during training via methods like stochastic gradient descent (SGD) [4] or Adam [5]. In contrast, hyperparameter optimization concerns the external settings that govern the learning process itself, including learning rates, regularization strengths, and architectural choices. Unlike model parameters, hyperparameters are not learned during training and must instead be selected through a search over possible configurations, often without access to gradient information. This makes hyperparameter optimization inherently more challenging, particularly when the search space is high-dimensional, noisy, or computationally expensive to evaluate. Traditional approaches such as grid search and random search [6] provide simple baselines but are generally inefficient for complex models and large datasets.

More sophisticated strategies have been developed to overcome the limitations of exhaustive search methods. Bayesian optimization techniques [7-9] model the objective function probabilistically, enabling more efficient exploration of the hyperparameter space by focusing evaluations on regions likely to yield improvements. Evolutionary algorithms [10] offer an alternative, simulating natural selection processes to evolve better hyperparameter configurations over successive generations without relying on gradient information. In addition, recent work has extended hyperparameter optimization to multi-objective settings, where trade-offs between competing performance metrics must be carefully balanced [11, 12]. These advanced methods are particularly valuable when optimizing complex models under constraints of computational cost and noise, challenges that motivate the hybrid search strategy developed in this work.

In this work, we propose a hybrid search strategy that addresses these challenges by combining a limited brute-force exploration with localized fine-tuning near the most promising candidates. While our approach shares similarities with Bayesian optimization in its goal of focusing computational resources on promising regions of the search space, it deliberately avoids constructing a full probabilistic model of the objective function. Instead, by systematically exploring the parameter space at a coarse resolution and refining around multiple high-performing candidates, our method is better suited for identifying families of acceptable solutions rather than a single global optimum. This structure is particularly advantageous in applications where a continuum of solutions along a low-dimensional manifold is desirable, and where exhaustive modeling would be unnecessarily complex or computationally prohibitive.

3. Methods

Figure 1. Overview of LIDAR segmentation via label diffusion

3.1 Original LDLS

The original LDLS algorithm constructs a matrix G that is applied repeatedly to a label state vector until the argmax of the output converges. The matrix G is built from two sub-matrices that represent the flow of labels when the matrix is applied multiple times. The first is G2d→3d, which represents the flow of labels from pixels in the image segmentation to the LIDAR points. G2d→3d, as defined in equation 5, averages the labels flowing into a LIDAR point by projecting the LIDAR point into the image segmentation and using only a small neighborhood of pixels near the projected point. The second is G3d→3d, which represents the flow of labels from one LIDAR point to nearby LIDAR points; G3d→3d, as defined in equation 6, limits the flow of labels to neighbors by using only the k-nearest neighbors and a distance-based weight. Once the matrices for the pixel-to-point and point-to-point interactions are defined, we can construct the much larger matrix G in equation 3 and a corresponding row-normalized matrix Ḡ in equation 4. With Ḡ defined, we apply matrix multiplication until convergence as in equation 2, using the definition of the state vector in equation 1; the state vector is composed of the LIDAR labels and the pixel labels. The pixel labels are defined by the image segmentation and do not change over time, while the LIDAR labels are initialized to zero.
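The referenced equations belong to the original LDLS formulation and are not reproduced in this text; as a sketch of their overall shape (the block structure and row normalization here are our reading of the description above, not a verbatim reproduction), the diffusion can be written as:

```latex
% State vector: fixed pixel labels stacked on the evolving LIDAR labels
\ell^{(t)} =
\begin{bmatrix} \ell_{2d} \\ \ell_{3d}^{(t)} \end{bmatrix},
\qquad
G =
\begin{bmatrix}
I & 0 \\
G_{2d \to 3d} & G_{3d \to 3d}
\end{bmatrix}

% Row normalization and the diffusion iteration, applied until the
% argmax over the LIDAR labels converges
\bar{G}_{ij} = \frac{G_{ij}}{\sum_{k} G_{ik}},
\qquad
\ell^{(t+1)} = \bar{G}\,\ell^{(t)}
```

The identity block keeps the pixel labels fixed while labels diffuse into and between the LIDAR points, matching the behavior described above.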

3.2 Hyperparameter Optimization of LDLS

Our implementation of hyperparameter optimization on the existing LDLS algorithm uses a combination of brute force, 5D interpolation, and dynamic resolution. There are 5 hyperparameters that need to be optimized, which means the parameter space is relatively large. Using just 5 samples per dimension results in 3125 configurations that need to be tested, and each configuration takes about 2 minutes to test, requiring roughly 4 days even at that limited resolution. Once the limited-resolution brute force sampling is done, a 5D interpolation is computed to fill in missing information before the brute force hyperparameter optimization is applied. At this point the proposed solution is merely a guess, as it relies on an interpolation of a limited-resolution sample, so dynamic resolution is implemented by sampling the parameter space a second time near the proposed solution. Once the new data is collected, the final proposed solution is re-computed using the brute force method on the raw data.

Figure 2. Schematic of LDLS with hyperparameters added to allow optimization via Virtuoso

3.2.1 Hyperparameterization of LDLS

The first step of hyperparameter optimization is to add hyperparameters to the model. LDLS with the hyperparameters is outlined in figure 2. The algorithm requires an image and corresponding LIDAR point cloud as input. The LIDAR point cloud is randomly divided into two different point clouds according to the LIDAR Ratio hyperparameter – Point Cloud A and Point Cloud B. Separately, the input image is upscaled or downscaled via the Image Ratio hyperparameter and then passed through ICNet to segment all of the pixels. The resulting segmented image is then diffused into Point Cloud A based on the Number of KNNs and Max Iterations hyperparameters. Lastly, the diffusion process is repeated with the resulting point cloud and the aforementioned Point Cloud B according to the Number of KNNs – Post Processing hyperparameter.

  • Image Ratio: The factor by which the input image is downscaled. Range: 0.08 – 1.0, 7 samples.
  • LIDAR Ratio: The ratio of LIDAR points split between the two diffusion steps – Image + LIDAR diffusion and LIDAR + LIDAR diffusion. Range: 0.2 – 1.0, 6 samples.
  • Number of KNNs: The number of adjacent points (k-nearest neighbors) a given point uses for diffusion during the Image + LIDAR diffusion step. Range: 1 – 24, 5 samples.
  • Number of KNNs – Post Processing: The number of adjacent points a given point uses for diffusion during the LIDAR + LIDAR (post-processing) diffusion step. Range: 1 – 8, 4 samples.
  • Max Iterations: The number of times the diffusion steps are repeated. Range: 10 – 100, 5 samples.
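As a concrete illustration of the parameter space, the grid implied by the list above can be enumerated directly. Only the ranges and sample counts are taken from the list; the evenly spaced sampling and the variable names are our assumption:

```python
from itertools import product

import numpy as np

# Coarse sampling grid for the five LDLS hyperparameters. Ranges and sample
# counts follow the list above; even spacing is assumed for illustration.
grid = {
    "image_ratio":    np.linspace(0.08, 1.0, 7),
    "lidar_ratio":    np.linspace(0.2, 1.0, 6),
    "num_knn":        np.linspace(1, 24, 5, dtype=int),
    "num_knn_post":   np.linspace(1, 8, 4, dtype=int),
    "max_iterations": np.linspace(10, 100, 5, dtype=int),
}

# Every permutation of the sampled values is one configuration to benchmark.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs))  # 7 * 6 * 5 * 4 * 5 = 4200 permutations
```

Each configuration in `configs` corresponds to one benchmark run in the limited-resolution data collection of section 3.2.2.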

3.2.2 Limited Resolution Data Collection

The second step of hyperparameter optimization on LDLS is data collection; this consists of establishing full range and step values for each hyperparameter and then running all permutations of the hyperparameter values to evaluate inference time and accuracy. For our application, accuracy is defined as how closely the resulting segmentation matches the baseline result. Each hyperparameter has a predetermined range (see list above) resulting in 4200 total permutations to evaluate. Once the data was collected, we were able to analyze it to understand the impact each hyperparameter has on accuracy and inference time and the most optimal ranges for each hyperparameter.

3.2.3 Interpolation of 5D Data

Now that we have collected and analyzed the low-resolution raw hyperparameter data, the next step is to perform multi-dimensional linear interpolation to fill in the gaps and obtain finer-grained estimates. This allows us to approach a local maximum more accurately due to a lower minimum step size. Our interpolation methodology uses SciPy’s LinearNDInterpolator, which triangulates the N-dimensional data into non-intersecting adjacent simplices of N+1 points via Qhull and then performs linear barycentric interpolation within those simplices.
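A minimal sketch of this interpolation step follows. The 5-D points and the "response" values are synthetic stand-ins for the benchmarked configurations; only the use of LinearNDInterpolator mirrors our pipeline:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)

# Toy stand-in for the benchmarked configurations: 5-D sample points with a
# synthetic linear response (the real response comes from the benchmarks).
points = rng.uniform(0.0, 1.0, size=(500, 5))
values = points.sum(axis=1)

# Qhull triangulates the 5-D samples; queries inside the convex hull are
# answered by linear barycentric interpolation over the enclosing simplex.
interp = LinearNDInterpolator(points, values)

query = np.full((1, 5), 0.5)
print(interp(query))  # close to 2.5: linear responses are recovered exactly
```

Because barycentric interpolation is exact for linear functions, this toy example also serves as a sanity check of the method; queries outside the convex hull of the samples return NaN, which is why the coarse pass must cover the full hyperparameter ranges.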

3.2.4 Brute Force Hyperparameter Optimization

Once the interpolated data is gathered, we can use it to identify the optimal hyperparameter configuration that best fits the user’s latency requirement. To achieve this, we iterate across the range of Inference Time results in our interpolated dataset with a sample size of 100,000 and find the configuration with the highest accuracy that does not exceed the selected Inference Time via brute force searching. The result of performing this operation for all desired Inference Times is an Accuracy vs Inference Time curve where each point along the curve is an optimal hyperparameter configuration.
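The brute-force selection itself is simple to state in code. The following sketch uses synthetic predicted (latency, accuracy) pairs as a stand-in for the interpolated dataset; the function names and the synthetic response are our own:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy interpolated dataset: each index is a hyperparameter configuration with
# a predicted latency (ms) and accuracy. Real values come from section 3.2.3.
n = 100_000
latency = rng.uniform(10.0, 120.0, n)
accuracy = 1.0 - np.exp(-latency / 40.0) + rng.normal(0.0, 0.01, n)

def best_under_budget(budget_ms):
    """Brute-force step: best predicted accuracy whose latency fits the budget."""
    mask = latency <= budget_ms
    idx = np.flatnonzero(mask)[np.argmax(accuracy[mask])]
    return idx, accuracy[idx]

# Sweeping the latency budget traces out the accuracy-vs-inference-time curve.
curve = [(b, best_under_budget(b)[1]) for b in np.arange(20.0, 121.0, 10.0)]
```

Since raising the budget can only enlarge the feasible set, the resulting curve is non-decreasing in accuracy, which is the shape reported in section 4.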

3.2.5 Validation and Dynamic Resolution

To verify our methods, we ran all of the optimal hyperparameter configurations along the curve through the original raw data collection methodology to obtain the true accuracy and inference time for each configuration. Due to the low resolution of the original data, there is still the potential that the results do not accurately reflect the true maximum that we seek. Thus, we apply dynamic resolution by incorporating the new raw data into the original raw data and repeating the process of section 3.2.3. The aim is for the additional resolution around the local maximum to provide more insight into whether there are any greater maxima nearby. Ideally the process would stop when validation succeeds, but in practice it ends after only a few runs, with the final brute force optimization applied to just the raw data.
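A toy sketch of this validate-and-refine loop follows. The quadratic "benchmark" and the 2-D slice are stand-ins for the real 5-D benchmark; only the pattern of folding new raw data back into the old and resampling near the incumbent mirrors sections 3.2.3–3.2.5:

```python
import numpy as np

rng = np.random.default_rng(2)

def benchmark(x):
    """Stand-in for a real LDLS benchmark run (the expensive ground truth)."""
    return -np.sum((x - 0.37) ** 2, axis=-1)

# Coarse pass: limited-resolution sampling of a 2-D slice of the space.
samples = rng.uniform(0.0, 1.0, size=(25, 2))
scores = benchmark(samples)

# Dynamic resolution: resample near the incumbent best, merge the new raw
# data with the old, and repeat for a few rounds.
for _ in range(3):
    best = samples[np.argmax(scores)]
    local = np.clip(best + rng.normal(0.0, 0.05, size=(25, 2)), 0.0, 1.0)
    samples = np.vstack([samples, local])
    scores = np.concatenate([scores, benchmark(local)])

best = samples[np.argmax(scores)]
```

Because the merged dataset is a superset of the coarse pass, the incumbent can only improve; the limitation noted above is that resampling stays near one candidate, so distant maxima can still be missed.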

4. Results and Discussion

4.1 Single Hyperparameter Effect on LDLS

All hyperparameters have a predetermined range, step value, and base value. To analyze the individual effects of each hyperparameter, we selected data points where all hyperparameters except the one being analyzed are set to their base values (maximums) and the one being analyzed takes its entire range of values. This allows us to analyze the effect of each individual hyperparameter while all other variables are kept constant. The results were combined into the two graphs seen in Figure 3: one for Inference Time and one for Accuracy. It should be noted that the hyperparameter values have been normalized from 0.0 (minimum) to 1.0 (maximum) across their respective ranges.

Figure 3. Plots showing the effect on LDLS when changing each hyperparameter separately. The hyperparameters are normalized to be between 0.0 (0%) and 1.0 (100%) to fit on the same plot.

The Image Ratio hyperparameter had the greatest effect on both Inference Time and Accuracy, as seen in Figure 3, meaning it should serve as the primary hyperparameter to adjust when looking for Inference Time optimization. However, it is also important to analyze the ratio of Inference Time reduction to Accuracy loss. Even if the actual Inference Time reduction is not large, the reduction can be worthwhile provided the corresponding loss in Accuracy is also relatively low. This is best seen in Image Ratio and Num KNNs over the range from 1.0 (100%) down to 0.2 (20%) of their respective ranges, where the Accuracy loss is minor compared to the Inference Time reduction. On the other hand, there is a significantly larger loss in Accuracy in the bottom 0.2. Excluding Num KNNs Post Processing, the remaining hyperparameters show a similar pattern, but the increased Accuracy reduction in the bottom 0.2 is minimal compared to Image Ratio and Num KNNs.

4.2 Optimized Hyperparameters

The 5D data is distilled into the optimal configuration that gives the best accuracy for a particular latency requirement; this filters out most of the configurations and generates a 1D curve of optimal hyperparameters when plotted against the latency constraint (see figures 4 and 5). The optimized hyperparameter curves generated using just the raw data can be seen in figure 4. Here the raw data was generated using 5 passes of the hybrid brute force method with dynamic resolution. The dynamic resolution was created by interpolating the 5D data and finding potential optimal configurations, validating the proposed solution by benchmarking the configurations, and then combining the old raw data with the new validation data and repeating the process. We hoped that incorporating the validation data into the next iteration would be enough to explore the parameter space near the optimal solution, but figure 4 shows this did not work well, as there is significant noise for the Max Iterations, LIDAR Ratio, and KNN post-process hyperparameters. The noise could be caused by a weak correlation with latency, but figure 5, generated using the interpolated 5D data, shows improved correlations for the Max Iterations and LIDAR Ratio hyperparameters. Unfortunately the KNN post-process hyperparameter still shows a rather weak correlation, but this is most likely caused by its relatively low compute cost, as it is used in the equivalent of a single iteration of label diffusion to move labels from the processed point cloud to the unprocessed point cloud. Using the iterative brute force method with dynamic resolution, we were able to propose the optimal hyperparameters that provide the best accuracy for a given latency constraint.

The proposed optimal hyperparameters are derived from 5D interpolated data, so the proposed solutions require validation to check that the interpolations are valid. Validation of the interpolation can be done by benchmarking the LDLS algorithm at the optimal configuration for various latency requirements. The configuration for each latency requirement can be found by using the optimized hyperparameters in figure 5, where the predicted latency is the latency requirement and the predicted accuracy is taken from the top left plot of figure 5. Using the new validation benchmarks, we can then plot the observed accuracy/latency against the predicted accuracy/latency in figure 6. In the best case the observations should be exactly equal to the predictions, producing a straight line. The observed accuracy was generally close to the predicted accuracy, but the observed latencies were shifted upward by about 10 ms relative to the predicted latencies. Since the accuracy was generally as predicted, the latency shift should also shift the accuracy vs latency curve in figure 5 by about 10 ms for any data above 35 ms. Despite the shift, the optimal configurations should still be close enough for deployment on a UGV.

Figure 4. Optimized parameters via raw output

Figure 5. Optimized parameters via 5D interpolation

Figure 6. Validation of the optimized hyperparameters found via 5D interpolation. In the ideal case the observed accuracy and inference time should be exactly equal to the predictions (blue line).

4.3 Final Comparison: Single vs Optimal

Figure 7. Final comparison between changing only one hyperparameter vs changing all hyperparameters using the optimal solutions.

Lastly, we can compare the effects of single-parameter-optimizations from section 4.1 to the results obtained from the interpolated full-hyperparameter-optimization in section 4.2. In figure 7 we combine both plots from figure 3 to create a single Accuracy vs Inference Time plot which allows direct comparison with the existing Accuracy vs Inference Time curve from Figure 5. This allows us to assess how well the optimal configuration we have reached compares against how well a given configuration can perform if only a single hyperparameter were optimized. By analyzing this graph, we can see how strongly the optimal configuration mirrors the overall shape of the Image Ratio hyperparameter’s curve, indicating how much greater of an influence that single hyperparameter has on the overall performance of the model. The additional gap in Inference Time and Accuracy between the two curves can be attributed to the summation of the less-impactful hyperparameters which, while not as important as Image Ratio alone, still make a significant difference when combined. In other words, using the optimal hyperparameters results in greater Accuracy given the same Inference Time requirement than only optimizing a single hyperparameter.

5. Conclusions and Future Work

We have shown that the introduction of hyperparameters to LDLS can reduce the algorithm’s computational cost while still maintaining a reasonable accuracy. The LDLS algorithm already contained native hyperparameters such as image ratio, KNN, and max iterations. We introduced two more hyperparameters by first applying the LDLS algorithm to only a subset of the LIDAR points, and then performing a final label diffusion step combining both the processed and unprocessed LIDAR points. We attempted to optimize the hyperparameters for various latency requirements using a combination of a limited-resolution brute force method and dynamic resolution to decrease the compute cost compared to a standard brute force approach. The purpose of the dynamic resolution was to refine the search space along the curve of optimal configurations, but the validation of the interpolation showed that incorporating validation data in an iterative way was not adequate to identify a more ideal configuration. A possible solution would be a limited brute force search through all dimensions in more directions than just along the original optimal configuration curve. However, the original limited brute force method used roughly 5 samples per dimension; while the new search would be more focused around the original optimal solution, the total number of points to search is still limited by the hardware, meaning the new search will likely also use roughly 5 samples per dimension or fewer. Despite these problems, the final data can be shifted to account for the interpolation errors, and the optimal configurations can be used for deployment. The next step for the hyperparameter optimization would be to add a scheduler to the LDLS ROS node to modify the hyperparameter values at runtime based on available hardware resources and latency constraints.

For our future work we want to link the hyperparameters for each algorithm to particular tasks and then optimize the configuration based on the current task. The current paradigm is for each algorithm to be optimized for maximum throughput for a reasonable drop in accuracy, but we usually find the limited compute resources are overwhelmed leading to low throughput for all algorithms despite the optimizations. By grouping algorithms to tasks and then optimizing for the current task we hope to provide the UGV with greater adaptability allowing for efficient usage of the limited resources.

Acknowledgement

The research was supported by the U.S. Army AI for Maneuver and Mobility (AIMM) Essential Research Program (ERP) at the DEVCOM Army Research Laboratory.

References

[1] Geerhart III, B. E., Dasari, V. R., Wang, P., and Alexander, D. M., “Efficient normalization techniques to optimize AI models for deployment in tactical edge,” in [Disruptive Technologies in Information Sciences V], 11751, 53–57, SPIE (Apr. 2021).

[2] Dasari, V. R., Geerhart, B. E., Wang, P., and Alexander, D. M., “Deep neural network model optimizations for resource constrained tactical edge computing platforms,” in [Disruptive Technologies in Information Sciences V], 11751, 47–52, SPIE (Apr. 2021).

[3] Geerhart, B., Dasari, V. R., Rapp, B., Wang, P., Wang, J., and Payne, C. X., “Quantization to accelerate inference in multimodal 3d object detection,” in [Disruptive Technologies in Information Sciences VIII], 13058, 58–67, SPIE (June 2024).

[4] Robbins, H. and Monro, S., “A stochastic approximation method,” The Annals of Mathematical Statistics 22(3), 400–407 (1951).

[5] Kingma, D. P. and Ba, J., “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

[6] Bergstra, J. and Bengio, Y., “Random search for hyper-parameter optimization,” Journal of Machine Learning Research 13, 281–305 (2012).

[7] Snoek, J., Larochelle, H., and Adams, R. P., “Practical bayesian optimization of machine learning algorithms,” in [Advances in Neural Information Processing Systems], 25 (2012).

[8] Yu, T. and Zhu, H., “Hyper-parameter optimization: A review of algorithms and applications,” arXiv preprint arXiv:2003.05689 (2020).

[9] Bischl, B., Binder, M., Lang, M., Pielok, T., Rahnenfuehrer, J., Thomas, J., Weihs, C., and Zaefferer, M., “Hyperparameter optimization: Foundations, algorithms, best practices and open challenges,” arXiv preprint arXiv:2107.05847 (2021).

[10] Bäck, T., [Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms], Oxford University Press (1996).

[11] Karl, F., Waldhauser, C., and Zaefferer, M., “Multi-objective hyperparameter optimization in machine learning: An overview,” Artificial Intelligence Review (2022).

[12] Ilemobayo, T., Ojo, O., and Oyeleye, O., “Hyperparameter tuning in machine learning: A comprehensive review,” Journal of Engineering Research and Reports 26(6), 388–395 (2024).

Author Biographies

Billy Geerhart III is a research physicist at the US Army Research Laboratory, Aberdeen Proving Ground, MD. He received his MS in physics at Oregon State University with a focus on computational physics. His work at ARL includes quantum photonics, software defined networks, adaptive computing, machine learning model optimization and deployment on unmanned ground vehicles.

Shane Muller received a B.S. degree in Computer Science from the University of Delaware, United States in 2022. Having performed undergraduate AI research alongside a fitness-focused startup company, his current research interests include the deployment and optimization of advanced perception models onto autonomous robotics platforms for real time inference.

Dr. Venkat Dasari is the project lead for the Generalized Neural Network model optimization project at the DEVCOM Army Research Laboratory (ARL), primarily conducting research to develop model- and platform-agnostic neural network inference acceleration algorithms and architectures for resource-constrained edge computing platforms. Dr. Dasari led a team supporting the OSD-funded Future Autonomous Battlespace RF with Integrated Communications (FABRIC) program to provide communications in contested networks; his contributions include creating a simulator to validate network models and protocols and optimizing beam angles to resolve interference issues. During his tenure at the Department of Justice (2010-2012), he contributed to the functional enhancement of JUTNET (Justice Telecommunication Network) to support real-time applications like VoIP. Dr. Dasari received his Ph.D. in Immunology from Osmania University, India in 1993, and a Master’s degree in Computer Sciences from Temple University, Philadelphia in 2000.

David Alexander graduated in 2016 from the University of Delaware with a BS in Computer Science. He then joined the Army Research Lab to study computer networks through simulation, as well as developing visualizations of simulator data. He later began work on computational optimization, focusing on DNN algorithms used on autonomous systems.

Peng Wang received the M.S. degree in electrical engineering from North Carolina State University, Raleigh, in 2003, and the Ph.D. degree in electrical engineering from the University of Delaware, Newark, in 2009. His research interests include Deep Learning model optimization, edge computing, Deep Learning for blind signal classification, network optimization, System modeling and simulation, Cognitive Radio Networks, and Congestion control.

ISSN: 1054-0229, ISSN-L: 1054-0229
Dewey Classification: L 681 12
