Verification and Validation Methods - A Conversation with John Frederick | ITEA Journal

SEPTEMBER 2024 | Volume 45, Issue 3

Mr. John Frederick

Verification and Validation Methods:
Driving Solutions for the Evolution of the Future National Airspace System

Mr. Frederick is Manager of the Verification and Validation Strategies and Concepts Visualization Branch at the FAA’s William J. Hughes Technical Center.

A Conversation with Mr. John Frederick, Interviewed by J. Michael Barton, Ph.D., Parsons Corporation

Q:        Did you have any early influences on your career?

A:        My father was a big influence. He worked for IBM as a customer engineer, one of the original men-in-black guys who worked on mainframes, wore suits, and carried an IBM-issued briefcase. Everyone wore the same white shirt and the same dark tie. They kept all the big mainframes running at all the big companies in the Philadelphia area and they were heroes. They would come in on the night shift and keep the computers running. He was a big influence on me in terms of computers and technology. I remember him bringing home a fiber optic cable when I was little, showing me how you can shine a light at one end and it comes out the other end. He was also an influence on me in terms of creativity and resourcefulness, because you had to be resourceful and inventive to be a customer engineer and figure out how to get the computers up and running; some things just weren’t written down and you had to figure things out, improvise, and find a solution.

Q:        Your first job was an internship supporting test and evaluation for upgrading the computer system for the En Route Air Traffic Control (ATC) system of the National Airspace System (NAS). Did that internship determine your career path in T&E?

A:        When I was in high school, I was thinking about going into the arts and drawing. I was working on an art portfolio to go to art school. I was either going to do computers or art. Today you could choose both; back then they were two different careers. But my father was a big influence. I chose computers because it was more of a sure thing. Going into the arts wasn’t a sure thing and I wasn’t that confident in my ability to be a thriving artist. I went towards a computer-oriented career and computer sciences, but I always felt that I wasn’t happy unless I was bringing creativity to my work. I always felt that I emulated the creativity and resourcefulness that my father had. My first job at the Tech Center (the FAA William J. Hughes Technical Center) was working with IBM. They were working on mainframes, in competition with Sperry, for the NAS mainframe systems that run the En Route Control Centers. I managed the IBM test shots, helping in the laboratories with tests and demonstrations during the competition phase, and I really tried to make an impression and get my foot in the door. I even automated their scheduling program, which had been a manual process. That was the start, but I never thought that I wanted to go into testing. I didn’t even know testing was a thing.

Q:        How did you end up with a career in testing?

A:        I wanted to do coding, but initially my job involved testing; eventually I got into coding work on NAS software. In doing that, I also gained expertise in testing the software, going into the labs, creating my own SIM (simulation) scripts, and testing the different interfaces with all the NAS systems. I gained experience doing that and later became a test director in software development. I also gained a desire to do it because when I was coding, I felt disconnected from the user. When you’re that young, you’re just starting out, you’re sort of a coding factory. When you leave your computer, you go into labs on mid shift or third shift for your lab runs; you get the worst lab slots. I was anxious to have more connectivity to the air traffic controllers and the field users, the people who utilized our software tools and systems. So that was a motivation for me to be more directly connected to the field. Eventually I became an operational test director for operational testing and demonstrations, working on the advanced air traffic control system capabilities. I felt it was cool because the NAS is so complex. A lot of the coders specialize in working on specific elements of the NAS. Some people are really good at display services, some people work on flight data processing, and some people work on surveillance. I worked on conflict probes, and that work wound up evolving into the ability for controllers to do trial planning. They can probe a flight plan and determine whether to clear it or whether it will come into conflict with other flights. Based on the trajectory, the software uses different algorithms to determine whether an aircraft will come into conflict with another aircraft.

There is a software module that creates conflict alerts if aircraft break a certain separation standard. The software issues early alerts that a conflict may be coming and then eventually triggers a conflict alert. The system wants to maintain a virtual “hockey puck” around each aircraft, a volume of horizontal and vertical separation. If anything breaks into that zone, the conflict alert goes off. Also, as the performance of aircraft has changed, the algorithms had to change. As aircraft became higher performance, there were more nuisance alerts where the aircraft weren’t really in conflict; the performance of the aircraft was throwing the algorithms off because they showed an aircraft at one spot when it wasn’t actually going to be there. We had to recode the algorithms to stop these nuisance alerts. In testing that software, you can imagine having to write SIM scripts that create conflicts at different thresholds. The SIM scripts create these events to exercise the software. It was a good thing to work on because it was relevant to the new automation, in terms of providing greater automation to the controllers and being able to work on SIM scripts that operationally test different situations for the controllers.
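
To make the “hockey puck” idea concrete, here is a minimal sketch, in Python, of a pairwise separation check. It is not FAA code: the 5 NM and 1,000 ft thresholds, the flat-earth coordinate frame, and the AircraftState structure are illustrative assumptions, and an operational conflict probe works on predicted trajectories, geodesic distances, and aircraft performance models rather than instantaneous positions.

```python
# A minimal sketch (not FAA code) of the "hockey puck" separation check described above.
# Thresholds and flat-earth distance math are illustrative only.

from dataclasses import dataclass
import math

HORIZONTAL_SEP_NM = 5.0    # illustrative lateral separation limit
VERTICAL_SEP_FT = 1000.0   # illustrative vertical separation limit

@dataclass
class AircraftState:
    x_nm: float      # east position in nautical miles (local flat-earth frame)
    y_nm: float      # north position in nautical miles
    alt_ft: float    # altitude in feet

def separation_lost(a: AircraftState, b: AircraftState) -> bool:
    """Return True if aircraft b is inside aircraft a's 'hockey puck', i.e., both
    the horizontal and the vertical separation limits are violated at once."""
    horizontal_nm = math.hypot(a.x_nm - b.x_nm, a.y_nm - b.y_nm)
    vertical_ft = abs(a.alt_ft - b.alt_ft)
    return horizontal_nm < HORIZONTAL_SEP_NM and vertical_ft < VERTICAL_SEP_FT

if __name__ == "__main__":
    a = AircraftState(x_nm=0.0, y_nm=0.0, alt_ft=33000)
    b = AircraftState(x_nm=3.0, y_nm=2.0, alt_ft=33500)
    print(separation_lost(a, b))  # True: ~3.6 NM apart and within 1,000 ft vertically
```

A SIM script of the kind described above would, in effect, drive pairs of trajectories through thresholds like these to confirm that early alerts and conflict alerts fire when they should and stay quiet when they should not.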

Q:        We are seeing very rapid growth of autonomy in ground and air vehicles. FedEx, UPS, and Amazon among others are interested in autonomous air delivery. Sharing the airspace with autonomous vehicles must present a huge air traffic control challenge.

A:        Well, I think as the technology advances we’re redefining or refining what autonomy means and what an autonomous system or capability is. I mean, a washing machine is autonomous, but it has a narrow bound for making decisions. But I think some of the new, more exciting stuff that’s changing in the national airspace system is the new types of aircraft, new entrants that are coming in like electric vehicles, drone operations and autonomous drones or unmanned vehicles. Then there’s urban air mobility. The FAA is working with AFWERX (the innovation arm of the Air Force Research Laboratory exploring electric vehicles and advanced air mobility). The Air Force also has a program called Agility Prime (the Air Force vertical lift program that is partnering with the electric vertical takeoff and landing commercial industry) where they’re inviting industry to show them what they can do. The FAA is interested in electric vehicles, what they can do in urban settings, and how well they perform in that environment. They can’t be like helicopters; they can’t be noisy like a helicopter. They have to land a certain way near buildings and in airports. These companies are very motivated to make these things work. We know just from the aerodynamics that smaller vehicles are more sensitive to wind. If you’re in an urban canyon, gusts between buildings can spell disaster for a drone or UAV. So how do you design an engine or a capability that can fly within that environment, be quiet, hold a charge, and so on? I think we’re going in the right direction to look at short destinations, such as hopping from an urban facility back and forth to an airport, like here in Atlantic City. One scenario is to fly into the Atlantic City airport, walk to the top of the parking garage, and take an electric air mobility platform to the top of a casino. In this scenario, we are looking at air corridors for that. The number of air mobility providers and suppliers is really booming. Even commercial space is starting to thrive. We’re seeing more and more launches and there are new types of aircraft, such as the Virgin Galactic vehicle.

Q:        Do you see T&E as a key to safe operation and should it drive the technology as we bring in autonomous vehicles?

A:        I think it becomes more and more obvious to me, because there are things that the FAA wants to do in terms of changing the architecture of the NAS, moving to a different type of service-based architecture, and going to the cloud. There are other things we’re talking about doing that seem almost impossible to me without making testing, or verification and validation (V&V), burned into your practice, because we don’t know what we don’t know. The only way to address this is continual V&V. I’m looking at a sort of agile approach, not just agile in the formal process, but agile as an adjective in terms of being iterative, continuously iterating your whole acquisition. In the old way of doing acquisitions, we wrote these big requirements documents, we baselined them, then we wrote specs, and then we evolved the specs, very sequentially. Now we’re doing much more complex things; we’re working in some uncharted territory; it requires an evolving approach. My idea, the idea of continual V&V, is to ask what we can do to validate the concept before we move forward. Continual V&V puts the squeeze on the cone of uncertainty. This is really important in the early phases of acquisition, where waterfall-type programs try to lock in requirements at the point where you have the least knowledge. You need continuous feedback, and you need to build it into your process.

Systematic V&V will be critical and central to successfully and safely implementing complex, diverse concepts of operations for the National Airspace System. There will be diverse vehicles and new operations in the NAS that drive the need for new service providers and infrastructure enhancements. We will have unpiloted vehicles, vehicles coming from space, others operating low to the ground, and new types of takeoff, such as air-launched. Incorporating these new types of NAS users and their anticipated high number of operations requires alternative service methods beyond conventional Air Traffic Management, as shown in Figure 1.


Figure 1. A conceptual view of the future evolution of the National Airspace System

 

The DoD has the capability for continuous feedback, and I have been trying to instill it in my management in the FAA. But you can’t do it unless you prepare for it and have the right people to do it. You need people and resources and platforms to do it. We need to have the environments to test these new concepts and the tools and the people to do it as soon as we have a concept to test. It’s hard to convince leadership to invest in making sure the people are there to do it and making sure the necessary tools are available. But it’s absolutely necessary to do and it will save you money. In fact, you may not even be able to accomplish it at all without having these elements in place first.

I like to think about a simplistic analogy: building the pyramids. They could have started just building the pyramids and putting the big blocks into place. But if they didn’t invest in building the dirt ramp, they could never have gotten to the top and it probably took years to build that dirt ramp. In order to reach the higher pinnacles, you need to invest in the dirt ramp, otherwise some problems along the way may be insurmountable.

Right now, I’m emphasizing a standardized training program for testing, and I think that will also help us in retaining personnel. I want to establish a better training framework. We develop some in-house training, we use ITEA for the Design of Experiments training and the Fundamentals of T&E, and the FAA has courses. If you have smaller teams, they have to get up to speed quicker and you don’t have much overlap in skills. AFOTEC and other DoD organizations have well-established T&E training programs that we at the FAA should try to emulate. I want to look across the board, for the Agency and for our T&E practitioners, at what others are doing.

Q:        You were involved in V&V early on and became the FAA champion for it. How did that come about?

A:        Well, it started when I was a test director, and we were trying to write the test and evaluation guidelines. But there was no one responsible for doing it; it was just a consortium of people who shared time. There was no one group responsible for maintaining it; it was just supposed to happen among everyone. We were a consortium: everyone agreed we needed to write this, so everyone would break from their normal work and support it. I was always trying to drive it. There needs to be someone in place who will keep the fire burning while everyone else has their heads down doing their test programs. Someone must make sure that the processes reflect reality, that they’re relevant, and that they grow along with it. It’s almost impossible to do things the way you would like while your head is down doing a test program. I was always driven to try to form a group that was responsible for representing the test groups and representing a standard for how we test in the FAA. There was a period when we had different parts of the NAS, like the surveillance part, the communications part, the en route automation part, and the terminal automation part, and everyone specialized in doing their piece of the NAS and there were testers who specialized in doing them. Engineers and everyone had their own way of doing things. There wasn’t a common standard, and there wasn’t anyone overseeing it and trying to connect the dots to say, oh, we learned something over here, we learned something else over there. Our VP at the time was an ex-controller who noticed that the reports looked different, were not consistent, and had different levels of fidelity to them. That’s when I got management support to develop a common test and evaluation handbook that would standardize testing. There was a policy in place, but there wasn’t an institutionalized handbook that put the stake in the ground on how we operate. We wrote a white paper before we did the handbook, and we looked at how other agencies, including the DoD, did testing. Then we came up with something that met our standard practice. We developed the first version of the test and evaluation handbook for the Agency.

Another underlying thing that was always driving me was to elevate the role of T&E in the Agency. There was a system called AAS, the Advanced Automation System, which was a big undertaking by the Agency to change and advance the NAS. About a third of the way into it, AAS ran into problems and was cancelled. Test and evaluation took the blame because they were the messenger. That’s when the FAA broke away from the FARs (the Federal Acquisition Regulation) and created the AMS, the Acquisition Management System. The next step was to take away the requirement to do TEMPs (Test and Evaluation Master Plans); TEMPs were no longer required. I’ve made it my mission to put test back into play and I pushed to get TEMPs reinstated as a required acquisition product. Reinstituting the TEMP and requiring it at the early decision points of the lifecycle drove the need for test and evaluation, with the program manager reviewing the requirements and reviewing and developing a TEMP to support the investment decision. TEMPs were reinstituted recently, maybe within the past four or five years. My role is as a T&E stakeholder for the Joint Resource Council (JRC). Not only do we require TEMPs, but I also now report back to the JRC on whether the TEMP is viable and consistent with policy and other requirements. There is an FAA JRC checklist, and I verify the TEMP for the JRC Final Investment Decision. I am also on the other end; I support an In-Service Decision checklist. (The In-Service Decision checklist is used for acquisitions to go operational.) I certify or sign off on the checklist items to confirm that a program followed policy and reported out on the program in accordance with AMS. I help in the beginning and the end.

Q:        How did your involvement with ITEA begin?

A:        ITEA had a fairly strong chapter here at the FAA Tech Center, the South Jersey Chapter. They hosted one of the international symposiums in Atlantic City one year. I started getting involved, and when the local chapter president was looking to retire and hand off the baton, he handed it to me. I’ve always valued the insight and relationships that I’ve had with ITEA. It always gives me a great perspective and I always have something to bring back to the FAA. We had a speaker series hosted at the Tech Center. We aren’t allowed to serve on the Board of Directors, but we can serve as advisors. Since COVID, everything in the Chapter has slowed down; we’ve never really recovered.

Q:        How is the FAA addressing digital engineering or digital twins?

A:        We are exploring digital twins and digital environments as a way to improve simulation environments and make them more digital. We are exploring the possibilities for iterative V&V to help inform and validate concepts and produce better requirements. But first we really need to understand what a digital twin or digital environment means to the FAA. For example, we use simulators, but they aren’t digital twins; you can’t change their behaviors or performance. With a digital twin you can change those things, you can replicate a system or environment before it physically exists. We can test automation standards for aircraft separation, we can test airport operations. We can test intelligent agents as actors (e.g., a pilot, a facility) in the digital environment of the NAS. We can do what-if studies to look at arrival standards for air traffic control and how they are affected by urban air mobility. We can use these tools to reduce the uncertainty; we can understand thresholds using digital environments. But there is a big investment required. It’s difficult to come up with behavioral rule sets for the actors. We probably need machine learning to do it comprehensively and for it to be timely. For this, we need to capture a lot of data and build the rules. The FAA is doing some rudimentary work with agents as pilots, acting as simulation pilots to debug scenarios. The full runs use real pilots, but using the agents for debugging is very time and cost effective.
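
As a rough illustration of the kind of rule-based “sim pilot” agent described above, here is a minimal, hypothetical Python sketch. The Clearance and SimPilot names, the rule set, and the responses are assumptions made for illustration, not an FAA simulation interface; a real agent would need a far richer behavioral rule set, which is where the machine learning mentioned above comes in.

```python
# A hypothetical sketch of a rule-based "sim pilot" agent used to debug scenarios.
# The rule set, command names, and responses are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Clearance:
    kind: str      # e.g., "climb", "descend", "turn"
    value: float   # target altitude in feet or heading in degrees

@dataclass
class SimPilot:
    callsign: str
    altitude_ft: float
    heading_deg: float

    def respond(self, clearance: Clearance) -> str:
        """Apply a simple behavioral rule set: acknowledge and comply when a rule exists."""
        if clearance.kind == "climb" and clearance.value > self.altitude_ft:
            self.altitude_ft = clearance.value
            return f"{self.callsign}: climbing to {int(clearance.value)} ft"
        if clearance.kind == "descend" and clearance.value < self.altitude_ft:
            self.altitude_ft = clearance.value
            return f"{self.callsign}: descending to {int(clearance.value)} ft"
        if clearance.kind == "turn":
            self.heading_deg = clearance.value % 360
            return f"{self.callsign}: turning to heading {int(self.heading_deg)}"
        return f"{self.callsign}: unable"  # the rule set has no behavior for this clearance

if __name__ == "__main__":
    pilot = SimPilot(callsign="SIM123", altitude_ft=31000, heading_deg=90)
    for c in [Clearance("climb", 35000), Clearance("turn", 270), Clearance("descend", 40000)]:
        print(pilot.respond(c))  # the final "unable" shows a gap a scenario debug run would expose
```

Even a simple agent like this can flush out broken scenario scripts before expensive runs with real pilots, which is the time and cost benefit described above.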

Q:        Are there new technologies you are looking into specifically as test tools?

A:        We were talking about engineers not being good writers, and talking about ChatGPT, which will write material for you. We were asking ourselves what it would take to train a chat bot to write TEMPs, because resources are scarce, we’re trying to do more with less, and we’re trying to spend less time reviewing and writing documents. That frees up more time to be in the labs. So, I have a couple of people on my team looking into that. We’re just getting started. You have to run it in a closed loop; you don’t want everything out in the cloud or out on the Internet.
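
As a hedged sketch of what such a closed-loop drafting aid might look like, the hypothetical Python below builds a prompt for one TEMP section and hands it to a stand-in function representing a locally hosted model kept off the public Internet. The template wording, section name, and generate_locally function are all illustrative assumptions, not an existing FAA or vendor interface.

```python
# A hypothetical sketch of a closed-loop TEMP drafting aid. `generate_locally` is a
# stand-in for an approved, on-premises model; nothing here calls a cloud service.

TEMP_SECTION_TEMPLATE = """You are drafting the '{section}' section of a Test and Evaluation
Master Plan (TEMP) for program: {program}. Summarize, in plain language, the test objectives,
resources, and schedule assumptions listed below. Do not invent requirements.

Notes from the test team:
{notes}
"""

def build_temp_prompt(section: str, program: str, notes: str) -> str:
    """Fill in the prompt template; the result would be sent only to a local model."""
    return TEMP_SECTION_TEMPLATE.format(section=section, program=program, notes=notes)

def generate_locally(prompt: str) -> str:
    """Stand-in for a locally hosted language model kept inside the closed loop."""
    raise NotImplementedError("Connect to an approved, on-premises model here.")

if __name__ == "__main__":
    prompt = build_temp_prompt(
        section="Developmental Test Approach",
        program="Example NAS modernization program",
        notes="- Lab regression testing each build\n- Operational demo with field users in Q3",
    )
    print(prompt)  # a reviewer would pass this to generate_locally() and edit the draft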

Q:        Have you found any magic bullets for identifying, recruiting, and retaining a highly skilled technical workforce? Do you have an FAA internship program?

A:        It’s something we’ve been talking about, a lot of people have been talking about, but we don’t have a magic bullet. I do whatever I can to make testing attractive to young people. I like to think my V&V Summits help make testing interesting. I also think it goes a long way for testers to hear upper management embolden them and speak proudly about their role; that matters for maintaining and attracting a workforce and making people feel good about what they’re doing. Our internship program is strong and has had some really good champions. I currently have two interns supporting my branch.

One of the areas you might be interested in, which I think makes testing interesting, is another way to approach testing, or another way to look at V&V. We do animated storyboards, which are animated representations of concepts and requirements instead of a document. Most of the interns who come to work in my branch support the storyboard team. It’s part art and part science, where you take technical information and present it in a visual way that you can understand and validate. So rather than trying to extrapolate technical information from SMEs and just documenting it in a CONOPS (concept of operations) or an operational requirements document, we put it in a storyboard that demonstrates specific use cases, like reentry of a spacecraft or what happens when a drone loses signal around an airport. We examine all kinds of situations and new procedures for air traffic control approaches, such as interval management, which shows how the new automation provides efficient movement of aircraft: planes are not throttling up and down or climbing and descending; they have smooth approaches to airports. For all these different things we do storyboards that are based on, and representative of, the CONOPS. In a sense, it’s a cheap way of validating concepts. You get everyone in the room, demonstrate with the storyboard, and ask if this is what we really want; if yes, then you move out. The storyboard now represents the system going forward and communicates it. It chips away at that thing we talked about, continuous validation, where you’re able to build it into your process with storyboards, in addition to CONOPS, and you evolve them and use them to communicate. As we move to more complex systems of systems or services of services, there are more stakeholders involved, lots more people you have to get on board. How do you do that? There has to be a better way of communicating and letting everyone know what they’re accountable for. Storyboards help with that. We’ve evolved these in my branch as a new capability for the Agency. We’re the only ones in the Agency doing it right now. We’ve standardized our method for doing it and we have a process for when we meet with the stakeholders.

Then we develop a storyboard design document, like you would for anything else, that states what you want to do, what the scenarios are, who your intended audience is, and so on. We document it and that is the basis for the developers to generate it. We use a combination of engineers and graphic artists. They’re simple animations, not like Pixar or anything. You want to be able to create them quickly and you don’t need a high-performance computer to drive them. They’re simple animations and you want to be able to tweak them; you don’t want a big overhead to change them, because things change. So, you develop an event sequence, because much of it is procedure related: this happens, then that happens. You have to understand that and allow people to understand the process. Our interns like this because they can usually step right in and start doing the animations right away. You don’t need a broad technical background. Then they work with an engineer, and the engineer helps them extrapolate the CONOPS and work with the stakeholders. In three to six months of their internship they can have a completed storyboard, or at least a close-to-completed storyboard product.
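
As a hypothetical illustration of how the event sequence in a storyboard design document might be captured so that engineers and animators work from the same steps, here is a small Python sketch. The scenario, actors, and field names are assumptions based on the lost-signal example above, not an FAA format.

```python
# A hypothetical sketch of an event sequence from a storyboard design document,
# captured as simple structured data. Scenario, actors, and fields are illustrative.

from dataclasses import dataclass
from typing import List

@dataclass
class StoryboardEvent:
    step: int
    actor: str        # e.g., "UAS", "Automation", "Controller"
    action: str       # what happens at this step
    trigger: str      # what causes the step (procedure related, as described above)

def print_sequence(title: str, events: List[StoryboardEvent]) -> None:
    """Print the ordered event sequence that the animators will follow."""
    print(title)
    for e in sorted(events, key=lambda ev: ev.step):
        print(f"  {e.step}. [{e.actor}] {e.action} (trigger: {e.trigger})")

if __name__ == "__main__":
    lost_link = [
        StoryboardEvent(1, "UAS", "Drone loses its command-and-control link near the airport", "signal loss"),
        StoryboardEvent(2, "Automation", "Lost-link alert is displayed to the controller", "event 1"),
        StoryboardEvent(3, "Controller", "Surrounding traffic is cleared from the drone's programmed route", "event 2"),
    ]
    print_sequence("Storyboard: drone lost-link near an airport (illustrative)", lost_link)
```

Keeping the sequence in a lightweight, editable form like this fits the goal described above: the storyboard stays cheap to tweak when the concept or procedure changes.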

Q:        What do you consider to be some of your career highlights?

A:        I established the Test Standards Board (TSB), which had never existed in the FAA, and I am really proud of that. The FAA TSB was a major accomplishment. It provides the FAA with a small group of T&E experts that oversee and maintain T&E practices, policies, and strategies for the agency. I led development of the T&E Handbook as a standard, and I helped develop T&E policy for the Agency, which had also never existed. I think our V&V Summits have grown and become valuable, entertaining, and stimulating. If you entertain someone, they learn and they retain. The Summit now attracts high caliber participants because of the reputation it has established. The U.S. Department of Transportation now wants to participate. We have a collaboration with INCOSE, and we publish in their Insight publication. The Summit is growing and expanding.

Q:        You have been running the V&V Summit for nearly 20 years. How did that come about?

A:        When we were standing up the TSB, we needed a venue to bring the various organizations together. We wanted something small and internal to the Tech Center to address how to standardize across test organizations, to discuss standards and best practices. We looked at what the Department of Defense was doing, as well as industry and other government agencies. The first one attracted about 50 people and we discussed best practices. Then we grew it Agency-wide to include broader goals and expectations. Then we included Industry to bring together other practitioners to talk T&E strategy. We wanted to understand the vision and role of T&E outside the FAA and bring it to the FAA, to make it convenient for FAA employees to participate in the discussions. After that, we made academic connections and connected with INCOSE. We want to think about T&E differently. We want to evolve the culture, to move it forward to enhance the NAS.

Today we get about 250 attendees and we want to keep it evolving and extend it beyond the FAA to the Department of Transportation (DOT). The V&V Summits address innovative methods and strategies that embrace V&V philosophies and principles critical to the effective evolution of the NAS. The V&V Summit’s goals are to foster a corporate V&V philosophy, explore innovative methods, and promote industry best practices. V&V Summits explore new ways to apply V&V, optimizing acquisitions and improving decision-making. Presentations focus on promoting an organizational culture that adopts new approaches, innovates improved concepts, and implements effective solutions for the future of aviation. We need the DOT-equivalent of the Test Resource Management Center to help us standardize and coordinate across the DOT. The TRMC was created by Congress, so maybe it will take an act of Congress to make it happen in the DOT.

Starting in 2025, V&V Summits will be scheduled annually in May. The Summit will present and discuss challenges, strategies, and lessons learned for operationalizing past, present, and future innovative concepts. Presenters will provide related perspectives and assessments on people, processes, technologies, and organizational culture and there will be roundtable dialogue on the challenges, problem statements, best practices, useful methods, and lessons learned. We also provide networking time – it’s key to getting people communicating, talking with other T&E professionals they don’t necessarily interact with daily, or at all. Shortly after the V&V Summit concludes, we will produce a white paper detailing presentations, discussions, and findings.

Q:        Do you have any closing remarks or observations?

A:        In hiring people into T&E, look for a mentality beyond a basic systems engineer. Look for a truth seeker, someone willing to test and question, who brings a lens to look closer at an issue. A tester needs to have an engineering mentality plus an operational mentality; they need to know how a system will be operated and by whom. They need to know how the user will interface with it, how it will be integrated into the larger whole, how it will be maintained. They need to be someone who will take pride and be emboldened in their role and not be threatened or intimidated when they receive pushback. They need to be objective and transparent in their evaluations and reporting, and be ready to report on the good, the bad, and the ugly for the good of the mission.
