Please use this identifier to cite or link to this item:
https://dair.nps.edu/handle/123456789/5348
Title: | Knowledge based Metrics for Test and Design |
Authors: | Craig Arndt; Valerie Sitterle; Jeremy Doerr
Keywords: | Test evaluation metrics |
Issue Date: | 22-Apr-2025 |
Publisher: | Acquisition Research Program |
Series/Report no.: | Acquisition Management;SYM-AM-25-314 |
Abstract: | The task of developing the best military equipment in the world has long fallen to the U.S. Department of Defense and the military industrial base that supports it. The United States decided years ago that its military would have the best equipment in the world, and it has succeeded in going to war with the best equipment since the second half of WWII (1943). As the 21st century continues to unfold, this commitment is becoming more difficult, more costly, and harder to execute in a timely manner. Over the past few years, the leadership of the DoD acquisition community has listed the acceleration of developmental testing and the fielding of systems as its top priority. To make this happen, the DoD is implementing digital transformation. Another major part of accelerating the acquisition process has been a movement to integrate the design and test functions of the acquisition process, including moving test earlier in the development process. When looking at the test and acquisition process, it is important to understand the goal of test in the development process. Traditionally, the goal of test has been to validate that a design will meet the specific requirements created for the system. This traditional goal, however, is becoming less relevant, even as test consumes an ever larger share of development resources. So, what is the goal of test? If the role of test is to help ensure that we are developing the best product for our customer, then we might think of test’s role as increasing knowledge about the future performance of a system still in design while there is time to improve the design. At a practical level, this means two things. First, testing should be designed specifically to support decision-making; the development of the Integrated Decision Support Key (IDSK) was intended to support this goal. Second, we need to integrate all activities that provide additional knowledge about the future performance of the system in meaningful ways to support decision-making. To integrate and measure the knowledge needed to make specific decisions (about requirements, risk, system design, and test resource allocation), we must be able to measure both the amount of knowledge a decision requires and the amount of knowledge we expect a given activity (design, test, or historical data) to generate. In this paper, we demonstrate the development of a mathematically based knowledge metric and show how it can be applied to specific DoD acquisition and test decision-making. The paper documents the development of the decision aid and its use in practical programmatic decisions.
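The abstract describes measuring the knowledge a decision requires against the knowledge an activity is expected to generate, without giving the metric's mathematical form. The sketch below is purely illustrative and is not the authors' metric: it assumes knowledge can be modeled as the reduction in variance of a performance estimate, with independent activities combined by adding precisions (inverse variances). All activity names, variance values, and the 0.90 decision threshold are invented for the example.

```python
# Hypothetical knowledge-metric sketch (not the metric from SYM-AM-25-314).
# Assumption: "knowledge" about a predicted performance parameter is the
# fractional reduction in the variance of its estimate, and independent
# knowledge-generating activities combine by inverse-variance addition.

def combined_precision(activity_variances):
    """Total precision (1/variance) contributed by independent activities."""
    return sum(1.0 / v for v in activity_variances)

def knowledge_gain(prior_variance, activity_variances):
    """Fractional variance reduction achieved by the listed activities."""
    posterior_variance = 1.0 / (1.0 / prior_variance
                                + combined_precision(activity_variances))
    return 1.0 - posterior_variance / prior_variance

# Illustrative knowledge-generating activities and their assumed variances:
activities = {
    "historical data": 4.0,            # loose evidence from similar systems
    "model-based design analysis": 2.0,
    "developmental test": 0.5,         # direct measurement, lowest variance
}

prior = 9.0      # assumed variance of the estimate before any activity
required = 0.90  # assumed knowledge threshold the decision needs

gain = knowledge_gain(prior, activities.values())
print(f"knowledge gained: {gain:.2f} (decision supported: {gain >= required})")
```

Under these assumptions the three activities together yield a gain of about 0.96, so the notional decision threshold is met; dropping the developmental test would leave the threshold unmet, which is the kind of trade (test resource allocation against decision need) the abstract describes.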
Description: | SYM Paper / Presentation |
URI: | https://dair.nps.edu/handle/123456789/5348 |
Appears in Collections: | Annual Acquisition Research Symposium Proceedings & Presentations |
Files in This Item:
File | Description | Size | Format
---|---|---|---
SYM-AM-25-314.pdf | SYM Paper | 1.48 MB | Adobe PDF
SYM-AM-25-314.pdf | SYM Presentation | 2.17 MB | Adobe PDF