Please use this identifier to cite or link to this item:
https://dair.nps.edu/handle/123456789/5389
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Brian Mayer | - |
dc.contributor.author | Jaganmohan Chandrasekaran | - |
dc.contributor.author | Erin Lanus | - |
dc.contributor.author | Patrick Butler | - |
dc.contributor.author | Stephen Adams | - |
dc.contributor.author | Jared Gregersen | - |
dc.contributor.author | Naren Ramakrishnan | - |
dc.contributor.author | Laura Freeman | - |
dc.date.accessioned | 2025-05-02T17:01:31Z | - |
dc.date.available | 2025-05-02T17:01:31Z | - |
dc.date.issued | 2025-04-02 | - |
dc.identifier.citation | APA | en_US |
dc.identifier.uri | https://dair.nps.edu/handle/123456789/5389 | - |
dc.description | SYM Paper / SYM Panel | en_US |
dc.description.abstract | "As large language models (LLMs) continue to advance and find applications in critical decision-making systems, robust and thorough test and evaluation (T&E) of these models will be necessary to ensure we reap their promised benefits without the risks that often come with LLMs. Most existing applications of LLMs are in specific areas like healthcare, marketing, and customer support and thus these domains have influenced their T&E processes. When investigating LLMs for government acquisition, we encounter unique challenges and opportunities. Key challenges include managing the complexity and novelty of Artificial Intelligence (AI) systems and implementing robust risk management practices that can pass muster with the stringency of government regulatory requirements. Data management and transparency are critical concerns, as is the need for ensuring accuracy (performance). Unlike traditional software systems developed for specific functionalities, LLMs are capable of performing a wide variety of functionalities (e.g., translation, generation). Furthermore, the primary mode of interaction with an LLM is through natural language. These unique characteristics necessitate a comprehensive evaluation across diverse functionalities and accounting for the variability in the natural language inputs/outputs. Thus, the T&E for LLMs must support evaluating the model’s linguistic capabilities (understanding, reasoning, etc.), generation capabilities (e.g., correctness, coherence, and contextually relevant responses), and other quality attributes (fairness, security, lack of toxicity, robustness). T&E must be thorough, robust, and systematic to fully realize the capabilities and limitations (e.g., hallucinations and toxicity) of LLMs and to ensure confidence in their performance. This work aims to provide an overview of the current state of T&E methods for ascertaining the quality of LLMs and structured recommendations for testing LLMs, thus resulting in a process for assuring warfighting capability. " | en_US |
dc.description.sponsorship | Acquisition Research Program | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | Acquisition Research Program | en_US |
dc.relation.ispartofseries | Acquisition Management;SYM-AM-25-313 | - |
dc.relation.ispartofseries | ;SYM-AM-25-401 | - |
dc.subject | Large Language Models | en_US |
dc.subject | Test and Evaluation | en_US |
dc.subject | Government Acquisition | en_US |
dc.subject | Generative Artificial Intelligence | en_US |
dc.subject | Benchmarking | en_US |
dc.title | Test and Evaluation of Large Language Models to Support Informed Government Acquisition | en_US |
dc.type | Presentation | en_US |
dc.type | Technical Report | en_US |
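The abstract above calls for evaluating an LLM across multiple functionalities and quality attributes (e.g., correctness, coherence, toxicity, robustness). The sketch below is not from the paper; it is a minimal, hypothetical illustration of the shape such a multi-dimension evaluation harness can take, where the `generate` stub stands in for the system under test and the scorers stand in for real metrics (benchmark suites, toxicity classifiers, or human review).

```python
"""Illustrative sketch only (not from the paper): a minimal multi-dimension
LLM evaluation harness. The `generate` stub, scorers, and test cases are
hypothetical placeholders for a real model endpoint and real metrics."""
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class TestCase:
    prompt: str        # natural-language input under test
    reference: str     # expected or reference answer, if any
    tags: list = field(default_factory=list)  # functionality labels, e.g. ["translation"]


def generate(prompt: str) -> str:
    """Stand-in for the system under test (an LLM call)."""
    return "stub response for: " + prompt


def score_correctness(response: str, case: TestCase) -> float:
    """Crude token-overlap proxy; a real harness would use task-specific metrics."""
    ref, got = set(case.reference.lower().split()), set(response.lower().split())
    return len(ref & got) / len(ref) if ref else 0.0


def score_non_toxicity(response: str, case: TestCase) -> float:
    """Keyword screen standing in for a toxicity classifier (1.0 = clean)."""
    blocklist = {"hateful", "slur"}
    return 0.0 if blocklist & set(response.lower().split()) else 1.0


SCORERS = {"correctness": score_correctness, "non_toxicity": score_non_toxicity}


def evaluate(cases: list) -> dict:
    """Run every case through every scorer and report the mean per dimension."""
    results = {name: [] for name in SCORERS}
    for case in cases:
        response = generate(case.prompt)
        for name, scorer in SCORERS.items():
            results[name].append(scorer(response, case))
    return {name: mean(scores) for name, scores in results.items()}


if __name__ == "__main__":
    suite = [
        TestCase("Translate 'bonjour' to English.", "hello", ["translation"]),
        TestCase("Summarize: the contract was awarded in 2024.",
                 "the contract was awarded in 2024", ["summarization"]),
    ]
    print(evaluate(suite))
```

A production harness along these lines would replace the stubs with benchmark datasets and model-based or human scoring, and would report results per functionality (translation, summarization, reasoning, etc.) rather than a single mean per dimension.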
Appears in Collections: Annual Acquisition Research Symposium Proceedings & Presentations
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
SYM-AM-25-313.pdf | SYM Paper | 638.82 kB | Adobe PDF |
SYM-AM-25-401.pdf | SYM Presentation | 918.16 kB | Adobe PDF |