Please use this identifier to cite or link to this item:
https://dair.nps.edu/handle/123456789/5078
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cullen Tores | - |
dc.date.accessioned | 2024-05-29T19:48:05Z | - |
dc.date.available | 2024-05-29T19:48:05Z | - |
dc.date.issued | 2024-05-29 | - |
dc.identifier.citation | Published--Unlimited Distribution | en_US |
dc.identifier.uri | https://dair.nps.edu/handle/123456789/5078 | - |
dc.description | Symposium Student Poster | en_US |
dc.description.abstract | Assessment of Large Language Models' (LLM) ability to automate the classification of acquisition proposals as either competitive or noncompetitive. • This classification aims to establish a faster, more consistent, and more objective evaluation than human assessment. • Three different prompt engineering strategies were used and compared against one another. • Interaction with the LLM was conducted via R programming and the OpenAI application programming interface rather than the standard graphical user interface (an illustrative sketch of such a workflow appears below the metadata record). | en_US |
dc.description.sponsorship | Acquisition Research Program | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | Acquisition Research Program | en_US |
dc.relation.ispartofseries | Acquisition Management;SYM-AM-24-188 | - |
dc.subject | Student Poster | en_US |
dc.title | Evaluating SBIR Proposals: A Comparative Analysis using Artificial Intelligence and Statistical Programming in the DoD Acquisitions Process | en_US |
dc.type | Presentation | en_US |
Appears in Collections: | Annual Acquisition Research Symposium Proceedings & Presentations |
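
The abstract above describes classifying proposals by calling the OpenAI application programming interface from R rather than through the graphical chat interface. The poster itself does not include its code, so the sketch below is only an illustration under stated assumptions: the function name `classify_proposal`, the model `"gpt-4o"`, the prompt wording, and the use of the `httr` and `jsonlite` packages are all illustrative choices, not the authors' implementation.

```r
# Illustrative sketch only -- not the poster's actual code.
# Assumptions: httr/jsonlite for the HTTP/JSON plumbing, the "gpt-4o" model,
# and a simple zero-shot prompt (the poster compared three prompt strategies
# whose exact wording is not given in this record).
library(httr)
library(jsonlite)

classify_proposal <- function(proposal_text,
                              api_key = Sys.getenv("OPENAI_API_KEY"),
                              model = "gpt-4o") {
  # Build the chat completion request: a system message that defines the
  # labeling task and a user message carrying the proposal text.
  body <- list(
    model = model,
    messages = list(
      list(role = "system",
           content = paste(
             "You label DoD SBIR acquisition proposals as 'competitive'",
             "or 'noncompetitive'. Reply with exactly one of those two words."
           )),
      list(role = "user", content = proposal_text)
    ),
    temperature = 0  # deterministic output for more consistent labels
  )

  # Call the OpenAI chat completions endpoint directly over HTTP.
  resp <- POST(
    url = "https://api.openai.com/v1/chat/completions",
    add_headers(Authorization = paste("Bearer", api_key)),
    content_type_json(),
    body = toJSON(body, auto_unbox = TRUE)
  )
  stop_for_status(resp)

  # Extract the model's one-word label from the JSON response.
  parsed <- content(resp, as = "parsed", type = "application/json")
  trimws(tolower(parsed$choices[[1]]$message$content))
}

# Example usage with hypothetical proposal text:
# classify_proposal("Phase I effort to develop an autonomous logistics scheduler ...")
```

One way the poster's comparison of three prompt engineering strategies could be set up with a function like this is to vary only the system message while holding the model, temperature, and proposal text constant, then compare the resulting labels against human assessments.
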
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
SYM-AM-24-188.pdf | Student Poster | 473.77 kB | Adobe PDF |