Please use this identifier to cite or link to this item: https://dair.nps.edu/handle/123456789/5078
Full metadata record
DC Field: Value (Language)

dc.contributor.author: Cullen Tores
dc.date.accessioned: 2024-05-29T19:48:05Z
dc.date.available: 2024-05-29T19:48:05Z
dc.date.issued: 2024-05-29
dc.identifier.citation: Published--Unlimited Distribution (en_US)
dc.identifier.uri: https://dair.nps.edu/handle/123456789/5078
dc.description: Symposium Student Poster (en_US)
dc.description.abstract (en_US):
  Assessment of Large Language Models' (LLM) ability to automate the classification of acquisition proposals as either competitive or noncompetitive.
  • This classification aims to establish a faster, more consistent, and more objective evaluation system than human assessment.
  • Three different prompt engineering strategies were used and compared against one another.
  • Interaction with the LLM was conducted via R programming and the OpenAI application programming interface (API), not the standard graphical user interface.
dc.description.sponsorship: Acquisition Research Program (en_US)
dc.language.iso: en_US (en_US)
dc.publisher: Acquisition Research Program (en_US)
dc.relation.ispartofseries: Acquisition Management; SYM-AM-24-188
dc.subject: Student Poster (en_US)
dc.title: Evaluating SBIR Proposals: A Comparative Analysis using Artificial Intelligence and Statistical Programming in the DoD Acquisitions Process (en_US)
dc.type: Presentation (en_US)
Appears in Collections:Annual Acquisition Research Symposium Proceedings & Presentations

Files in This Item:
File: SYM-AM-24-188.pdf
Description: Student Poster
Size: 473.77 kB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.