OPTIMIZING COMPUTATIONAL EFFICIENCY AND SECURITY-BY-DESIGN CODE GENERATION IN LARGE LANGUAGE MODEL PIPELINES: A COMPARATIVE ANALYSIS OF INFERENCE COSTS
Keywords: LLMs, Energy Efficiency, Cybersecurity, Security-by-design, Optimization

Abstract
Large Language Models (LLMs) have revolutionized software engineering by generating code rapidly and accurately across a wide range of programming tasks. However, the growing reliance on these models raises concerns about their energy consumption, runtime overhead, and the efficiency with which they produce working, secure-by-design, and maintainable code. This article presents an analytical comparison of several leading LLMs, including OpenAI's GPT series, Anthropic's Claude, Meta's Llama, and Google's Gemini, evaluating their success rates in producing secure and optimized code, the number of prompts required to reach a successful output, and the corresponding computational and energy costs. The findings highlight strategies for balancing accuracy, performance, and sustainability in LLM-assisted programming.
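To make the kind of comparison described above concrete, the following is a minimal sketch (not taken from the article) of how success rate, prompts per successful output, and energy cost per secure solution might be combined into a single comparison. The model names, task counts, and energy-per-prompt figures are illustrative placeholders, not measurements from this study.

```python
from dataclasses import dataclass


@dataclass
class ModelTrial:
    """Aggregate results for one LLM on a fixed set of coding tasks.

    All numbers used with this class below are hypothetical placeholders,
    not results reported in the article.
    """
    name: str
    tasks: int                    # number of coding tasks attempted
    secure_successes: int         # tasks solved with secure, working code
    total_prompts: int            # prompts issued across all tasks
    energy_per_prompt_wh: float   # assumed average energy cost per prompt (Wh)

    @property
    def success_rate(self) -> float:
        return self.secure_successes / self.tasks

    @property
    def prompts_per_success(self) -> float:
        # Guard against division by zero when no task succeeded.
        return self.total_prompts / max(self.secure_successes, 1)

    @property
    def energy_per_success_wh(self) -> float:
        # Energy "price" of obtaining one secure, working solution.
        return self.prompts_per_success * self.energy_per_prompt_wh


# Illustrative placeholder figures only.
trials = [
    ModelTrial("model-A", tasks=50, secure_successes=42,
               total_prompts=95, energy_per_prompt_wh=0.3),
    ModelTrial("model-B", tasks=50, secure_successes=38,
               total_prompts=70, energy_per_prompt_wh=0.2),
]

# Rank models by how much energy one secure solution costs.
for t in sorted(trials, key=lambda m: m.energy_per_success_wh):
    print(f"{t.name}: success={t.success_rate:.0%}, "
          f"prompts/success={t.prompts_per_success:.2f}, "
          f"energy/success={t.energy_per_success_wh:.2f} Wh")
```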
DOI: https://doi.org/10.56238/sevened2025.036-117
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.