OPTIMIZING COMPUTATIONAL EFFICIENCY AND SECURITY-BY-DESIGN CODE SECURITY IN LARGE LANGUAGE MODEL PIPELINES: COMPARATIVE ANALYSIS OF INFERENCE COSTS

Authors

  • Tagleorge Silveira
  • Pedro Pinheiro
  • Hélder Rodrigo Pinto
  • Salviano Pinto Soares
  • José Baptista

Keywords:

LLMs, Energy Efficiency, Cybersecurity, Security-by-design, Optimization

Abstract

Large Language Models (LLMs) have revolutionized software engineering by generating code rapidly and accurately across a wide range of programming tasks. However, the growing reliance on these models raises concerns about their energy consumption, runtime overhead, and the efficiency with which they produce correct, secure-by-design, and maintainable code. This article presents an analytical comparison of several leading LLMs, including OpenAI's GPT series, Anthropic's Claude, Meta's Llama, and Google's Gemini, evaluating their success rates in producing secure and optimized code, the number of prompts required to reach a successful output, and the corresponding computational and energy costs. The findings highlight strategies for balancing accuracy, performance, and sustainability in LLM-assisted programming.

DOI: https://doi.org/10.56238/sevened2025.036-117

Published

2025-12-17

How to Cite

Silveira, T., Pinheiro, P., Pinto, H. R., Soares, S. P., & Baptista, J. (2025). OPTIMIZING COMPUTATIONAL EFFICIENCY AND SECURITY-BY-DESIGN CODE SECURITY IN LARGE LANGUAGE MODEL PIPELINES: COMPARATIVE ANALYSIS OF INFERENCE COSTS. Seven Editora, 2309-2324. https://sevenpubl.com.br/editora/article/view/8763