
Cloud-Based Large Language Model Deployment: A Comparative Analysis of Serverless and Bring-Your-Own-Container Architectures

Mateusz Ploskonka

Acta Informatica Pragensia, vol. preprint

Abstract:
Background: Large Language Models (LLMs) have transformed research and industry applications; however, cloud deployment decisions remain complex and poorly documented, particularly for academic researchers operating under budget constraints. Systematic guidance on infrastructure selection for LLM-based research is limited.
Objective: This study provides a comprehensive empirical evaluation of cloud-based LLM deployment architectures, examining inference efficiency, serverless platform availability, and architectural trade-offs across major cloud providers to deliver actionable guidance for budget-constrained researchers.
Methods: The author evaluated 32 open-source LLMs ranging from 0.6 billion to 1 trillion parameters across serverless and Bring Your Own Container (BYOC) deployment configurations. Using the Belebele benchmark, the study analyzed cost-efficiency relationships, serverless platform availability, and metrics exposure across Amazon SageMaker, Amazon Bedrock, Azure Serverless, and Hugging Face-compatible providers.
Results: Model performance follows a logarithmic scaling relationship with parameter count (R² = 0.727) and deployment cost (R² = 0.639). Models in the 30-50B parameter range achieve 85-90% of maximum accuracy at a fraction of the cost of frontier models. However, serverless availability remains fragmented: only 34.4% of the examined models are accessible via serverless endpoints, with minimal cross-platform redundancy (6.2%). Deployment architecture introduces a fundamental trade-off: serverless platforms expose 71% fewer metrics than BYOC approaches while eliminating infrastructure management overhead and idle costs.
Conclusion: These findings provide practical guidance for researchers selecting cloud infrastructure under budget constraints. Models in the 7-14B range offer optimal cost efficiency, while the 30-50B range maximizes accuracy per dollar for demanding tasks. The results also challenge the prevailing emphasis on ever-larger models, as diminishing returns become substantial beyond 30B parameters. Persistent gaps in serverless availability and observability highlight the need for greater standardization across cloud platforms.
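As a quick illustration of the abstract's headline result, the following minimal Python sketch fits a log-linear scaling model of benchmark accuracy against parameter count, of the form accuracy ≈ a + b·ln(parameters). All numbers in it are synthetic placeholders chosen for illustration only, not the study's measurements.

import numpy as np

# Minimal sketch of a log-linear scaling fit of the form reported in the
# abstract: accuracy ~ a + b * ln(parameters). All values below are
# synthetic placeholders, not the study's data.
params_b = np.array([0.6, 7.0, 14.0, 32.0, 70.0, 400.0])   # model size, billions (hypothetical)
accuracy = np.array([0.42, 0.61, 0.68, 0.76, 0.80, 0.84])  # benchmark scores (hypothetical)

x = np.log(params_b)
b, a = np.polyfit(x, accuracy, 1)        # least-squares slope and intercept
pred = a + b * x
ss_res = np.sum((accuracy - pred) ** 2)
ss_tot = np.sum((accuracy - accuracy.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                 # coefficient of determination
print(f"accuracy ~ {a:.3f} + {b:.3f} * ln(params), R^2 = {r2:.3f}")

Under such a model, diminishing returns appear directly in the slope: each doubling of parameter count adds only b·ln 2 accuracy points, consistent with the abstract's observation that gains flatten beyond 30B parameters.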

Keywords: LLMs; Cloud computing; Serverless architecture; Cost optimization; Performance evaluation; Model deployment; Infrastructure selection

Downloads: http://aip.vse.cz/doi/10.18267/j.aip.313.html (text/html, free of charge)



Persistent link: https://EconPapers.repec.org/RePEc:prg:jnlaip:v:preprint:id:313

Ordering information: This journal article can be ordered from
Redakce Acta Informatica Pragensia, Katedra systémové analýzy, Vysoká škola ekonomická v Praze, nám. W. Churchilla 4, 130 67 Praha 3
http://aip.vse.cz

DOI: 10.18267/j.aip.313


Acta Informatica Pragensia is currently edited by the Editorial Office.

More articles in Acta Informatica Pragensia from Prague University of Economics and Business.
Bibliographic data for series maintained by Stanislav Vojir.

Handle: RePEc:prg:jnlaip:v:preprint:id:313