
Efficient Microservice Deployment in Kubernetes Multi-Clusters through Reinforcement Learning

Mattia Zaccarini; Filippo Poltronieri; Mauro Tortonesi; Cesare Stefanelli
2024

Abstract

Microservices have revolutionized application deployment on popular cloud platforms, offering flexible scheduling of loosely coupled containers and improving operational efficiency. However, this transition has made applications more complex, now consisting of tens to hundreds of microservices. Efficient orchestration remains an enormous challenge, especially with emerging paradigms such as Fog Computing and novel use cases such as autonomous vehicles. Moreover, multi-cluster scenarios are still largely unexplored, since most of the literature focuses on single-cluster setups. The scheduling problem becomes significantly more challenging in this setting, since the orchestrator must find optimal locations for each microservice while deciding whether instances are deployed together or placed into different clusters. This paper studies the multi-cluster orchestration challenge by proposing a Reinforcement Learning (RL)-based approach for efficient microservice deployment in Kubernetes (K8s), a widely adopted container orchestration platform. The study demonstrates the effectiveness of RL agents in achieving near-optimal allocation schemes, emphasizing latency reduction and deployment cost minimization. Additionally, the work highlights the versatility of the DeepSets neural network in optimizing microservice placement across diverse multi-cluster setups without retraining. Results show that DeepSets-based agents optimize the placement of microservices in a multi-cluster setup 32 times larger than the one they were trained on.
ISBN: 9798350327939

Keywords: Kubernetes; Microservices; Orchestration; Reinforcement Learning; Resource allocation
Files in this record:
No files are associated with this record.

Documents in SFERA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11392/2574890
Notice: the displayed data have not been validated by the university.

Citations
  • Scopus: 3
  • Web of Science: 2