Efficient Microservice Deployment in Kubernetes Multi-Clusters through Reinforcement Learning
Mattia Zaccarini, Filippo Poltronieri, Mauro Tortonesi, Cesare Stefanelli
2024
Abstract
Microservices have revolutionized application deployment on popular cloud platforms, offering flexible scheduling of loosely coupled containers and improving operational efficiency. However, this transition has made applications more complex, often consisting of tens to hundreds of microservices. Efficient orchestration remains an enormous challenge, especially with emerging paradigms such as Fog Computing and novel use cases such as autonomous vehicles. Moreover, multi-cluster scenarios remain largely unexplored, since most of the literature focuses on single-cluster setups. The scheduling problem becomes significantly more challenging because the orchestrator must find an optimal location for each microservice while deciding whether instances are deployed together or spread across different clusters. This paper studies the multi-cluster orchestration challenge by proposing a Reinforcement Learning (RL)-based approach for efficient microservice deployment in Kubernetes (K8s), a widely adopted container orchestration platform. The study demonstrates the effectiveness of RL agents in achieving near-optimal allocation schemes, emphasizing latency reduction and deployment cost minimization. Additionally, the work highlights the versatility of the DeepSets neural network in optimizing microservice placement across diverse multi-cluster setups without retraining. Results show that the DeepSets-based agent optimizes microservice placement in multi-cluster setups up to 32 times larger than the scenario it was trained on.
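
To illustrate the property the abstract relies on, the sketch below shows how a DeepSets-style encoder is permutation-invariant over a variable-size set of clusters, which is what allows a single trained model to be applied to setups with a different number of clusters without retraining. This is an illustrative sketch only: the class name DeepSetsScorer, the per-cluster feature dimension, and the layer sizes are assumptions for exposition, not the architecture used by the authors.

# Hypothetical sketch of a DeepSets-style scorer over a variable-size set of
# clusters; names, feature dimensions, and layer sizes are illustrative
# assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn

class DeepSetsScorer(nn.Module):
    def __init__(self, cluster_feat_dim: int = 8, hidden_dim: int = 64):
        super().__init__()
        # phi: applied independently to each cluster's feature vector
        self.phi = nn.Sequential(
            nn.Linear(cluster_feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        # rho: applied to the permutation-invariant pooled embedding
        self.rho = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, clusters: torch.Tensor) -> torch.Tensor:
        # clusters: (num_clusters, cluster_feat_dim); num_clusters may vary
        # between calls, so the same weights handle 4 clusters or 128.
        embedded = self.phi(clusters)   # (num_clusters, hidden_dim)
        pooled = embedded.sum(dim=0)    # permutation-invariant sum pooling
        return self.rho(pooled)         # scalar score / value estimate

# Usage: score a hypothetical 4-cluster state described by per-cluster
# features (e.g., free CPU, free memory, latency to users, cost).
scorer = DeepSetsScorer()
state = torch.rand(4, 8)
print(scorer(state))

Because the pooling step is a sum over per-cluster embeddings, the same weights can evaluate states with any number of clusters, which mirrors the generalization behavior described in the abstract.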