Browsing by Author "MirzaEbrahim Mostofi, Vahid"
Item (Open Access): Auto-Scaling Containerized Microservice Applications (2021-09)
Authors: MirzaEbrahim Mostofi, Vahid; Krishnamurthy, Diwakar; Arlitt, Martin; Drew, Steve; Medeiros de Souza, Roberto

The microservices architecture is increasingly used to build complex applications. Many such applications are customer-facing; they therefore face workload fluctuations and must respond to end-user requests quickly despite these fluctuations. Furthermore, application owners typically prefer to allocate resources efficiently using containerization technology so that operational costs are kept low. These two requirements are typically addressed by an auto-scaler module. A third requirement, unique to microservice applications, is the need for owners to roll out updates in an agile and frequent manner, so an auto-scaler appropriate for microservices should be designed to support it as well. Unfortunately, current auto-scaling techniques do not satisfy these three key requirements simultaneously. I develop a novel auto-scaler called TRIM that addresses this open issue. TRIM exploits properties of real-life microservice workloads. Specifically, my analysis of a large dataset of 24,000 production microservice applications reveals a novel insight: a small number of workload patterns are encountered frequently over any given time period, and these popular patterns tend to remain popular during subsequent time periods. TRIM quickly pre-computes resource allocations for this small set of popular patterns and re-uses those allocations at runtime when appropriate. I also develop MOAT, a novel heuristic optimization technique that ensures the pre-computed allocations satisfy response time targets efficiently. Using a variety of analytical, on-premises, and public cloud systems, I show that MOAT and TRIM outperform state-of-the-art baselines on all three requirements described above. For example, I consider an on-premises system subjected to a 24-hour workload derived from a real-life microservice application. By quickly pre-computing resource allocations for just 5 popular workload patterns, TRIM achieves up to 70% fewer response time violations and up to 20% lower costs compared to an industry-standard auto-scaling technique.
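To illustrate the "pre-compute for popular patterns, reuse at runtime" idea described in the abstract, the sketch below caches per-microservice allocations keyed by a discretized workload pattern and falls back to a reactive scaler for unrecognized patterns. This is not TRIM's or MOAT's actual implementation; the names (PatternReuseAutoScaler, discretize, the bucket size, the toy optimizer and fallback) and the pattern-matching scheme are all illustrative assumptions.

```python
from typing import Dict, Tuple, Sequence, Callable

# Hypothetical pattern key: per-microservice request rates (req/s),
# discretized so that similar workloads map to the same key.
Pattern = Tuple[int, ...]
Allocation = Tuple[int, ...]  # replicas per microservice


def discretize(rates: Sequence[float], bucket: float = 50.0) -> Pattern:
    """Map raw per-service request rates to a coarse pattern key (assumed scheme)."""
    return tuple(int(r // bucket) for r in rates)


class PatternReuseAutoScaler:
    """Sketch of reusing pre-computed allocations for popular workload patterns."""

    def __init__(self) -> None:
        # Pre-computed allocations: pattern -> replicas per microservice.
        # In the thesis these would come from an offline optimizer (e.g., MOAT);
        # here they are produced by whatever optimizer/fallback is passed in.
        self.cache: Dict[Pattern, Allocation] = {}

    def precompute(self, popular_patterns: Sequence[Pattern],
                   optimizer: Callable[[Pattern], Allocation]) -> None:
        """Offline step: solve for allocations of the few popular patterns."""
        for p in popular_patterns:
            self.cache[p] = optimizer(p)

    def scale(self, rates: Sequence[float],
              fallback: Callable[[Sequence[float]], Allocation]) -> Allocation:
        """Runtime step: reuse a cached allocation if the pattern is known."""
        key = discretize(rates)
        if key in self.cache:
            return self.cache[key]      # cheap lookup, no re-optimization
        alloc = fallback(rates)         # e.g., a reactive/industry-standard scaler
        self.cache[key] = alloc         # remember it in case the pattern recurs
        return alloc


if __name__ == "__main__":
    # Toy stand-ins: a placeholder "optimizer" and a naive reactive fallback.
    toy_optimizer = lambda p: tuple(b + 1 for b in p)
    reactive = lambda rates: tuple(max(1, int(r / 100)) for r in rates)

    scaler = PatternReuseAutoScaler()
    scaler.precompute([(2, 1, 4)], toy_optimizer)              # one "popular" pattern
    print(scaler.scale([120, 80, 230], fallback=reactive))     # hits the cache
    print(scaler.scale([900, 10, 10], fallback=reactive))      # falls back
```

The design point the sketch captures is that the expensive optimization happens offline for a handful of patterns, while the runtime path is a dictionary lookup; only genuinely unseen patterns pay the cost of the fallback scaler.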