Investigations into the Performance and Scalability of Software Systems

Date
2019-09-19
Abstract
This research explores three distinct problems related to the performance and scalability of software systems. The first two problems share the overarching goal of increasing the effective utilization of multicore hardware used to deploy latency-sensitive applications. Specifically, I first explore how the multicore hardware hosting a Web server can be utilized effectively while still satisfying acceptable user response times. In the second problem, I study the design of a benchmarking testbed that uses multicore hardware to emulate large-scale Web of Things (WoT) deployments. The key challenge is to emulate a large number of WoT devices on the hardware without compromising the integrity of test results through contention for testbed resources. The third problem is motivated by the large number of experiments triggered by the first two studies. In performance evaluation studies such as these, practitioners often need to consider how a large number of independent variables, i.e., configuration parameters, impact dependent variables, e.g., response time. Naive experiment selection techniques can increase experimentation effort without necessarily providing more insight into the performance behaviour of the system. I investigate an intelligent experiment selection technique to address this problem. I show that, with the right configuration strategy, a modern multicore server can be utilized at up to 80% while maintaining the desired response time performance. However, in contrast to existing studies, I find that the best strategy depends on the server workload. Using detailed hardware counter measurements, I characterize the relationship between workload, shared micro-architectural hardware resources, and scalability. In the context of a WoT emulation testbed, I show how contention for shared hardware resources can impact the integrity of test results.
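The abstract itself contains no code, but the idea of detecting contention from hardware counter measurements can be illustrated with a minimal sketch. The function name, the choice of LLC misses per kilo-instruction (MPKI) as the metric, and the threshold value below are all hypothetical assumptions for illustration, not the thesis's actual detection module:

```python
def contention_flags(baseline_mpki, loaded_mpki, threshold=1.5):
    """Flag emulated devices whose shared-cache pressure inflates under co-location.

    baseline_mpki: LLC misses per kilo-instruction for each device running alone.
    loaded_mpki:   the same metric when all devices are co-located on the testbed.
    A device is flagged when its loaded reading exceeds `threshold` times its
    baseline, suggesting contention for shared micro-architectural resources
    that could compromise test-result integrity.
    """
    return [loaded / base > threshold
            for base, loaded in zip(baseline_mpki, loaded_mpki)]
```

In this sketch a flagged device tells the tester that results gathered during that run may reflect testbed contention rather than the device's own behaviour.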
In contrast to similar testbeds, I design a contention detection module that helps testers explicitly recognize such contention during large-scale WoT performance evaluation exercises. Finally, I develop an experiment selection technique called IRIS. IRIS exploits approximate knowledge of the performance behaviour of a system to determine where best to place the next experiment point in the independent variable space. I show that IRIS outperforms techniques such as equal-distance experiment point selection.
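The contrast with equal-distance selection can be sketched in a few lines. This is not the IRIS algorithm from the thesis; it is a hypothetical stand-in that captures the stated idea of using approximate knowledge of the response curve (here, local slope changes from points already measured) to decide where the next experiment point pays off most:

```python
def next_experiment_point(xs, ys):
    """Pick the x at which to run the next experiment.

    xs must be sorted; ys are the measured responses. Each interval between
    adjacent points is scored by its width times the largest change in local
    slope at either endpoint (a rough interpolation-error proxy), and the
    midpoint of the highest-scoring interval is returned. Equal-distance
    selection would instead ignore ys and split the widest interval.
    """
    slopes = [(y1 - y0) / (x1 - x0)
              for x0, x1, y0, y1 in zip(xs, xs[1:], ys, ys[1:])]
    # bends[j] is the slope change observed at interior point xs[j + 1]
    bends = [abs(s1 - s0) for s0, s1 in zip(slopes, slopes[1:])]
    best_i, best_score = 0, -1.0
    for i in range(len(xs) - 1):
        width = xs[i + 1] - xs[i]
        # slope changes adjacent to interval i: at its left and right endpoints
        adj = [bends[j] for j in (i - 1, i) if 0 <= j < len(bends)]
        score = width * (max(adj) if adj else 0.0)
        if score > best_score:
            best_i, best_score = i, score
    return (xs[best_i] + xs[best_i + 1]) / 2.0
```

On a curve that bends sharply in a wide, sparsely sampled region, this heuristic places the next point inside that region, whereas equal-distance selection would spend the same experiment budget regardless of where the response actually varies.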
Citation
Hashemian, R. (2019). Investigations into the Performance and Scalability of Software Systems (Doctoral thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca.