Browsing by Author "Arlitt, Martin F."
Now showing 1 - 2 of 2
Item (Open Access)
A Campus-Level View of DNS Traffic (2019-07-26)
Zhang, Zhengping; Williamson, Carey L.; Arlitt, Martin F.; Ghaderi, Majid; Aycock, John

This thesis presents a characterization study of DNS traffic within the University of Calgary edge network. The traffic analysis is based on a one-week observation period (September 3, 2018 to September 9, 2018). We study both directions of the DNS traffic (outbound and inbound), reflecting the two roles that the campus plays in the DNS architecture: user and service provider. We also selectively analyze the traffic of a few campus DNS servers and examine several DNS-related anomalies. The measurement results show that a significant proportion of DNS messages stems from misconfigurations or from answers with short TTLs; addressing either would reduce the DNS traffic volume. (A hedged sketch of short-TTL analysis appears after the listing.)

Item (Open Access)
Investigations into the Performance and Scalability of Software Systems (2019-09-19)
Hashemian, Raoufehsadat; Krishnamurthy, Diwakar; Arlitt, Martin F.; Wang, Mea; Wang, Xin; Far, Behrouz Homayoun; Chandra, Abhishek

This research explores three distinct problems related to the performance and scalability of software systems. The first two problems share the overarching goal of increasing the effective utilization of multicore hardware used to deploy latency-sensitive applications. Specifically, I first explore how the multicore hardware hosting a Web server can be utilized effectively while still satisfying acceptable user response times. In the second problem, I study the design of a benchmarking testbed that uses multicore hardware to emulate large-scale Web of Things (WoT) deployments. The key challenge here is to emulate a large number of WoT devices on the hardware without violating the integrity of test results due to contention for testbed resources. The third problem was motivated by the large number of experiments triggered by the first two studies. In performance evaluation studies such as these, practitioners often need to consider how a large number of independent variables, i.e., configuration parameters, impact dependent variables, e.g., response time. Naive experiment selection techniques can increase experimentation effort without necessarily providing more insight into the performance behaviour of the system. I investigate an intelligent experiment selection technique to address this problem. I show that, with the right configuration strategy, a modern multicore server can be utilized at up to 80% while maintaining the desired response time performance; in contrast to existing studies, however, the best strategy depends on the server workload. Using detailed hardware counter measurements, I characterize the relationship between workload, shared micro-architectural hardware resources, and scalability. In the context of a WoT emulation testbed, I show how contention for shared hardware resources can impact the integrity of test results. In contrast to similar testbeds, I design a contention detection module that helps testers explicitly recognize such contention during large-scale WoT performance evaluation exercises. Finally, I develop an experiment selection technique called IRIS, which exploits approximate knowledge of the performance behaviour of a system to determine how best to place the next experiment point in the independent variable space.
I show that IRIS outperforms techniques such as equal-distance experiment point selection. (Hedged sketches of contention detection and adaptive experiment placement also follow the listing.)
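
The short-TTL finding in the first thesis can be illustrated by tallying answer-record TTLs in a packet capture. This is a minimal sketch only, assuming Python with the scapy library; the capture file name dns_capture.pcap and the 60-second threshold for a "short" TTL are illustrative assumptions, not details taken from the thesis.

from collections import Counter
from scapy.all import rdpcap, DNS, DNSRR

SHORT_TTL = 60  # seconds; an assumed threshold for a "short" TTL

def ttl_histogram(pcap_path):
    """Tally the TTLs of standard answer records in DNS responses."""
    ttls = Counter()
    for pkt in rdpcap(pcap_path):
        if not pkt.haslayer(DNS) or pkt[DNS].qr != 1:
            continue  # keep DNS responses only
        ans = pkt[DNS].an
        records = ans if isinstance(ans, list) else []
        while isinstance(ans, DNSRR):  # older scapy chains records via payload
            records.append(ans)
            ans = ans.payload
        for rr in records:
            if isinstance(rr, DNSRR):  # count standard resource records
                ttls[rr.ttl] += 1
    return ttls

hist = ttl_histogram("dns_capture.pcap")  # hypothetical capture file
total = sum(hist.values())
short = sum(n for ttl, n in hist.items() if ttl <= SHORT_TTL)
print(f"{short} of {total} answer records have TTL <= {SHORT_TTL}s")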
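
The contention detection module in the second thesis is not described in the listing, so the next sketch only illustrates one plausible approach under stated assumptions: sampling hardware counters with the Linux perf tool and flagging intervals with a high last-level-cache miss rate. The event list and the 5% threshold are assumptions, not the thesis's design.

import subprocess

EVENTS = "instructions,cycles,LLC-load-misses"
MISS_THRESHOLD = 0.05  # assumed: flag runs with > 5% LLC misses per instruction

def sample_counters(pid, seconds=5):
    """Attach perf stat to a process and return {event: count}."""
    out = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", EVENTS,
         "-p", str(pid), "--", "sleep", str(seconds)],
        capture_output=True, text=True, check=True,
    ).stderr  # perf writes its CSV statistics to stderr
    counts = {}
    for line in out.splitlines():
        fields = line.split(",")
        if len(fields) >= 3 and fields[0].strip().isdigit():
            counts[fields[2]] = int(fields[0])
    return counts

def contended(counts):
    """Heuristic: a high LLC-miss-per-instruction ratio suggests contention."""
    misses = counts.get("LLC-load-misses", 0)
    instructions = counts.get("instructions", 1)
    return misses / instructions > MISS_THRESHOLD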
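
The listing above does not give the IRIS algorithm itself, so the final sketch illustrates only the general idea of adaptive experiment placement: spend the next measurement where the response curve is changing fastest, rather than at equally spaced points. The bisection heuristic and the measure() callback are assumptions, not the thesis's method.

import bisect

def next_experiment_point(xs, ys):
    """Given sorted independent-variable values xs and measured responses ys,
    return the midpoint of the interval with the largest response change."""
    assert len(xs) == len(ys) >= 2
    gaps = [abs(ys[i + 1] - ys[i]) for i in range(len(xs) - 1)]
    i = max(range(len(gaps)), key=gaps.__getitem__)
    return (xs[i] + xs[i + 1]) / 2.0

def run_campaign(measure, x_lo, x_hi, budget):
    """Iteratively refine the measurement set, one experiment at a time."""
    xs = [x_lo, x_hi]
    ys = [measure(x_lo), measure(x_hi)]
    for _ in range(budget):
        x = next_experiment_point(xs, ys)
        j = bisect.bisect(xs, x)
        xs.insert(j, x)
        ys.insert(j, measure(x))
    return xs, ys

# Usage with a hypothetical response-time model (a toy saturation curve):
measure = lambda u: 1.0 / max(1e-6, 1.0 - u / 100.0)
xs, ys = run_campaign(measure, x_lo=0.0, x_hi=95.0, budget=8)

Relative to equal-distance selection, this heuristic naturally concentrates measurements near the knee of a saturating response curve, which is where the interesting behaviour usually lies.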