Browsing by Author "Safavi-Naini, Reihaneh"
Now showing 1 - 20 of 20

Item Open Access
A View-Based Protection Model to Prevent Inference Attacks by Third-Party Extensions to Social Computing Platforms (2016-01-18)
Ahmadinejad, Seyed Hossein; Fong, Philip W. L.; Safavi-Naini, Reihaneh; Locasto, Michael E.; Bauer, Mark; Debbabi, Mourad

The recent significant growth in popularity of Social Network Systems (SNSs) has raised serious concerns regarding user privacy. One such concern, called inference attacks, is the leakage of users' private information from their public information. This dissertation identifies a more dangerous type of inference attack, in which users' private information is inferred by third-party extensions to SNS platforms. SNSs provide an Application Programming Interface (API) that can be used by third-party applications to access SNS user profiles and, in return, provide some functionality for the users. Systematic inference of user-inaccessible information by third-party extensions from the information accessible through the SNS APIs is called an SNS API inference attack. Due to the large number of users who subscribe to third-party extensions, even with a meager success rate, SNS API inference attacks could violate the privacy of millions of users. Moreover, SNS API inference attacks could be used as a building block for further security attacks (e.g., identification attacks). This work first evaluates the feasibility of SNS API inference attacks by conducting an experiment in which sample inference algorithms are developed and executed against a sufficiently large set of real user profiles, and their success rate is assessed. Next, a view-based protection model is proposed for the purpose of preventing SNS API inference attacks. This model allows users to share a sanitized version of their profiles with extensions. Sanitizing transformations must be designed to preserve both the privacy and the usefulness of the user profiles. The proposed model has a theoretical framework that defines measures to evaluate the effectiveness of sanitizing transformations. The theoretical framework is paired with an enforcement model to show how transformations can actually be designed and applied to sanitize user profiles. The enforcement model includes a declarative language for articulating transformations, as well as a model of computation that can describe transformations and access queries. The proposed model of computation has enough expressive power and meets the required properties. Finally, the proposed model is evaluated by assessing the correctness of the theoretical framework and the enforcement model.
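For illustration, the sanitizing transformations described above can be thought of as computing a coarsened view of a profile before it ever reaches an extension. The sketch below is a minimal example with hypothetical field names and generalization rules; it is not the dissertation's declarative transformation language.

```python
# Illustrative only: expose a sanitized view of a user profile to a
# third-party extension instead of the raw attributes (hypothetical rules).

def sanitize_profile(profile: dict) -> dict:
    """Return a coarsened copy of the profile for the extension API."""
    view = {}
    if "birth_year" in profile:
        # Generalize the exact birth year to a decade.
        view["birth_decade"] = (profile["birth_year"] // 10) * 10
    if "country" in profile:
        # Drop fine-grained location; keep only the country.
        view["country"] = profile["country"]
    if "friends" in profile:
        # Reveal only the number of friends, not their identities.
        view["friend_count"] = len(profile["friends"])
    return view

raw = {"birth_year": 1987, "city": "Calgary", "country": "Canada",
       "friends": ["alice", "bob", "carol"]}
print(sanitize_profile(raw))
# {'birth_decade': 1980, 'country': 'Canada', 'friend_count': 3}
```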
Item Open Access
Contributions to Behavioral Authentication Systems (2021-01-27)
Islam, Md Morshedul; Safavi-Naini, Reihaneh; Fong, Philip W. L.; Jacobson, Michael John; Yanushkevich, Svetlana N.; Mannan, Mohammad

Behavioral Authentication (BA) systems authenticate users through their behavioral characteristics. BA systems construct behavioral profiles of users from their well-designed activities, and store the profiles in a profile database on the system. For a verification request, a verification algorithm evaluates the request by comparing the provided verification data with the stored profile. In this thesis, we identify a number of shortcomings of these systems that are motivated by the application of these systems in practice. We study these shortcomings and propose solutions to address each.

We designed, implemented and evaluated an activity-based BA system for mobile devices that is used to evaluate our proposed systems experimentally. In more detail, we proposed a challenge-response based BA system named DAC (Draw A Circle) and later extended it to eDAC (extended DAC) to improve its accuracy and usability. In both systems, behavioral data come from users' responses to drawing challenge circles. Through extensive analysis and experiments, we chose a set of features that are non-shareable and non-emulatable, and developed a verification algorithm that can successfully authenticate users with overwhelming probability. We studied the effect of database size on verification error and showed that verification error increases with database size. We introduced the notion of scalability of BA systems, which requires the error probability of the system to remain (almost) the same as the profile database grows, and proposed personalization of verification to achieve scalability. To estimate information in BA systems, we used Biometric Information (BI) and Biometric System Entropy (BSE), two different but related approaches to measuring information in biometric-based systems, and studied the applicability of these measures to BA systems. For cryptographic applications, we proposed BAVault, a fuzzy vault based on the profiles in BA systems that can protect a secret key (message) of reasonable length. BAVault ensures profile privacy even when the key is known. For profile privacy in profile databases and privacy-preserving verification, we proposed a non-cryptographic approach that uses an efficient profile transformation, random projection, to project a profile (or verification data) into a lower-dimensional space and ensure its privacy. The verification is done in the transformed domain using a similar verification algorithm. Finally, we show an attack on BA systems when the verification algorithm uses the outputs of the classifier for the verification decision: to impersonate a user of the BA system, the attacker utilizes the verification algorithm's leakage about the output of the classifier. In all the above cases, we implemented our proposed approaches and evaluated their performance.

Item Open Access
Defending against Link Quality Routing Attacks in Wireless Sensor Networks (2011)
Hegazy, Islam; Safavi-Naini, Reihaneh; Williamson, Carey

Item Open Access
DGMT: A Fully Dynamic Hash-Based Group Signature (2023-07)
Erfanian Azad Soltan, Aylar; Safavi-Naini, Reihaneh; Yanushkevich, Svetlana; Henry, Ryan

With the rapid development of quantum technologies, quantum-safe cryptographic schemes have attracted significant attention. Shor's quantum algorithms for the discrete logarithm (DL) and integer factorization (IF) problems will break currently used public-key algorithms, including RSA (Rivest-Shamir-Adleman), DH (Diffie-Hellman) key agreement, DSA (Digital Signature Algorithm) and ECC (Elliptic Curve Cryptography), whose security depends on the hardness of these problems. Therefore, applications and protocols must evolve to be quantum-secure in the presence of quantum computers. It is believed that symmetric encryption and hash functions are resilient to attacks by a quantum adversary. Therefore, signature schemes have been modified so that their security relies on hash functions instead of the aforementioned hard mathematical problems.
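To illustrate why a signature can rest on hash functions alone, here is the textbook Lamport one-time signature in a few lines of Python. It is a standard construction shown only for orientation; it is not the DGMT scheme, which builds a fully dynamic group signature on top of hash-based components.

```python
# Textbook Lamport one-time signature: each key pair signs ONE message.
import hashlib
import os

H = lambda data: hashlib.sha256(data).digest()

def keygen():
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(256)]
    pk = [[H(x0), H(x1)] for x0, x1 in sk]
    return sk, pk

def msg_bits(msg: bytes):
    digest = int.from_bytes(H(msg), "big")
    return [(digest >> i) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one secret preimage per message-digest bit.
    return [sk[i][b] for i, b in enumerate(msg_bits(msg))]

def verify(pk, msg: bytes, sig):
    return all(H(s) == pk[i][b]
               for i, (b, s) in enumerate(zip(msg_bits(msg), sig)))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig) and not verify(pk, b"tampered", sig)
```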
In this thesis, we consider post-quantum digital group signatures, whose security depends on one-way functions. In practice, these functions are replaced by cryptographic hash functions, which are expected to remain secure in the presence of quantum computers. A group signature is a signature scheme that allows a group member to anonymously sign messages on behalf of the group, while anonymity can be annulled by an opening authority or group manager. In fully dynamic group signature schemes, new members can be added to the group after the initialization phase and existing members can be revoked from the group. The primary aim of this thesis is twofold. Firstly, it conducts a comprehensive analysis of various hash-based group signature schemes, taking into consideration their security models, properties, and limitations. Moreover, this thesis contributes a novel fully dynamic group signature scheme, named DGMT, which addresses the limitations of previous schemes; we prove its security (unforgeability, anonymity, and traceability) and provide the first complete implementation of the scheme. Secondly, the thesis reviews and evaluates various revocation methods with the objective of identifying potential ways to improve the revocation method used in the proposed scheme.

Item Open Access
Digital Rights Management (2011-08-31)
Safavi-Naini, Reihaneh; Sheppard, Nicholas Paul

Digital rights management allows information owners to control the use and dissemination of electronic documents via a machine-readable licence. Documents are distributed in a protected form such that they may only be used within trusted environments, and only in accordance with the terms and conditions stated in the licence. Digital rights management has found uses in protecting copyrighted audio-visual productions, private personal information, and companies' trade secrets and intellectual property. This chapter describes a general model of digital rights management together with the technologies used to implement each component of a digital rights management system, and describes how digital rights management can be applied to secure the distribution of electronic information in a variety of contexts.

Item Open Access
A Framework for Expressing and Enforcing Purpose-Based Privacy Policies (2013-01-28)
Jafari, Mohammad; Fong, Philip; Safavi-Naini, Reihaneh; Barker, Ken

Purpose is a key concept in privacy policies and has been mentioned in major privacy laws and regulations. Although some models have been proposed for enforcing purpose-based policies, little has been done in defining formal semantics for purpose, and therefore an effective enforcement mechanism for such policies has remained a challenge. In this paper, we develop a framework for formalizing and enforcing purpose-based privacy policies. Purpose is formally defined as the dynamic situation of an action within the network of inter-related actions in the system. Accordingly, we propose a modal-logic language for formally expressing constraints about the purposes of actions, which can be used to model purpose-based policies. The semantics of this language are defined over an abstract model of activities in the system, which is directly derivable from business processes. Based on this formal framework, we discuss some properties of purpose and show how some well-known, as well as new, forms of purpose constraints can be formalized using the proposed language.
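As a deliberately crude rendering of the intuition that purpose is determined by an action's place among inter-related actions (and not the paper's modal-logic semantics), one can model a business process as a directed graph of actions and treat an action as serving a purpose if that purpose is reachable from it:

```python
# Crude illustration: an action "serves" purpose `goal` if `goal` is
# reachable from it in a hypothetical process graph.  This captures only
# the intuition of purpose-from-context, not the paper's formal semantics.
from collections import deque

workflow = {  # hypothetical process graph: action -> follow-up actions
    "collect_address": ["ship_order", "marketing_mailout"],
    "ship_order": [],
    "marketing_mailout": [],
}

def serves_purpose(action: str, goal: str, graph: dict) -> bool:
    seen, queue = {action}, deque([action])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A monitor enforcing "collect_address only for ship_order" would flag this
# workflow, because a disallowed purpose is also reachable:
print(serves_purpose("collect_address", "ship_order", workflow))         # True
print(serves_purpose("collect_address", "marketing_mailout", workflow))  # True
```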
We also show how purpose-based constraints can be tied to other access control policies in the system. Finally, we present a model-checking algorithm for verifying whether a given state of the system complies with a given set of policies, followed by a discussion of how this can be used in an actual implementation of a purpose reference monitor.

Item Open Access
In-region Location Verification Using Distance Bounding (2016)
Akand, Md Mamunur Rashid; Safavi-Naini, Reihaneh; Fong, Philip; Jacobson, Michael J. Jr

Location-based services have grown rapidly in recent years. Determining the location of a service user, however, appears to come with two seemingly contradictory requirements: on one hand, the location information needs to be accurate to a certain level; on the other hand, the user may want their location information to remain private. The first problem, determining the location of a user, is challenging in untrusted settings. An untrusted user may always claim a false location to gain services from a service provider. Sometimes the user may want to claim a shorter distance, and other times a longer one, based on the requirements of the service provider. One of the prominent mechanisms used over the last two decades to validate such claims is the distance bounding protocol. A distance (upper) bounding protocol is used to verify that a user (prover) is no farther than a given distance from the verifier. Distance upper bounding protocols have a number of applications, including secure localization, secure location verification, and authentication. More recent work on distance bounding introduces the dual problem of verifying that a prover is no closer than a given distance to the verifier. The proposed protocol, named distance lower bounding, is applicable in scenarios where privileges are provided to users located far away from the verifier. We propose Distance Range Bounding (DRB) as a new problem that requires a prover to show that its distance from the verifier is between two bounds: a given upper bound and a given lower bound. We provide a formal model that captures the security requirements in this scenario. We design a distance range bounding protocol and prove its security with respect to our model. Our protocol is based on two secure distance bounding protocols: a distance upper bounding protocol and a distance lower bounding protocol. We combine the two protocols in a way that preserves the security properties of the originals and has provable security in the DRB model. The distance range obtained by our proposed protocol can be used in localization algorithms. We use the protocol to design an efficient and robust in-region verification protocol, where the goal is to verify whether a user is located within a bounded area or zone. An upfront requirement of location-based services is the users' desire not to reveal their exact location for privacy reasons. In-region location verification systems verify whether a user is within a region. Our second contribution is the design of a privacy-enhanced location verification system that uses an in-region location verification approach for a given policy area and verifies whether a location claim is from within the area. The novelty of our work is to use distance range bounding to construct a pseudo-rectangle (P-rectangle) that provides the best coverage of the area, and to verify the location claim with respect to the P-rectangle.
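The timing idea behind a distance (upper) bounding check can be sketched in a few lines: the verifier times a challenge-response round trip, and since the reply cannot travel faster than light, the round-trip time bounds how far away the prover can be. Real protocols use many rapid single-bit rounds and cryptographic binding of the responses; the snippet below shows only the arithmetic, with a hypothetical send_challenge callback.

```python
# Simplified single-round distance (upper) bounding check.  Real protocols
# use many rapid single-bit exchanges plus cryptographic binding of the
# responses; this shows only the timing arithmetic.
import secrets
import time

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def measured_distance_bound(send_challenge, processing_delay_s: float) -> float:
    """Upper bound (metres) on the prover's distance from the verifier."""
    challenge = secrets.token_bytes(16)
    start = time.perf_counter()
    _response = send_challenge(challenge)   # round trip to the prover
    rtt = time.perf_counter() - start
    one_way = max(rtt - processing_delay_s, 0.0) / 2
    return one_way * SPEED_OF_LIGHT

def accept_claim(claimed_bound_m: float, measured_bound_m: float) -> bool:
    # Accept only if the prover cannot be farther away than it claims.
    return measured_bound_m <= claimed_bound_m
```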
We define the error in the verification decision, and show that it can be reduced by subdividing the area and using multiple rectangles to cover it. We analyze the privacy of the system against an adversary who monitors the radio communication and uses it to infer the location of the prover, and we provide methods of protecting against this attack. We discuss our results and propose directions for future research.

Item Open Access
New Notions of Secrecy and User Generated Randomness in Cryptography (2014-09-30)
Alimomeni, Mohsen; Safavi-Naini, Reihaneh

Randomness plays a central role in computer science and, in particular, in cryptography. Almost all cryptographic primitives depend crucially on randomness because randomness and unpredictability in secret keys provide the means for security. Usually one assumes that perfect randomness, a sequence of independently and uniformly distributed bits, is accessible to algorithms. This is a strong assumption. Physical sources of randomness are neither uniformly random nor guaranteed to produce independent bits. Therefore, the aim of this thesis is to start from a realistic model of randomness, investigate notions of secrecy and their randomness requirements, and finally find practical methods for generating randomness that matches the requirements of cryptographic primitives. We consider a model of random sources in which the source output follows one distribution from a set of possible distributions, each with the property that the maximum probability of any symbol is bounded and cannot be arbitrarily close to 1. This model does not assume independence or uniformity of the output symbols, and is considered a realistic model of randomness. From this point, the thesis can be divided into two main parts. In the first part, considering various notions of information-theoretic secrecy, a fundamental problem is to find the properties of randomness needed to achieve security under these notions. Traditional cryptographic protocols simply assume perfect randomness and build on this assumption. We explore results showing that secrecy cannot be based on imperfect randomness that is not uniform or independent. Thus a line of work attempts to relax notions of secrecy in such a way that they can be achieved with non-perfect sources and possibly require smaller key sizes, while still matching real-life applications. An important work in this context is entropic security, where the key can be smaller than the message, depending on the message distribution. Inspired by this, we propose two relaxed notions of secrecy that are motivated by practical applications. In the first notion, motivated by an application in biometric authentication, we propose guessing secrecy, in which the probability that a computationally unbounded adversary with the best strategy guesses the message remains the same whether or not a ciphertext is given. We compare the randomness requirements of guessing secrecy with stronger notions and show that in some cases, such as key length, the requirements are the same. For key distributions, however, we found a family of distributions that provide guessing secrecy but not perfect secrecy. In the second notion, we investigate the randomness requirements of multiple-message encryption. Considering a natural extension of the secrecy definition to multiple messages, we show that independent keys are needed to encrypt each message.
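To make the guessing-secrecy notion above concrete: the adversary's best single guess of the message succeeds with probability max_m P(m), and a scheme is guessing-secret if this maximum does not change when conditioning on the ciphertext. The toy joint distribution below is made up for illustration and deliberately fails the test.

```python
# Toy check of guessing secrecy: compare the best guessing probability for
# the message before and after seeing the ciphertext (made-up numbers).
from collections import defaultdict

def guessing_prob(dist):
    """max_m P(m): success probability of the adversary's best guess."""
    return max(dist.values())

joint = {("m0", "c0"): 0.3, ("m0", "c1"): 0.2,   # hypothetical P(message, ciphertext)
         ("m1", "c0"): 0.2, ("m1", "c1"): 0.3}

prior = defaultdict(float)
for (m, _), p in joint.items():
    prior[m] += p

def posterior(c):
    p_c = sum(p for (_, cc), p in joint.items() if cc == c)
    return {m: p / p_c for (m, cc), p in joint.items() if cc == c}

print(guessing_prob(prior))            # 0.5
print(guessing_prob(posterior("c0")))  # 0.6: the ciphertext helps the guess,
print(guessing_prob(posterior("c1")))  # 0.6  so this scheme is NOT guessing-secret
```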
We then propose a relaxed notion in which the security of the last message is more important than that of past messages, while the leakage of past messages is bounded using entropic security. Under this assumption, we achieve a key length that is smaller than for indistinguishability and comparable to that for entropic security. This notion has applications such as location privacy. In the second part of the thesis, since secrecy crucially depends on perfect randomness, we investigate how perfect randomness can be practically generated, specifically from human game-play. Unlike many random number generators that assume independent or uniform random bits in the random source, we base our work on the realistic model of randomness. Our main observation is that human game-play has an element of randomness coming from errors in play, which are the main entertaining factor of the game. We also observed that game-play can distinguish among a group of people if the right features are collected. We incrementally changed our game design until the distinguishability among a small population was maximized, and then ran the experiments required to show the viability of this approach over a larger population. This approach can also provide a hard-to-delegate authentication property, where a human cannot emulate the behavior of another human even given statistical information about their game-play.

Item Open Access
Path Hopping: An MTD Strategy for Long-Term Quantum-Safe Communication (2018-05-07)
Safavi-Naini, Reihaneh; Poostindouz, Alireza; Lisy, Viliam

Moving target defense (MTD) strategies have been widely studied for securing computer systems. We consider using MTD strategies to provide long-term cryptographic security for message transmission against an eavesdropping adversary who has access to a quantum computer. In such a setting, today's widely used cryptographic systems, including the Diffie-Hellman key agreement protocol and the RSA cryptosystem, will be insecure, and alternative solutions are needed. We use a physical assumption, the existence of multiple communication paths between the sender and the receiver, as the basis of security, and propose a cryptographic system that uses this assumption and an MTD strategy to guarantee efficient long-term information-theoretic security even when only a single path is not eavesdropped. Following the approach of Maleki et al., we model the system using a Markov chain, derive its transition probabilities, propose two security measures, and prove results that show how to calculate these measures using the transition probabilities. We define two types of attackers, which we call risk-taking and risk-averse, and compute our proposed measures for the two types of adversaries for a concrete MTD strategy. We use numerical analysis to study trade-offs between system parameters, discuss our results, and propose directions for future research.

Item Open Access
Privacy Consensus in Anonymization Systems Via Game Theory (2012-03-01)
Karimi Adl, Rosa; Askari, Mina; Barker, Ken; Safavi-Naini, Reihaneh

Privacy protection is a fundamental concern when personal data is collected, stored, and published. Several anonymization methods have been proposed to protect individuals' privacy before data publishing. Each anonymization method has at least one parameter to adjust the level of privacy protection. Choosing a desirable level of privacy protection is a crucial decision because it affects both the volume and the usability of the collected data.
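As a concrete example of such a parameter, the k in k-anonymity requires every combination of quasi-identifier values to occur in at least k released records; raising k strengthens privacy but forces coarser generalization. A minimal check over a made-up table:

```python
# Minimal k-anonymity check: every quasi-identifier combination must
# appear in at least k records of the released table.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(c >= k for c in counts.values())

table = [  # hypothetical generalized records
    {"age": "30-39", "zip": "T2N*", "diagnosis": "flu"},
    {"age": "30-39", "zip": "T2N*", "diagnosis": "asthma"},
    {"age": "40-49", "zip": "T3A*", "diagnosis": "flu"},
    {"age": "40-49", "zip": "T3A*", "diagnosis": "diabetes"},
]

print(is_k_anonymous(table, ["age", "zip"], k=2))  # True
print(is_k_anonymous(table, ["age", "zip"], k=3))  # False
```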
In this paper, we demonstrate how to use game theory to model the different and conflicting needs of the parties involved in making such a decision. We describe a general approach to solving such games and elaborate the procedure using k-anonymity as a sample anonymization method. Our model provides a generic framework to find stable values for privacy parameters within each anonymization method, to recognize the characteristics of each anonymization method, and to compare different anonymization methods to distinguish the settings that make one method more appealing than the others.

Item Open Access
Random Number Generation using Human Gameplay (2016)
Sharifian, Setareh; Safavi-Naini, Reihaneh

Randomness is one of the most important research areas in computer science and, in particular, in cryptography. The security of almost all cryptosystems relies on random keys. Unfortunately, perfect sources of randomness are not easily accessible. However, True Random Number Generators (TRNGs) generate almost-random strings from non-perfect random sequences. A TRNG algorithm consists of an entropy source and an extractor. In this thesis, a TRNG is proposed in which a human player's input in a two-player game is used as both the entropy source and the random seed required by the extractor. This means that the proposed TRNG depends only on the user's inputs. The thesis contains the theoretical foundation of the approach and the design and implementation of the corresponding game. To validate the theory, we designed and implemented a game and performed user studies. The results of our experiments support the effectiveness of the proposed method in generating high-quality randomness.

Item Open Access
Secure Communication over Adversarial Channel (2015-07-24)
Wang, Pengwei; Safavi-Naini, Reihaneh

Digital communications play an important role in the modern world, and it is crucial to consider the security problems associated with network communication. This dissertation explores the use of physical-layer characteristics of communication channels and network multipath to build secure and reliable communication in an adversarial setting. We investigate four adversarial models: the limited view adversary channel, the adversarial wiretap channel, the adversarial wiretap channel with public discussion, and Secure Message Transmission. The first three models are about secure communication using physical-layer characteristics, and the last is about using network multipath for communication. We first consider secure and reliable communication over a wiretap channel with an active adversary: an adversarial channel model in which the adversary is able to eavesdrop on the communication between the sender and the receiver, and also to corrupt the communication by adding adversarial noise. The limited view adversary model focuses on reliable communication over this channel, and the construction of a limited view adversary code achieves reliable communication in this setting. The adversarial wiretap channel model studies secure and reliable transmission over this adversarial channel. We obtain an upper bound on the capacity of this channel, and construct an adversarial wiretap code that provides secure and reliable communication over it. By allowing the communicants to have access to a public discussion channel, secure communication becomes possible over the adversarial wiretap channel for a wider range of parameters. We then consider Secure Message Transmission in networks.
We propose a new construction for secure message transmission protocols using a list-decodable code and a message authentication code. Our protocol has optimal transmission rate and provides the highest reliability among all known comparable protocols.

Item Open Access
A Secure Electronic Healthcare Record Infrastructure in the Digital Rights Management Model (2009-12-02)
Sheppard, Nicholas; Safavi-Naini, Reihaneh; Jafari, Mohammad

Electronic healthcare record systems promise to increase the efficiency and effectiveness of healthcare systems by ensuring that healthcare workers can get timely access to the correct and complete information that they require in order to provide good health services to their patients. Electronic healthcare systems have been investigated in many countries, and numerous research journals and conferences are devoted to their design and evaluation. Greater distribution of information through an electronic healthcare system brings with it a risk that patients' information will be misused, resulting in invasions of privacy and/or unfair discrimination on the basis of patients' medical histories. Security and privacy therefore form an important part of any electronic healthcare system, and numerous designs for security and privacy in the healthcare space have been proposed over the years [4, 5, 10, 15, 18, 19, 20, 21, 23, 43, 45, 50]. Systems for controlling access to sensitive information, both in a healthcare context and in others, are typically designed to enforce the principle of least privilege, that is, the principle that the human users of a system should have access to the minimum amount of information required to carry out their assigned job. This principle aims to minimise the potential for information to be misused, without interfering with people's ability to do their jobs. In a privacy context, the principle of consent is widely used in privacy law to restrict the disclosure of sensitive information according to the wishes of the subject of that information. Electronic consent (often shortened to "e-consent"), in particular, allows the subject of some electronic information to permit or deny the disclosure of that information to particular people in particular circumstances [12]. Electronic consent systems have been proposed as a method of controlling the disclosure of electronic healthcare records [3, 34, 35, 44, 49, 53], and (less frequently) for other kinds of personal information in electronic commerce contexts [6, 25, 28]. Electronic consent systems bear some resemblance to digital rights management systems. Digital rights management is best known for its use in the protection of intellectual property [31], but more recently has also been applied to the protection of personal information [26, 47]. Digital rights management technology allows information owners to control the distribution and use of their information by describing a policy in a machine-readable licence. Information is distributed in a protected form such that it can only be accessed by special DRM agents that are trusted to comply with the terms specified in licences. Petković et al. examine the potential for digital rights management technology in securing electronic healthcare records [40]. They argue that digital rights management technologies already provide many of the features desired in a secure electronic healthcare system, in that they can provide persistent and homogeneous protection of information even when it is disseminated throughout a distributed healthcare system.
However, they additionally identify a number of points on which existing digital rights management systems (specifically, those originally designed for managing the distribution of sensitive documents within corporate enterprises) do not meet these needs, including: the parties that access and manipulate documents may come from many different domains, and it is difficult to predict in advance who these parties might be; the ownership of data is not clearly defined, as it is shared between healthcare workers and patients; access rights are highly context-dependent and are difficult to determine automatically (for example, is a request an emergency?); small fragments of records (and not just whole documents, as is usually the case in intellectual property protection) may be critical; the membership of roles can change very quickly; healthcare data may be used for research purposes in an anonymised form; and healthcare data is prone to numerous inference channels. In the present document, we describe one possible implementation of a secure electronic healthcare infrastructure modelled on the digital rights management approach to privacy protection [26, 47] and workflow-based access control [2, 24, 45]. Our proposal attempts to address several of the points identified by Petković et al., as well as other issues identified by our own research. While many of the features of the proposed system could also be provided by an access control system and/or electronic consent system such as those proposed in earlier work, the proposed system additionally allows for persistent protection of information throughout the global electronic healthcare record infrastructure, local healthcare facilities and mobile healthcare workers; highly expressive consent directives that can be enforced in an automated fashion; and information flows that cross organisational boundaries. Anonymisation and inference channels may additionally be addressed by other work in the iCore Information Security Lab. In addition to our general application of digital rights management in a healthcare context, we introduce some new techniques with wider applications in digital rights management and access control, including the use of workflow information to provide fine control over the purposes for which rights-managed data is used, and the ability to transfer the execution of a task from one device to another (known as session mobility [46]) within the confines of a digital rights management system.

Item Open Access
Secure Smart Contract-based Computation (Verifiable computation, Fair two-party protocols, and Resource sharing) (2024-02-15)
Avizheh, Sepideh; Safavi-Naini, Reihaneh; Jacobson Jr., Michael John; Henry, Ryan; Far, Behrouz; Pieprzyk, Josef

A smart contract is a trusted computer program that runs on the decentralized computer that underlies a blockchain. Smart contracts are part of Web3 technology, the next generation of the Internet, and they enable distributed applications over the Internet. Smart contracts can remove intermediaries in real-world systems and have the promise of revolutionizing industries and processes in healthcare, retail, banking, government, and many more. Because of their trusted execution, smart contracts have been used as a trusted third party or referee in cryptographic protocols. They have also been used to automate processes and seamlessly incorporate cryptocurrency in payments.
In this thesis, we consider the application of the smart contract as a trusted (semi-honest) third party/referee in the following problems: verifiable computation using refereed delegation of computation, fair two-party protocols that include fair private set intersection and fair exchange, and resource sharing. We show that the direct replacement of the trusted third party/referee with the smart contract can expose the protocols to new threats and attacks. We model the security of each protocol, analyze the security of the existing ones, propose new protocols that can achieve the required security guarantees in the smart contract setting, provide proof-of-concept implementations, and evaluate their performance. Our results include both secure smart contract-based cryptographic protocols and systems. We mainly propose formal models and descriptions in the real-world/ideal-world paradigm for the cryptographic protocols. We also look into privacy in the smart contract setting. Smart contracts are transparent, and interactions with them take place over public communication channels; thus, ensuring the privacy of the parties' inputs and messages becomes a significant challenge in designing the protocols. We lay the foundations to define and capture privacy for a smart contract and use them to show the privacy of our cryptographic protocols.

Item Open Access
Social Control and Interactivity in Anonymous Public Events (2024-09-05)
Rahman, Mushfekur; Fong, Philip W. L.; Safavi-Naini, Reihaneh; De Carli, Lorenzo

Online event hosting platforms, such as Zoom Meetings and Twitch Streams, have revolutionized the way we socialize with one another. These platforms offer a rich set of interactive features, such as live chat and gestures, enabling dynamic and engaging social events. In public events, however, participants are not well-known entities originating from the same institution, and thus traditional access control fails to provide means for maintaining order without disrupting interactivity. Zoombombing and cyberbullying in Twitch streams are symptoms of this dilemma. The design of the aforementioned event hosting systems thus resorts to social control mechanisms that allow moderators to monitor the social interactions of the participants and respond to disorderly behavior in real time. The designer of an event hosting system needs to make sure that social control mechanisms preserve interactivity expectations. In this work, we introduce HIPE (Highly Interactive Public Event), a framework for modelling social control mechanisms, articulating interactivity expectations, and verifying whether social control interferes with interactivity. We catalogued four classes of social control mechanisms that can be reused in the design of event hosting systems, namely sanction, remedy, containment, and retaliation. Additionally, we formulated a 2-safety hyperproperty characterization of interactivity called (a, p)-interactivity and a stricter version, strong (a, p)-interactivity, to express the degree of interactivity expected of an event hosting system. Furthermore, we designed model checking algorithms for verifying both (a, p)-interactivity and strong (a, p)-interactivity. An empirical case study has been conducted to illustrate the interplay between social control and interactivity, as well as to evaluate the performance of our model checking algorithms.
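As a toy rendering of one of the catalogued mechanism classes, the sketch below shows a sanction (muting a participant) while messages from other participants continue to be delivered, which is the kind of expectation the interactivity properties are meant to capture. The names and structure are illustrative, not the HIPE framework.

```python
# Toy "sanction" mechanism: a moderator mutes a participant; messages from
# non-muted participants are still delivered (interactivity preserved).
class Event:
    def __init__(self):
        self.participants = set()
        self.muted = set()
        self.log = []

    def join(self, user):
        self.participants.add(user)

    def sanction_mute(self, user):
        self.muted.add(user)                 # social control in real time

    def post(self, user, text):
        if user in self.participants and user not in self.muted:
            self.log.append((user, text))    # delivered to the event
            return True
        return False                         # suppressed

event = Event()
for u in ("host", "guest", "troll"):
    event.join(u)
event.sanction_mute("troll")
assert event.post("guest", "hello") is True   # interactivity preserved
assert event.post("troll", "spam") is False   # sanction enforced
```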
To the best of our knowledge, this is the first work to formally study the balancing of social control and interactivity in anonymous public events.

Item Open Access
Users Tracing in Online Text Systems (2016)
Le, Hoi; Safavi-Naini, Reihaneh

Privacy in online systems, including social networks and specialized websites such as review systems and movie forums, has become a primary concern for the people who use them. Users of these websites must register accounts and input personal information, which may be directly related to their identities. Their reviews, tweets, comments, or chat messages provide more information about them through their writing characteristics. This threatens to reveal their identities and other personal information. A patient's records, for example, may need to be accessible for research purposes or be provided to a third party, while the patient's identity and health status must remain protected. Current methods provide tools to eliminate portions of text in the records that can be used to infer such sensitive information. We provide a new approach to selecting the parts of the text that must be removed. The novelty of this approach is the use of information-theoretic measures to capture the definition of sensitive inference. Using this approach, we almost double the number of detected inferences compared to existing state-of-the-art systems. Writing characteristics can also be used to identify users, and this information can be used to trace users' activities across websites by performing writing-style matching. To protect users from being traced, obfuscating their writing styles is necessary; however, this is not an easy task to accomplish. In this thesis, we show that there are security flaws in current work and design a writing-style obfuscation algorithm that has a number of important security properties. As stylometry techniques have been extended to new domains such as tweets, comments, chat messages and code, the same privacy concerns exist in both traditional and new domains. A number of challenges exist; for example, authors can be traced or identified across domains. We have analysed the privacy of multi-user Twitter accounts and showed that authors can be recognized using data from other domains, such as blogs.

Item Open Access
Utility of Knowledge Discovered from Sanitized Data (2008-09-30)
Sramka, Michal; Safavi-Naini, Reihaneh; Denzinger, Jorg; Askari, Mina; Gao, Jie

While much attention has been paid to data sanitization methods with the aim of protecting users' privacy, far less emphasis has been put on the usefulness of the sanitized data from the viewpoint of knowledge discovery systems. We consider this question and ask whether sanitized data can be used to obtain knowledge that is not defined at the time of the sanitization. We propose a utility function for knowledge discovery algorithms, which quantifies the value of the knowledge from the perspective of the users of the knowledge. We then use this utility function to evaluate the usefulness of the extracted knowledge when knowledge building is performed over the original data, and compare it to the case when knowledge building is performed over the sanitized data. Our experiments use an existing cooperative learning model of knowledge discovery and medical data, anonymized and perturbed using two widely known sanitization techniques, ε-differential privacy and k-anonymity.
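For orientation, ε-differential privacy is commonly enforced by perturbing query answers, for example by adding Laplace noise with scale sensitivity/ε to a count, while k-anonymity generalizes quasi-identifiers as sketched earlier. The snippet below is the standard Laplace mechanism on made-up records, not the paper's experimental setup.

```python
# Standard Laplace mechanism for an epsilon-differentially-private count.
import numpy as np

def dp_count(records, predicate, epsilon, sensitivity=1.0):
    """Release a noisy count satisfying epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

patients = [{"age": 34, "diagnosis": "flu"},       # made-up records
            {"age": 51, "diagnosis": "flu"},
            {"age": 47, "diagnosis": "asthma"}]
print(dp_count(patients, lambda r: r["diagnosis"] == "flu", epsilon=0.5))
```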
Our experimental results show that although the utility of sanitized data can be drastically reduced and in some cases completely lost, there are cases where the utility can be preserved. This confirms our strategy of looking at triples consisting of a utility function, a sanitization mechanism, and a knowledge discovery algorithm that are useful in practice. We categorize a few instances of such triples based on the usefulness obtained from experiments over a single database of medical records. We discuss our results and show directions for future work.

Item Open Access
Utilizing Human Cognitive Abilities in User Identification and CAPTCHA (2015-01-21)
Galib, Asadullah Al; Safavi-Naini, Reihaneh

Secure authentication is necessary for everyday applications, such as logging in to a personal computer, making a financial transaction or boarding an aircraft. We present a novel approach to user authentication in which biometric data related to human cognitive processes, in particular visual search, working memory and the priming effect on automatic processing, are captured and used to identify users. Our proposed system uses a carefully designed Cognitive Task (CT), presented to the user as a game, in order to capture a "cognitive signature" of the user. Our empirical results support the hypothesis that the captured cognitive signatures can identify users across different platforms. Our system provides a proof of concept for cognitive-based biometric authentication. We validate the robustness of our system against impersonation attacks by experienced users, and show that it is hard to reproduce the cognitive signature by mimicking users' gameplay. Ensuring that access to a system is performed only by a human, rather than a computer program or bot, is another important security concern in online services. We propose a new approach to Captcha which estimates human cognitive ability, in particular visual search ability, to differentiate humans from computers. We refer to this Captcha as Movtcha (Matching Objects by Visual Search To Tell Computers and Humans Apart). The design of Movtcha takes into account the analysis of human behavior to minimize noise during cognitive feature estimation. Our empirical results suggest that Movtcha can provide accuracy and usability comparable to other established Captchas. We show that Movtcha is resistant to random, automated, inference and static relay attacks. Our system is suitable for large-scale applications since image selection, challenge generation and response evaluation are automated. Movtcha surpasses language and experience barriers by presenting both the challenge and the response in clear form and can therefore be used by people all across the world.

Item Open Access
Weighted Raft and its Application to Geographically Distributed Servers (2018-09-17)
Liu, Xi; Safavi-Naini, Reihaneh; Fong, Philip W. L.; Reardon, Joel

Modern applications are usually deployed in a distributed architecture in which several servers are involved in providing service to users. Distributed systems offer fault tolerance and increased availability. In some cases, the servers are distributed across different geographic locations to reduce latency and provide better service to end users. Keeping data consistent across servers is a fundamental problem. The Raft consensus algorithm has been proposed to achieve data consistency in the presence of server failures. In Raft, servers elect a leader server that manages client-side requests and updates the data files.
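For orientation, a Raft follower stands for election when it has not heard from a leader within a randomized election timeout; the weighted variant described next biases this timeout so that better-placed servers tend to time out, and therefore run for leader, first. The constants and the weighting rule below are illustrative assumptions, not values from the thesis.

```python
# Raft-style randomized election timeout, with an optional weight that
# shortens the timeout for servers we would prefer as leader.
# Constants and the weighting rule are illustrative assumptions.
import random

BASE_TIMEOUT_MS = 150   # lower bound commonly used in Raft descriptions
JITTER_MS = 150         # randomization window to avoid split votes

def election_timeout_ms(weight: float = 1.0) -> float:
    """weight > 1 favours this server (shorter timeout); weight < 1 penalizes it."""
    timeout = BASE_TIMEOUT_MS + random.uniform(0, JITTER_MS)
    return timeout / max(weight, 1e-6)

# A well-connected server (weight 2.0) usually times out, and stands for
# election, before a poorly placed one (weight 0.5).
print(election_timeout_ms(weight=2.0), election_timeout_ms(weight=0.5))
```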
Each server has the same chance of becoming the leader. Although Raft servers are typically located in a local cluster, in many natural applications Raft can be used in a geographically distributed setting to ensure data consistency. In such a setting, it is natural to allow the server "closest" to a user to respond to that user's request. The notion of "closeness" could capture not only geographic location but also network connectivity and capacity. Motivated by the above challenge, we propose Geo-Raft, an application of the Raft protocol to geographically distributed systems. To reduce the performance cost, we add weights to the servers' election timeouts so that the most suitable server is elected as the leader and serves the users. We propose a model to optimize Geo-Raft performance and present an approach to determine Raft timeouts for a given set of weights. We discuss our results and directions for future work.

Item Open Access
A Workflow Reference Monitor for Enforcing Purpose-Based Policies (2013-09-25)
Jafari, Mohammad; Denzinger, Joerg; Safavi-Naini, Reihaneh; Barker, Ken

Purpose is a key concept in privacy policies. Based on the purpose framework developed in our earlier work [11], we present an access control model for a workflow-based information system in which a workflow reference monitor (WfRM) enforces purpose-based policies. We use a generic access control policy language and show how it can be connected to the purpose modal logic language (PML) to link purpose constraints to access control rules, and how such policies can be enforced. We also present a simple implementation of such a reference monitor based on extending the eXtensible Access Control Markup Language (XACML), a commonly used open standard for access control.
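A toy flavour of the enforcement point such a reference monitor provides: an access request is permitted only if the stated purpose of the requesting action is among the purposes the policy allows for that role and resource. This is a plain illustration, not PML or XACML syntax.

```python
# Toy purpose-aware access check in the spirit of a workflow reference
# monitor: permit an action on a resource only for an allowed purpose.
# Plain illustration only; not PML or XACML.

POLICY = {  # hypothetical rules: (role, resource) -> allowed purposes
    ("nurse", "medical_record"): {"treatment"},
    ("researcher", "medical_record"): {"research"},
}

def permit(role: str, resource: str, purpose: str) -> bool:
    return purpose in POLICY.get((role, resource), set())

assert permit("nurse", "medical_record", "treatment")
assert not permit("nurse", "medical_record", "marketing")
assert not permit("researcher", "medical_record", "treatment")
```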