Browsing by Author "Behjat, Laleh"
Now showing 1 - 20 of 63
Item Open Access
A Comprehensive Capacity Expansion Planning Model for Highly Renewable Integrated Power Systems (2023-05-12)
Parvini, Zohreh; Behjat, Laleh; Fapojuwo, Abraham; Moshirpour, Mohammad
Due to the depletion of conventional energy resources and growing environmental concerns, the trend toward increasing integration of renewable energy resources brings new challenges to power system planning and operation. The fluctuating output of renewable resources is the main concern of system planners seeking to deploy them efficiently. Incorporating a more precise and detailed model of system constraints is essential for dealing with the intermittent and volatile nature of these resources. However, given the many facets and the computational complexity of the capacity expansion problem, it is vital to have a thorough understanding of the constraints that most affect system planning. The unique characteristics of power systems, along with the integration of renewable energy resources and modern technologies such as energy storage, require a detailed model for planning future infrastructure based on the available data. The primary objective of this research is to investigate and evaluate various aspects of power systems and to develop a comprehensive capacity expansion model using linear optimization techniques. The thesis includes the development of a data set for long-term planning purposes, a co-optimization expansion planning (CEP) model for identifying optimal transmission and generation expansion, the modeling of storage technology and reserves, and network-size reduction to keep the model tractable. The framework is designed to facilitate the seamless integration of renewable energy sources and to improve the performance of the whole power system, supporting a smooth transition towards a high-renewable energy future. The tool is intended to provide system planners and stakeholders in the generation and transmission sectors with insights into future realizations of high-renewable power systems. The model can also be used as a benchmark for future planning studies and adjusted for alternative future assumptions.
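The abstract above describes a co-optimization expansion planning (CEP) model built with linear optimization but does not reproduce its formulation. Purely as a generic illustration of the shape such a model usually takes, and not the thesis's actual formulation, a capacity expansion LP minimizes investment plus operating cost subject to demand balance and capacity limits; all symbols below (investment cost c_i^inv, operating cost c_i^op, new capacity x_i, dispatch g_{i,t}, existing capacity g-bar_i, demand d_t) are placeholder notation.

```latex
% Generic capacity expansion LP (illustrative placeholder notation only;
% this is not the CEP formulation developed in the thesis).
\begin{aligned}
\min_{x \ge 0,\; g \ge 0} \quad & \sum_{i} c^{\mathrm{inv}}_{i}\, x_{i}
    + \sum_{i,t} c^{\mathrm{op}}_{i}\, g_{i,t} \\
\text{s.t.} \quad & \sum_{i} g_{i,t} = d_{t} \quad \forall t
    \qquad \text{(demand balance)} \\
 & g_{i,t} \le \bar{g}_{i} + x_{i} \quad \forall i,\, t
    \qquad \text{(existing plus newly built capacity)}
\end{aligned}
```

Transmission expansion, storage, and reserve requirements would add further variables and constraints of the same linear form.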
Item Open Access
A Computer-Aided Design Assistant Tool for Elementary Linear Circuit Topologies (2012-12-06)
Shahhosseini, Delaram; Belostotski, Leonid; Behjat, Laleh
In this thesis, a CAD tool called the analog design assistant (ADA) is developed to help analog circuit designers find new circuit topologies. First, a methodology to automatically generate all analog circuit topologies containing two or three transistors is developed. For each topology, circuit characteristics, such as DC voltage gain, are calculated. The DC voltage gain of each generated circuit is maximized by formulating and solving an optimization problem. After solving the optimization problem, it is shown that over 5,000 of the 56,000 generated circuits can achieve a DC voltage gain higher than 1. All generated circuit topologies and their corresponding characteristics are stored in a database. A GUI is developed to help analog circuit designers search the database and find new topologies. To demonstrate the capability of ADA in generating new topologies, a previously unknown high-gain amplifier is selected and designed in a 0.13-um standard CMOS technology.

Item Open Access
A fast, congestion based concurrent global routing technique (2006)
Chiang, Andy; Behjat, Laleh

Item Open Access
A Machine Learning Predictor and Corrector Framework to Identify and Resolve VLSI Routing Short Violations (2018-10-24)
Fakheri Tabrizi, Aysa; Behjat, Laleh; Rakai, Logan M.; Yanushkevich, Svetlana N.; Dimitrov, Vassil
The growth of Very Large Scale Integration (VLSI) technology poses new challenges in the design automation of Integrated Circuits (ICs). Routability is one of the most challenging aspects of Electronic Design Automation (EDA) and is faced in two consecutive phases of physical design: placement and routing. During placement, the exact locations of circuit components are determined. During routing, the paths for all of the wires are specified. Routing is performed in two stages: global routing and detailed routing. Many of the violations that occur during the detailed routing stage stem from ignoring the routing rules during placement. Therefore, detecting and preventing routing violations in the placement stage has become critical in reducing the design time and the possibility of failure. In this thesis, Eh?Predictor, a deep learning framework for predicting detailed routing short violations during placement, is proposed. In the development of this predictor, relevant features contributing to routing violations were identified, extracted, and analyzed. A neural network model that can handle imbalanced data was customized to detect these violations using the defined features. The proposed predictor can be integrated into a placement tool and used as a guide during the placement process to reduce the number of shorts occurring in the detailed routing stage. One of the advantages of this technique is that, by using the proposed deep learning-based predictor, global routing is no longer required as frequently; hence, the total runtime for place and route can be significantly reduced. In addition to Eh?Predictor, a detailed routing-aware detailed placement algorithm is developed to improve detailed routability in a relatively short runtime. The proposed technique is referred to as the Detailed Routing-aware Detailed Placer (DrDp). DrDp is a heuristic that aims to reduce local congestion and mitigate routing failure by aligning connected cells where possible at the final stage of the detailed placement process. Experimental results show that Eh?Predictor is able to predict on average 90% of the short violations of previously unseen data with only a 5% false alarm rate while considerably reducing computational time, and that DrDp can effectively improve detailed routing quality in a short runtime with no significant change in detailed placement score or total wirelength.
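The abstract above mentions a neural network customized to handle the heavy class imbalance between tiles with and without short violations, but it does not give the architecture or feature set. The sketch below is only a generic illustration of one common way to weight a rare positive class in a binary classifier; the feature count, layer sizes, and weight value are hypothetical and are not taken from Eh?Predictor.

```python
import torch
import torch.nn as nn

# Hypothetical number of placement-derived features per routing tile
# (e.g., pin density, local net count); illustrative only.
NUM_FEATURES = 16

model = nn.Sequential(
    nn.Linear(NUM_FEATURES, 64),
    nn.ReLU(),
    nn.Linear(64, 1),  # single logit: likelihood of a short violation
)

# Weight the rare positive class (short violations) more heavily; with roughly
# one violating tile per 50 clean tiles, a pos_weight of about 50 is a common choice.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([50.0]))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(features: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch; labels are 1.0 for tiles with shorts, else 0.0."""
    optimizer.zero_grad()
    logits = model(features).squeeze(-1)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```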
Item Open Access
A Multilevel Congestion-Based Global Router (2009-11-17)
Rakai, Logan; Behjat, Laleh; Areibi, Shawki; Terlaky, Tamas
Routing in nanometer technology nodes places an elevated level of importance on low-congestion routing. At the same time, advances in mathematical programming have increased the power available to solve complex problems, such as the routing problem. Hence, new routing methods need to be developed that can combine advanced mathematical programming and modeling techniques to provide low-congestion solutions. In this paper, a hierarchical mathematical programming-based global routing technique that considers congestion is proposed. The main contributions presented in this paper include (i) implementation of congestion estimation based on actual routing solutions rather than purely probabilistic techniques, (ii) development of a congestion-based hierarchy for solving the global routing problem, and (iii) generation of a robust framework for solving the routing problem using mathematical programming techniques. Experimental results illustrate that the proposed global router is capable of reducing congestion and overflow by as much as 36% compared to state-of-the-art mathematical programming models.

Item Open Access
A Net Present Cost Minimization Framework for Wireless Sensor Networks (2016)
Dorling, Kevin; Messier, Geoffrey; Magierowski, Sebastian; Karl, Holger; Ghaderi, Majid; Sesay, Abu-Bakarr; Behjat, Laleh
Minimizing the cost of deploying and operating a wireless sensor network (WSN) involves deciding how to partition a budget between competing expenses such as node hardware, energy, and labour. To determine whether funds should be given to a specific project or invested elsewhere, companies often use interest rates to sum the project's cash flows in terms of present-day dollars. This provides an incentive to defer expenditures when possible and use the returns to reduce future costs. In this thesis, a framework is proposed for minimizing the net present cost (NPC) of a WSN by optimizing the number of, cost of, and time between expenditures. The proposed framework balances competing expenses and defers expenditures when possible. A similar strategy does not appear to be available in the literature, and has likely not been developed in industry, as no commercial WSN operators currently exist. In general, NPC minimization is a non-linear, non-convex optimization problem. However, if the time until the next expenditure is linearly proportional to the cost of the current expenditure, and the number of maintenance cycles is known in advance, the problem becomes convex and can be solved to global optimality. If non-deferrable recurring costs are low, then evenly spacing the expenditures can provide near-optimal results. The NPC minimization framework is most effective when non-deferrable recurring costs, such as labour, are low. High labour costs limit the number of times that a WSN operator can use the returns from investing deferrable costs to decrease future expenditures. This thesis therefore proposes vehicle routing problems (VRPs) to reduce labour costs by delivering nodes with drones. Unlike similar VRPs, drone costs are reduced by reusing vehicles, and low-cost, feasible routes are ensured by modelling energy consumption as a function of drone battery and payload weight. The problems are modelled as mixed integer linear programs (MILPs). As these MILPs are NP-hard, simulated annealing algorithms are proposed for finding sub-optimal solutions to large instances of the problems.
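The discounting idea in the abstract above is standard; for reference only, and in generic notation rather than the thesis's model, the net present cost of a sequence of expenditures C_k incurred at times t_k under an interest rate r is:

```latex
% Standard present-value sum of expenditures (generic notation).
\mathrm{NPC} = \sum_{k=0}^{K} \frac{C_k}{(1+r)^{\,t_k}}
```

Because a later expenditure is divided by a larger discount factor, deferring a cost whenever feasible lowers the NPC, which is the incentive the proposed framework exploits when spacing expenditures.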
Item Open Access
A Pre-Placement Individual Net Length Estimation Model and an Application for Modern Circuits (2011)
Farshidi, Amin; Behjat, Laleh; Westwick, David

Item Open Access
A pre-placement net length estimation technique for mixed-size circuits and a feedback framework for clustering (2009)
Fathi, Bahareh; Behjat, Laleh

Item Open Access
A Visual Tool for Comparing the Life Cycles of Major Energy Sources in Alberta (2017)
Karbalaei, Amir Hassan; Behjat, Laleh; Gates, Ian; Nowicki, Edwin; Moshirpour, Mohammad; Bergerson, Joule
Air pollution has become one of the major challenges with which humanity continues to struggle in modern times. The never-ending demand for energy has increased the production and consumption of different types of energy resources. A major part of this energy demand is met through the use of fossil fuels, which are one of the main sources of air pollution. In this thesis, a simple Life Cycle Assessment (LCA) model was created to represent the greenhouse gas (GHG) emissions of coal, natural gas, and oil sands in Alberta. The model was complemented by a user-friendly, interactive visualization tool, The Greener Alberta, developed using recent software development techniques. The visualization tool enables the general public, especially younger generations, to learn about industry procedures and to compare the GHG emission rates of each energy resource. Finally, research surveys were conducted to verify the effectiveness of The Greener Alberta.

Item Open Access
Advanced Methods for Efficient Digital Signal Processing and Matrix-Based Computations (2018-04-12)
Gomes Coelho, Diego Felipe; Dimitrov, Vassil S.; Behjat, Laleh; Walus, Konrad; Yanushkevich, Svetlana N.; Jacobson, Michael J.
Modern engineering and scientific problems demand a great amount of data processing power. The type of data that needs to be processed varies from application to application. Image processing, genome matching, the simulation of physical phenomena, and cryptography are a few examples of applications that demand processing power. In a wide range of these computationally intensive applications, arithmetic complexity plays an important role and has a direct impact on implementation performance. In this thesis, we present several methods that are novel contributions by the author to computationally intensive problems. The introduced methods reduce the overall computing time, or other relevant hardware and software implementation metrics, by decreasing the arithmetic complexity associated with each task. The results are verified in peer-reviewed papers published in reputable journals. In particular, problems in signal processing, eigenvalue computation, and matrix inversion for radar image classification are considered.
Item Open Access
Analyzing the Role of Theatre in Integrating Immigrants into the Host Society through Communicative Action (2024-07-22)
Asgarian, Saeid; Brubaker, Christine Joanne; Barton, Bruce; Behjat, Laleh
Western societies have been growing and changing rapidly, with an increase in immigration over the last two decades. Every year, many people with different cultures and backgrounds immigrate to modern societies, such as Canadian society. One of the responsibilities of these modern societies is to help immigrants integrate into their new society in a multiplicity of ways. In this research, I analyze how theatre, as a social tool, can help integrate immigrants into a Canadian context, specifically Calgary, Alberta. This research aims to show how theatre can positively affect the integration of immigrants into their new society by leveraging both the social sciences and performance studies. Anchored in the theories of social scientist Jürgen Habermas, I define the terms 'society' and 'modern society' and identify success criteria for integration in said 'modern society.' I then conduct an overview of various definitions of theatre and performance experiences, utilizing the theories of the seminal stage directors Bertolt Brecht, Peter Brook, Jerzy Grotowski, and Augusto Boal. In doing so, I attempt to create a framework and an understanding of the characteristics of this art form that will support my analysis. I propose a relationship between the commonalities in these theatre styles and Jürgen Habermas' theory of communicative action. In addition, I apply this framework further by leveraging the theories of Nelson Goodman and Erving Goffman to illuminate how a theatre group can be a small sample of society in which to practice the theory of communicative action. Finally, using a Practice as Research (PaR) methodology, I share qualitative data obtained from two case study performances I created and directed in my graduate work, Green Key (2023) and Absence (2023), where I demonstrate how I used the theory of communicative action in rehearsals. This research asserts that theatre can significantly impact the audience's lifeworld, awareness, and perspective. In this way, it can benefit the integration of unintegrated groups such as immigrants.

Item Open Access
Applying Reinforcement Learning to Physical Design Routing (2024-04-26)
Gandhi, Upma; Behjat, Laleh; Bustany, Ismail S. K.; Yanushkevich, Svetlana; Taylor, Matthew E.
Global routing is a significant step in designing an Integrated Circuit (IC). The quality of the global routing solution can affect the design's efficiency, functionality, and manufacturability. The Rip-up and Re-route (RRR) approach to global routing is widely used to generate solutions iteratively by ripping up nets that cause violations and re-routing them. The main objective of this thesis is to model a complex problem such as global routing as a reinforcement learning (RL) problem and to test it on practical-sized routing benchmarks available in academia. The contributions presented in this thesis concentrate on automating the RRR approach by applying RL. The advantage of RL over other machine learning-based models is that it can address the scarcity of data in the global routing field. All contributions model RRR as an RL problem and present frameworks developed to generate solutions. The first contribution is called the β Physical Design Router (β-PD-Router). In this contribution, Router and Ripper agents are trained to resolve short violations on sample-sized circuits using size-independent features. β-PD-Router achieved approximately 94% accuracy in resolving violations on unseen netlists. As the second contribution, an RL-based Ripper Framework has been developed to train a Ripper agent with the Advantage Actor-Critic RL algorithm to minimize short violations. One of the most current benchmark suites is used to test the performance of the RL-Ripper. The third contribution discussed in this thesis, called Ripper Framework 2.0, is an extension of the Ripper Framework. It focuses on improving generalizability to larger designs by applying the Deep Q-Network RL algorithm. After the first iteration of detailed routing, the guide generated with Ripper Framework 2.0 outperforms the state-of-the-art global router in the number of violations.
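The thesis above trains its agents with Advantage Actor-Critic and Deep Q-Network algorithms; those implementations are not reproduced here. As a much simpler illustration of the kind of value update underlying such rip-up decisions, the sketch below runs tabular Q-learning where the (abstracted) state is a hashable congestion descriptor and the action is which violating net to rip up; the state encoding, reward, and hyperparameters are hypothetical.

```python
import random
from collections import defaultdict

# Tabular Q-learning sketch for a rip-up decision (illustrative only; the thesis
# uses Advantage Actor-Critic and Deep Q-Network agents, not a lookup table).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
q_table = defaultdict(float)  # maps (state, net_id) -> estimated return

def choose_net(state, candidate_nets):
    """Epsilon-greedy choice of which violating net to rip up next."""
    if random.random() < EPSILON:
        return random.choice(candidate_nets)
    return max(candidate_nets, key=lambda net: q_table[(state, net)])

def update(state, net, reward, next_state, next_candidates):
    """One Q-learning backup after observing the re-route outcome;
    the reward could be, e.g., the reduction in short violations."""
    best_next = max((q_table[(next_state, n)] for n in next_candidates), default=0.0)
    target = reward + GAMMA * best_next
    q_table[(state, net)] += ALPHA * (target - q_table[(state, net)])
```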
Item Open Access
Approaches to the Calibration of Single-Cell Cardiac Models Based on Determinants of Multi-Cellular Electrical Interactions (2021-03-17)
Pouranbarani, Elnaz; Nygren, Anders; Rose, Robert; Behjat, Laleh; Di Martino, Elena; Dubljevic, Stevan
Cardiac single-cell models are often used as building blocks for tissue simulation. Cellular models can successfully reproduce the expected behaviour at the tissue scale, provided that the single cell is accurately modeled. One of the imprecisions of conventional cellular modeling, evident mainly when the models are used at the tissue level, stems from considering only some cellular properties (e.g., action potential (AP) shape) in the calibration/optimization process and ignoring properties that reflect the interconnection of cells. This can result in inaccurate modeling of intercellular electrical communication. Computational models are used in a well-known safety pharmacology paradigm (the Comprehensive in Vitro Proarrhythmia Assay), the goal of which is to evaluate drug effects on the occurrence of proarrhythmia, so accurate characterization of cellular models' properties is of great importance. In this thesis, a cellular multi-objective optimization framework is proposed that considers the fitness of membrane resistance (Rm), an indicator of cellular interconnection, as an optimization objective in addition to the AP. As Rm depends on the transmembrane voltage (Vm) and exhibits singularities for some specific values of Vm, analyses are conducted to carefully select the regions of interest for the proper characterization of Rm. To verify the efficacy of the proposed problem formulation, case studies and comparisons are carried out using human cardiac ventricular models. Afterward, the performance of the proposed framework is analyzed at the tissue level using various tissue configurations: a source-sink configuration, a Purkinje-myocardium configuration, and a transmural APD heterogeneity configuration. Comprehensive statistical analyses suggest that considering Rm in the calibration procedure results in a significant reduction of errors in cardiac tissue simulations. In a subsequent step, due to the variations among tissue simulations, it is proposed to include more essential properties to constrain the calibration problem. To achieve this, a machine learning-based approach is presented. The numerical results show that the proposed method efficiently estimates the base model's parameters. Therefore, using calibrated models as the building blocks of tissue simulation yields accurate replication of the reference behaviour at the tissue scale.
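The abstract above treats Rm as a function of Vm with singularities but does not define it; as background only, membrane resistance is commonly taken as the inverse slope of the membrane current-voltage relation, which makes both the Vm dependence and the singularities explicit (the thesis's exact characterization may differ):

```latex
% Common small-signal definition of membrane resistance (background only).
R_m(V_m) = \left( \frac{\partial I_m}{\partial V_m} \right)^{-1}
```

Wherever the slope of the current-voltage relation approaches zero, Rm diverges, consistent with the abstract's note that the regions of interest in Vm must be selected carefully.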
Item Open Access
Automated Software Testing of Deep Neural Network Programs (2020-09-23)
Vahdat Pour, Maryam; Hemmati, Hadi; Behjat, Laleh; Far, Behrouz Homayoun
Machine Learning (ML) models play an essential role in various applications. In particular, in recent years, deep neural networks (DNNs) have been leveraged in a wide range of application domains. Given such growing adoption, faults in DNN models can raise concerns about their trustworthiness and may cause substantial losses. Therefore, detecting erroneous behaviours in any machine learning system, especially DNNs, is critical. Software testing is a widely used mechanism to detect faults. However, since the exact output of most DNN models is not known for a given input, traditional software testing techniques cannot be directly applied. In the last few years, several papers have proposed testing techniques and adequacy criteria for testing DNNs. This thesis studies three types of DNN testing techniques, using text and image input data. In the first technique, I use Multi Implementation Testing (MIT) to generate a test oracle for finding faulty DNN models. In the second experiment, I compare the best adequacy metric from the coverage-based criteria (Surprise Adequacy) with the best example of the mutation-based criteria (DeepMutation) in terms of their effectiveness for detecting adversarial examples. Finally, in the last experiment, I apply three different test generation techniques (including a novel technique) to the DNN models and compare their performance when the generated test data are used to re-train the models. The results of the first experiment indicate that using MIT as a test oracle can successfully detect faulty programs. In the second study, the results indicate that although the mutation-based metric can show better performance in some experiments, it is sensitive to its parameters and requires hyper-parameter tuning. Finally, the last experiment shows a 17% improvement in F1-score when using the approach proposed in this thesis compared to the original models from the literature.
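The abstract above names Multi Implementation Testing as the oracle-generation technique but does not show how the oracle is assembled. One common way to realize such an oracle, sketched below purely as an illustration and not as the thesis's implementation, is differential testing: run the same input through several independent implementations and flag inputs on which the model under test disagrees with their majority vote. The predict() interface and function name are hypothetical.

```python
from collections import Counter

def mit_style_oracle(sample, model_under_test, reference_models):
    """Differential-testing oracle sketch (illustrative only).

    Each model is assumed to expose a predict() method returning a class label.
    Returns (is_suspicious, majority_label): the input is flagged when the model
    under test disagrees with the majority vote of the reference implementations.
    """
    reference_predictions = [m.predict(sample) for m in reference_models]
    majority_label, _ = Counter(reference_predictions).most_common(1)[0]
    prediction = model_under_test.predict(sample)
    return prediction != majority_label, majority_label
```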
Item Open Access
Behavioral Mapping by Using NLP to Predict Individual Behaviors (2022-11-16)
Jafari, Reyhaneh; H. Far, Behrouz; Behjat, Laleh; Moussavi, Mahmood
"What candidates or team members will do in specific circumstances" has always been an important piece of information for most employees or team leaders to consider when making a decision. It takes a significant amount of time to determine who is the best candidate for a particular job position, and companies are looking for the most efficient method for making this decision. Most of the time, personality assessments are used to identify an individual's character traits. However, regardless of personality type, individuals behave differently in a positive atmosphere than in a stressful one; hence, character traits alone cannot predict behavior. Text analysis and the identification of candidate behaviors (behaviorism) now enable companies to understand how people think, feel, and act in a given situation and then to choose the best candidate for the job from a vast pool of candidates. By leveraging the existing intellectual property data associated with behavioral mapping in AccuMatch Behavior Intelligence, as well as expert data, and by using tools such as Amazon Comprehend Service (ACS), IBM Watson Natural Language Understanding (NLU), and Machine Learning (ML) techniques, various methods have been developed and analyzed to predict how individuals in a team become motivated, what their individual decision reference is, and what their execution style is. This dissertation therefore presents multiple proposed methods for predicting Towards/Away, Internal/External, and Option/Procedure behaviors and discusses the rationale behind the selection of these methods, along with the results obtained.

Item Open Access
Clustering techniques for circuit partitioning and placement problems (2007)
Li, Jianhua; Behjat, Laleh; Jullien, Graham

Item Open Access
Connected and Autonomous Vehicles Trajectory Optimization for an On-Ramp Freeway Merging Segment in a Mixed Vehicular Traffic Environment (2024-02-15)
Hesabi Hesari, Abbas; Kattan, Lina; Behjat, Laleh; Zangeneh, Pouya
Efficient and smooth merging processes on highways are critical for ensuring traffic safety, flow, and network efficiency. While traditional techniques, such as ramp metering and variable speed limits, offer benefits, their ability to optimize highway throughput remains limited. The emergence of connected and automated vehicles (CAVs) in road networks holds promise for enhancing transportation network efficiency and safety. This research develops a novel control algorithm focused on optimizing autonomous vehicle trajectories in a mixed traffic environment on a multilane highway. The objective is to eliminate stop-and-go conditions during merging and to enable smooth, safe merging maneuvers. The merging process is handled by a hierarchical hybrid control framework that consists of three control layers: a top control layer, an intermediate tactical control layer, and a lower operational layer. At the top level, the controller gathers data from the real-time traffic environment and identifies the group of vehicles most affected by the merging maneuver. This information is relayed to the mid-layer, which then establishes a multiphase kinematic model of the system. Utilizing this model, the tactical controller designs a multi-input-multi-output (MIMO) model predictive control (MPC) scheme that optimizes autonomous vehicle trajectories while adhering to various constraints. At the operational level, CAVs employ the optimized trajectories as reference signals for executing the essential longitudinal and lateral maneuvers during merging operations. A PI (Proportional-Integral) control scheme regulates longitudinal maneuvers, while a PID (Proportional-Integral-Derivative) control scheme manages lateral maneuvers; both schemes account for the distinctive vehicle dynamics of each CAV. Through comprehensive simulations encompassing diverse driving scenarios, the hybrid technique demonstrates reliability, robustness, and precision across variable initial conditions. This study also introduces a novel centralized control technique that integrates the control layers of the hybrid control system into a single layer and manages the entire merging process in a continuous motion. Comparing the hybrid and centralized control techniques demonstrates that the hybrid approach has remarkable computational efficiency and higher robustness against model uncertainties and communication disturbances, whereas the centralized controller exhibits better stability and control performance with higher fuel efficiency and passenger comfort.
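The abstract above states that PI and PID loops track the optimized reference trajectories at the operational layer but gives no gains or sampling details. The discrete-time PID below is only a textbook sketch of such a tracking loop; the gains, time step, and signal names are hypothetical, and a PI speed loop would simply set kd to zero.

```python
class PID:
    """Discrete-time PID controller sketch (illustrative gains, not from the thesis)."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, reference: float, measurement: float) -> float:
        """Return the control action for one sample period."""
        error = reference - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: a lateral tracking loop running at 10 Hz with a hypothetical
# lateral-offset reference of 0.0 m and a measured offset of 0.3 m.
lateral_pid = PID(kp=1.2, ki=0.1, kd=0.05, dt=0.1)
steer_cmd = lateral_pid.step(reference=0.0, measurement=0.3)
```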
Item Open Access
Critical Pedagogical Interventions in Engineering: Deconstructing Hierarchical Dualisms to Expand the Narratives of Engineering Education (2024-01-12)
Paul, Robyn Mae; Brennan, Robert; Behjat, Laleh; Eggermont, Marjan; Black, Kerry; Sun, Qiao; Sengupta, Pratim; Lord, Susan
Engineering in the western world is often framed as neutral or apolitical, meaning engineering education trains engineers to take little responsibility for perpetuating society's biases, such as racism, colonialism, and environmental degradation, through our technologies. In this thesis, I argue that, as problem solvers and critical thinkers involved in the world's biggest challenges, it is our ethical responsibility to unmask the hidden belief systems and dominant narratives that currently drive the engineering sector. Within our society, dualisms are embedded across our value systems, such as the dualisms of woman-man, emotion-reason, nature-culture, and social-technical. These dualisms exist as exclusive opposites arranged in a value hierarchy (i.e., man-reason-culture-technical are typically viewed in exclusive opposition to, and valued higher than, woman-emotion-nature-social). This thesis uses the hierarchical dualisms pedagogical framework to bring to light the normative cultures of engineering education and aims to support engineering education communities in increasing their critical consciousness and becoming aware of dominant value systems. Thus, my primary research question is: How do we design practices that unmask the hierarchical dualisms to build expanded narratives of engineering and engineering education? I answer this question by (1) outlining a framework of hierarchical dualisms and dominant narratives, including illustrative case studies; (2) summarizing two pedagogical innovations I designed and implemented to unmask different hierarchical dualisms; and (3) analyzing my own writing for dominant narratives through a discourse analysis. Throughout, this thesis takes a non-traditional research approach to align my methodology with the epistemological assumptions of the research paradigm. I leverage dialogicity, relationality, and storytelling methodologies to describe my journey of doing paradigm-shifting work in the field of engineering education. Overall, this thesis found that through increasing critical consciousness, broadening our systems thinking, engaging in interdisciplinary dialogue, being willing to transcend engineering boundaries, and imagining radical futures, we can create momentum for emergent change that will foster liberatory education. As educators, we can treat students' four years of undergraduate engineering as our great opportunity to radically transform engineering students' way of thinking about technology and design and to give them the skills and tools to radically transform the purpose of engineering.

Item Open Access
Decoupling Methods for the Identification of Polynomial Nonlinear Autoregressive Exogenous Input Models (2020-08)
Karami, Kiana; Westwick, David T.; Behjat, Laleh; Nielsen, Jorgen; Norwicki, Edwin Peter; Bai, Erwei
Developing a mathematical model of the system to be controlled is a significant part of the control design process: the more accurate the model, the more precise the control design can be. Since most of the systems around us behave nonlinearly, linear models are not always adequate, and nonlinear models should be considered. Many different model structures have been used in the literature, such as the Volterra series, block-structured models, state-space representations, and nonlinear input-output models. Each of these structures can represent a large class of nonlinear systems, but each has its own drawbacks. Many are black-box modeling approaches and do not provide any intuition regarding the system. Many also suffer from the curse of dimensionality, becoming overly complex as the severity of the nonlinearity increases. It would be more practical if a nonlinear system could be approximated by a simpler model that is more accessible and understandable. This research focuses on one of the widely used nonlinear input-output models, the polynomial Nonlinear Autoregressive eXogenous input (NARX) model, as it represents a large class of nonlinear systems. A decoupling approach is proposed for polynomial NARX models. This technique replaces the multivariate polynomial that characterizes the NARX model with a decoupled model comprising a mixing matrix followed by a bank of univariate polynomials and a summation. While the proposed decoupling algorithm reduces the number of parameters significantly, performing the decoupling involves a non-convex optimization problem that must be solved iteratively. Different initialization techniques are proposed for this optimization. In addition, identification algorithms are developed in both the prediction error and simulation error minimization frameworks. The decoupling approach is verified on two nonlinear identification benchmark problems and shows promising outcomes: the number of parameters decreases significantly while the model accuracy remains high. Moreover, the decoupled model can provide some insight into the identified system, as it is no longer a black box.
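To make the decoupled structure described in the abstract above concrete, a model of this general form can be written as below. The notation is generic (regressor vector x(t), mixing vectors w_i, univariate polynomials g_i, r branches) and is not necessarily the notation used in the thesis.

```latex
% Generic decoupled polynomial NARX structure (illustrative notation):
% a mixing matrix W = [w_1 ... w_r], a bank of univariate polynomials g_i,
% and a summation of the branch outputs.
x(t) = \begin{bmatrix} y(t-1) & \cdots & y(t-n_y) & u(t-1) & \cdots & u(t-n_u) \end{bmatrix}^{\top},
\qquad
\hat{y}(t) = \sum_{i=1}^{r} g_i\!\left( w_i^{\top} x(t) \right)
```

Each branch applies a single-variable polynomial to one linear combination of past inputs and outputs, which is why the parameter count grows far more slowly than for a full multivariate polynomial in x(t).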
Item Open Access
Degree-based clustering for placement in vlsi physical design (2008)
Huang, Jie; Behjat, Laleh