Applying Reinforcement Learning to Physical Design Routing

dc.contributor.advisor: Behjat, Laleh
dc.contributor.author: Gandhi, Upma
dc.contributor.committeemember: Bustany, Ismail S. K.
dc.contributor.committeemember: Yanushkevich, Svetlana
dc.contributor.committeemember: Taylor, Matthew E.
dc.date: 2024-05
dc.date.accessioned: 2024-04-30T18:09:25Z
dc.date.available: 2024-04-30T18:09:25Z
dc.date.issued: 2024-04-26
dc.description.abstract: Global routing is a significant step in designing an Integrated Circuit (IC). The quality of the global routing solution can affect the IC's efficiency, functionality, and manufacturability. The Rip-up and Re-route (RRR) approach to global routing is widely used to generate solutions iteratively by ripping up nets that cause violations and re-routing them. The main objective of this thesis is to model a complex problem such as global routing as a reinforcement learning (RL) problem and test it on practical-sized routing benchmarks available in academia. The contributions presented in this thesis concentrate on automating the RRR approach by applying RL. The advantage of RL over other machine learning-based models is that it can address the scarcity of data in the global routing field. All contributions model RRR as an RL problem and present frameworks developed to generate solutions. The first contribution is called the β Physical Design Router (β-PD-Router). In this contribution, Router and Ripper agents are trained to resolve short violations on sample-sized circuits with size-independent features. β-PD-Router achieved ∼94% accuracy in resolving violations on unseen netlists. An RL-based Ripper Framework has been developed as the second contribution to train a Ripper agent with the Advantage Actor-Critic RL algorithm to minimize short violations. One of the most recent benchmark suites is used to test the performance of the RL-Ripper. The third contribution discussed in this thesis is the Ripper Framework 2.0, an extension to the Ripper Framework that focuses on improving generalizability to larger designs by applying the Deep Q-Networks RL algorithm. After the first iteration of detailed routing, the guide generated with Ripper Framework 2.0 outperforms a state-of-the-art global router in the number of violations.
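A minimal sketch of the RRR-as-RL framing described in the abstract: an agent repeatedly chooses which violating net to rip up, a re-router acts as the environment, and the reduction in violations serves as the reward. All names below, and the epsilon-greedy stand-in for a learned policy, are illustrative assumptions rather than the frameworks developed in the thesis.

# Hypothetical sketch of the rip-up and re-route (RRR) loop as an RL interaction.
import random
from dataclasses import dataclass

@dataclass
class RoutingState:
    violations_per_net: dict  # net id -> number of routing violations

def choose_net_to_rip(state: RoutingState, epsilon: float = 0.1) -> int:
    """Epsilon-greedy stand-in for a learned ripping policy (e.g., a DQN)."""
    nets = list(state.violations_per_net)
    if random.random() < epsilon:
        return random.choice(nets)                       # explore
    return max(nets, key=state.violations_per_net.get)   # exploit: worst net first

def reroute(state: RoutingState, net: int) -> RoutingState:
    """Placeholder re-router: assume re-routing removes some of the net's violations."""
    updated = dict(state.violations_per_net)
    updated[net] = max(0, updated[net] - random.randint(1, 2))
    return RoutingState(updated)

def rrr_episode(state: RoutingState, max_iters: int = 20) -> RoutingState:
    for _ in range(max_iters):
        total_before = sum(state.violations_per_net.values())
        if total_before == 0:
            break
        net = choose_net_to_rip(state)   # agent action: which net to rip up
        state = reroute(state, net)      # environment transition: re-route the net
        reward = total_before - sum(state.violations_per_net.values())
        # A real framework would store (state, action, reward, next_state) and train here.
    return state

if __name__ == "__main__":
    print(rrr_episode(RoutingState({0: 3, 1: 1, 2: 4})).violations_per_net)

In the frameworks described in the abstract, the stand-in policy above would be replaced by trained agents: Advantage Actor-Critic for the Ripper Framework and Deep Q-Networks for the Ripper Framework 2.0.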
dc.identifier.citation: Gandhi, U. (2024). Applying reinforcement learning to physical design routing (Doctoral thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca.
dc.identifier.uri: https://hdl.handle.net/1880/118559
dc.identifier.uri: https://doi.org/10.11575/PRISM/43401
dc.language.iso: en
dc.publisher.faculty: Graduate Studies
dc.publisher.institution: University of Calgary
dc.rights: University of Calgary graduate students retain copyright ownership and moral rights for their thesis. You may use this material in any way that is permitted by the Copyright Act or through licensing that has been assigned to the document. For uses that are not allowable under copyright legislation or licensing, you are required to seek permission.
dc.subject: Reinforcement Learning
dc.subject: Global Routing
dc.subject: Physical Design
dc.subject.classification: Engineering--Electronics and Electrical
dc.subject.classification: Computer Science
dc.subject.classification: Artificial Intelligence
dc.title: Applying Reinforcement Learning to Physical Design Routing
dc.type: doctoral thesis
thesis.degree.discipline: Engineering – Electrical & Computer
thesis.degree.grantor: University of Calgary
thesis.degree.name: Doctor of Philosophy (PhD)
ucalgary.thesis.accesssetbystudent: I do not require a thesis withhold – my thesis will have open access and can be viewed and downloaded publicly as soon as possible.
Files
Original bundle
ucalgary_2024_gandhi_upma.pdf (16.06 MB, Adobe Portable Document Format)
License bundle
license.txt (2.62 KB, Item-specific license agreed upon to submission)