Browsing by Author "Chan, Sonny"
Now showing 1 - 16 of 16
Item Open Access
Alignment of Freehand 2-D Ultrasound Images for 3-D Volumetric Reconstruction (2018-04-23)
Onyejekwe, Nwanneka Okeoghene; Chan, Sonny; Khorshid, Mohammad; Boyd, Jeffrey Edwin
Ultrasound is an imaging modality used in many medical procedures, including the diagnosis of diseases in neonates. We aim to develop a computer simulation using 3-D ultrasound to teach clinicians how to perform neonatal head ultrasound. To achieve this, we need to align a series of 2-D ultrasound images to reconstruct the 3-D volume. The source data was acquired by freehand movement of the ultrasound probe, which results in misalignment and incorrect spacing/angle of the 3-D data. Previous techniques for solving this problem rely on speckles in the ultrasound images. We devised an algorithm that automatically corrects the misalignment using 2-D registration and estimates the elevational spacing/angle by optimizing a cost function against a perpendicular reference image. After running the algorithm on the dataset, we observed significant improvement in the alignment and 3-D structure. This algorithm will facilitate generating the 3-D volume data needed to develop a high-quality simulator.

Item Open Access
Assessment of a virtual reality temporal bone surgical simulator: a national face and content validity study (2020-04-07)
Compton, Evan C; Agrawal, Sumit K; Ladak, Hanif M; Chan, Sonny; Hoy, Monica; Nakoneshny, Steven C; Siegel, Lauren; Dort, Joseph C; Lui, Justin T
Abstract Background Trainees in Otolaryngology–Head and Neck Surgery must gain proficiency in a variety of challenging temporal bone surgical techniques. Traditional teaching has relied on the use of cadavers; however, this method is resource-intensive and does not allow for repeated practice. Virtual reality surgical training is a growing field that is increasingly being adopted in Otolaryngology.
CardinalSim is a virtual reality temporal bone surgical simulator that offers a high-quality, inexpensive adjunct to traditional teaching methods. The objective of this study was to establish the face and content validity of CardinalSim through a national study. Methods Otolaryngologists and resident trainees from across Canada were recruited to evaluate CardinalSim. Ethics approval and informed consent were obtained. A face and content validity questionnaire with questions categorized into 13 domains was distributed to participants following simulator use. Descriptive statistics were used to describe questionnaire results, and either Chi-square or Fisher's exact tests were used to compare responses between junior residents, senior residents, and practicing surgeons. Results Sixty-two participants from thirteen different Otolaryngology–Head and Neck Surgery programs were included in the study (32 practicing surgeons; 30 resident trainees). Face validity was achieved for 5 out of 7 domains, while content validity was achieved for 5 out of 6 domains. Significant differences between groups (p < 0.05) were found for one face validity domain (realistic ergonomics, p = 0.002) and two content validity domains (teaching drilling technique, p = 0.011, and overall teaching utility, p = 0.006). The assessment scores, global rating scores, and overall attitudes towards CardinalSim were universally positive. Open-ended questions identified limitations of the simulator. Conclusion CardinalSim met acceptable criteria for face and content validity.
This temporal bone virtual reality surgical simulation platform may enhance surgical training and be suitable for patient-specific surgical rehearsal by practicing Otolaryngologists.

Item Open Access
Automated Performance Assessment of Virtual Temporal Bone Dissection (2020-07-21)
Sachan, Surbhi; Chan, Sonny; Alim, Usman R.; Boyd, Jeffrey Edwin; Forkert, Nils Daniel
Mastoidectomy is a surgical procedure in which a portion of the temporal bone is removed using fine microsurgical skills. The development of virtual reality simulators with high-fidelity visual, auditory, and force feedback has allowed trainees to learn this skill in a safe environment, without the limitations associated with the traditional way of learning, i.e., cadaveric specimens. However, without an automatic feedback mechanism, an expert's presence is required to assess performance, placing a heavy burden on their time. This investigation focuses on automating the performance evaluation, obviating the need for an expert's time. This is accomplished by automating the criteria of the Welling Scale, a well-established and validated assessment instrument, to score mastoidectomies performed on a virtual surgery simulator. Image processing algorithms are devised and run on the output of the virtual surgery to automatically score these criteria. The criteria fall into four functional categories: Identification, Skeletonization, Intactness, and No cells. Algorithms are devised for each of these categories. This work further validates the accuracy of these algorithms through a study in which the criteria were evaluated both by two experts and by the algorithms developed in this thesis.
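The abstract above does not reproduce the scoring algorithms themselves. As a purely illustrative aside, an "Intactness"-style criterion (did drilling breach a protected structure?) might reduce to a mask-overlap test like the sketch below; the function name, mask format, and tolerance are all hypothetical, not taken from the thesis:

```python
def intactness_score(structure_mask, drilled_mask, tolerance=0):
    """Hypothetical 'Intactness'-style check: the protected structure
    (e.g. a nerve segmentation) passes if at most `tolerance` of its
    pixels were removed by drilling. Masks are same-shape 2D 0/1 lists."""
    violated = sum(
        s and d
        for s_row, d_row in zip(structure_mask, drilled_mask)
        for s, d in zip(s_row, d_row)
    )
    return violated <= tolerance

# Toy 3x3 example: exactly one drilled pixel overlaps the structure.
structure = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
drilled   = [[1, 1, 0], [1, 0, 0], [0, 0, 0]]
intactness_score(structure, drilled)               # False: structure breached
intactness_score(structure, drilled, tolerance=1)  # True: within tolerance
```

A real implementation would of course operate on segmented images rendered from the simulator output, not toy lists.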
The results of the study show that automatic performance assessment of virtual mastoidectomy surgery is feasible.

Item Open Access
Automatic Classification of Idiopathic Parkinsonian Disease and Progressive Supranuclear Palsy using Multi-Spectral MRI Datasets: A Machine Learning Approach (2018-09-19)
Talai, Aron Sahand; Forkert, Nils Daniel; Monchi, Oury; Chan, Sonny
Parkinson's disease, which is characterized by a range of motor and non-motor symptoms, is categorized into classical Parkinsonian disease (PD) and atypical Parkinsonian syndromes (APS), such as progressive supranuclear palsy Richardson's syndrome (PSP-RS). The differential diagnosis between PD and PSP-RS is often challenged by the similarity of early symptoms, effectively resulting in considerable misclassification rates. The aim of this thesis is to assess the benefits of using biomarkers from multi-modal MRI datasets for the accurate classification of PD vs. PSP-RS. Multi-spectral information from T1-, T2-, and diffusion-weighted (DWI) MRI of 38 healthy controls (HC), 45 PD, and 20 PSP-RS subjects was available for this study. In detail, morphological (category 1), brain iron marker (category 2), and diffusion (category 3) features were employed. In the last, combinational category, all feature types were combined for the development of a machine learning model. Nested leave-one-out cross-validation was used to evaluate the classification performance in each category, followed by a 1000-permutation test to assess classification significance. The results suggest that the DWI-based classifier tied with the combinational approach in terms of overall accuracy; however, in the former, the specificity was lower by 10%. In detail, in the combinational approach 4 PSP-RS subjects were incorrectly classified as PD and 1 PD subject as PSP-RS, resulting in a sensitivity and specificity of 91.67% and 94.12%, respectively.
The obtained results indicate that features extracted from T1- and T2-weighted MRI perform worst in terms of overall accuracy. All classification categories were statistically significant (p < 0.001). In conclusion, combining features from different MRI modalities such as T1-, T2-, and diffusion-weighted datasets improves the multi-level classification performance of HC vs. PD vs. PSP-RS compared to single-modality features, particularly in differentiating PD from the other groups. The results and concepts discussed in this thesis have wide-ranging implications for future development of computer-aided diagnosis of PD sub-syndromes.

Item Open Access
Correction to: Assessment of a virtual reality temporal bone surgical simulator: a national face and content validity study (2020-04-22)
Compton, Evan C; Agrawal, Sumit K; Ladak, Hanif M; Chan, Sonny; Hoy, Monica; Nakoneshny, Steven C; Siegel, Lauren; Dort, Joseph C; Lui, Justin T
Following publication of the original article [1], the authors identified incorrect ordering and incorrect files being used for Figs. 1, 2 and 3.

Item Open Access
Designing NeuroSimVR: A Stereoscopic Virtual Reality Spine Surgery Simulator (2017-11-01)
Mostafa, Ahmed E.; Ryu, Won Hyung A.; Chan, Sonny; Takashima, Kazuki; Kopp, Gail; Costa Sousa, Mario; Sharlin, Ehud
This paper contributes NeuroSimVR, a stereoscopic virtual reality spine surgery simulator that allows novice surgeons to learn and practice the spinal pedicle screw insertion (PSI) procedure using simplified interaction capabilities and 3D haptic user interfaces. By collaborating with medical experts and following an iterative approach, we characterize the PSI task and derive requirements for realizing this procedure in a 3D immersive interactive simulation system. We describe how these requirements were realized in our NeuroSimVR prototype, and outline the educational benefits of our 3D interactive system for training the PSI procedure.
We conclude the paper with the results of a preliminary evaluation of NeuroSimVR and reflect on our interface's benefits and limitations.

Item Open Access
Examining the utility of a photorealistic virtual ear in otologic education (2023-02-22)
Shin, Dongho; Batista, Arthur V.; Bell, Christopher M.; Koonar, Ella R. M.; Chen, Joseph M.; Chan, Sonny; Dort, Joseph C.; Lui, Justin T.
Abstract Background Otolaryngology–head and neck surgical (OHNS) trainees' operating exposure is supplemented by a combination of didactic teaching, textbook reading, and cadaveric dissections. Conventional teaching, however, may not adequately equip trainees with an understanding of the complex visuospatial relationships of the middle ear. A novel three-dimensional (3D) photorealistic virtual ear simulation tool underwent face and content validation as an educational tool for OHNS trainees. Methods A 3D mesh reconstruction was generated from open-access imaging using geometric modeling, then given global illumination, subsurface scattering, and texturing to create photorealistic virtual reality (VR) ear models. These models were compiled into an educational VR platform, which participants explored before completing face and content validity questionnaires in a prospective manner. OHNS post-graduate trainees were recruited from the University of Toronto and University of Calgary OHNS programs. Participation was on a voluntary basis. Results A total of 23 OHNS post-graduate trainees from the two universities were included in this study. The mean comfort level with otologic anatomy was rated 4.8 (± 2.2) out of 10. Senior residents possessed more otologic surgical experience (P < 0.001) and higher average comfort compared to junior residents [6.7 (± 0.7) vs. 3.6 (± 1.9); P = 0.001].
Face and content validity were achieved in all respective domains, with no significant difference between the two groups. Overall, respondents believed OtoVIS was a useful tool for learning otologic anatomy, with a median score of 10.0 (8.3–10.0), and strongly agreed that OtoVIS should be added to OHNS training, with a score of 10.0 (9.3–10.0). Conclusions OtoVIS achieved both face and content validity as a photorealistic VR otologic simulator for teaching otologic anatomy in the postgraduate setting. As an immersive learning tool, it may supplement trainees' understanding, and residents endorsed its use.

Item Open Access
Leveraging Neuroscience to Improve Haptic Rendering (2019-07-26)
Kollannur, Sandeep Zechariah George; Chan, Sonny; Oehlberg, Lora; Peters, Ryan
In the evolving world of virtual reality (VR), haptics plays an important role: it enables individuals to feel and experience the virtual world and to immerse fully in the virtual environment. Haptics has a long history in which researchers have created various types of devices. Virtual reality brings a need for highly portable and wearable devices, which are limited in weight and grounding. These restrictive weight and size limitations of mobile and wearable devices limit haptic rendering capabilities. This thesis attempts to overcome these limitations by developing a better understanding of the biology of touch, via sensorimotor neuroscience, to improve touch perception of virtual textures. In this research, we explore the receptor cells that encode touch information from mechanical stimuli and relay it to the brain. Sensorimotor neuroscience examines the functional roles of the different types of receptor cells in the human body. We use this biological perspective to create a classification of wearable haptic devices for the fingertip and the hand.
The second part of the research explores the neuroscience concept of stochastic resonance, which has been shown to improve light-touch sensation. We create a system comprising hardware and software to evaluate the impact of stochastic resonance in a virtual texture discrimination task. I conclude this thesis by exploring future directions of the research presented here and summarizing its two main contributions: a neuroscience-based classification of wearable haptic devices for the fingertip and the hand, and a platform to evaluate the effect of mechanical SR in discriminating virtual textures.

Item Open Access
Mediating Experiential Learning in Interactive Immersive Environments (2018-01-22)
Mostafa, Ahmed; Sharlin, Ehud; Costa Sousa, Mário; Chan, Sonny; Takashima, Kazuki; Boulanger, Pierre; El-Sheimy, Naser
Simulation and immersive environments are gaining popularity in various contexts. Arguably, such interactive systems have the potential to benefit many users in a variety of education and training scenarios. However, some of these systems, especially in the absence of skilled instructors, still face challenges of operational complexity, the incorporation of different technologies and features, and the limited availability of performance measures and feedback. The design of these systems would therefore benefit from integrating experiential aspects and essential educational aids. For example, users of such learning systems, especially novices, can be better supported by a smoother learning curve, detailed guidance features, the availability of feedback and performance reporting, and the integration of engaging and reflective capabilities. In essence, we recognize a need to re-explore learning aids and how they impact design, usage, and the overall learning experience in interactive immersive environments.
The goal of this dissertation is to mediate experiential learning in interactive immersive environments. This includes exploring existing and novel learning aids that facilitate learning with improved engagement and immersion, enrich learners with insightful reflections, better support novice users' learning and training needs, and ultimately enhance the overall experience. To achieve this goal, we utilized existing learning models and simulation-based training approaches and proposed a framework of learning aids to mediate learning in interactive immersive environments. Working closely with domain-expert collaborators, we designed, implemented, and evaluated four new interactive immersive prototypes to validate the practicality of our aids. The first prototype, NeuroSimVR, is a stereoscopic visualization augmented with educational aids to help medical users learn about a common back surgery procedure. The second prototype, ReflectiveSpineVR, is an immersive virtual reality surgical simulation with innovative interaction-history capabilities that aim to reinforce users' memories and enable deliberate repetitive practice as needed. The third prototype, JackVR, is an interactive immersive training system utilizing novel gamification elements that aims to support oil-and-gas experts in the process of landing oil rigs. Our fourth prototype, RoboTeacher, involves a humanoid robot instructor for teaching people industrial assembly tasks. In these prototypes, we presented novel learning aids, visualizations, and interaction techniques that are new to many current immersive learning tools.
We conclude this dissertation with lessons learned and guidelines for designing with learning aids in future research that targets interactive experiential environments.

Item Open Access
Methodology of Robot-Assisted Tool Manipulation for Virtual Reality Based Dissection (2019-03-29)
Trejo Torres, Fernando Javier; Hu, Yaoping; Sesay, Abu B.; Westwick, David T.; Chan, Sonny; Liu, Peter J.
Robot-assisted (RA) surgery employs a master-slave system, in which a surgeon's hand manoeuvres the stylus of a hand controller (master), mapped at the operation site, to indirectly manipulate a surgical tool attached to the end-effector of a robot (slave). This indirection gives RA surgery two drawbacks. Firstly, the transfer of tool-tissue interaction forces to the surgeon is either absent or inaccurate. Secondly, RA surgery incorporates motion coupling (MC) and motion coupling plus orientation match (MC+OM) as indirect modes of tool manipulation, which disregard a pose (position and orientation) match (PM) between the mapped stylus and the tool. This may cause inadvertent tissue trauma during tasks like dissection, which accounts for ~35.0% of surgery time. Given the potential of virtual reality (VR) based surgical training, this thesis presents a methodology to address these drawbacks on a VR simulator of soft-tissue dissection. The methodology comprises the formulations and evaluations of an analytic model that estimates dissection forces, and of a PM algorithm. The simulator interfaced with the haptic device PHANToM Premium 1.5/6DOF (as a hand controller) to deliver the model forces, and incorporated the kinematics of the device and of neuroArm (a neurosurgery robot) for the PM algorithm. The evaluation of the model, using dissection forces collected at tool speeds of 0.10, 1.27, and 2.54 cm/s, indicated a force estimation accuracy > 80.0%, a computation time < 1.0 ms (the device's update period), and a bandwidth < 30.0 Hz (the device's bandwidth).
Moreover, the model lessened cognitive workload for dissections executed at 0.10 cm/s. The evaluation of the PM algorithm revealed a position match < 30.0 µm (the position resolution of the device and neuroArm), an orientation match < 10.0° (to minimize the surgeon's disorientation), and a computation time < 500.0 µs (half of the device's update period). Additionally, the algorithm proved useful for maintaining an accurate tool speed and reducing tissue trauma in dissections performed at 0.10 cm/s. These outcomes imply the suitability of the methodology for VR-based RA dissection and its potential to suggest guidelines for VR-based RA dissection training.

Item Open Access
Modeling Dense Inflorescences (2017)
Owens, Andrew Robert; Prusinkiewicz, Przemyslaw; Alim, Usman; Chan, Sonny
Showy inflorescences - clusters of flowers - are a common feature of many plants, greatly contributing to their beauty. The large numbers of individual flowers (florets), systematically arranged in space, make inflorescences a natural target for procedural modeling. This thesis presents a suite of biologically motivated algorithms for modeling and animating the development of inflorescences, which share the following characteristics: (i) the ensemble of florets creates a relatively smooth, tightly packed, often approximately planar surface; (ii) there are numerous collisions between the petals of florets; and (iii) the developmental stage and type of each floret depends on its position within the inflorescence. A single framework drives the floral canopy's development and resolves the collisions. Flat-topped branched inflorescences (corymbs and umbels) are modeled using a florets-first algorithm, wherein the branching structure self-organizes to support florets in predetermined positions.
This suite of techniques is illustrated with models from several plant families.

Item Open Access
OtoVIS: A Photorealistic Virtual Reality Environment for Visualizing the Anatomical Structures of the Ear and Temporal Bone (2020-09)
Volpato Batista, Arthur; Chan, Sonny; Leblanc, Jean Rene; Eiserman, Jennifer; Furr, Robin S.
In the context of medical education, traditional teaching methods present challenges and limitations in the representation of anatomical structures. More specifically, in otology - a subspecialty of otolaryngology - residents often need to rely on textbooks, which are two-dimensional (2D) and lack definition. Additionally, their training is largely based on endoscopic video footage, which often suffers from low-quality images and lacks annotations, commentary, and any form of interaction. These challenges can be mitigated by new technologies such as three-dimensional (3D) visualizations, virtual environments, and virtual reality (VR) technology. However, some current software applications for medical education and surgical training still have room for improvement: they suffer from technological barriers that compromise the realism of the experience. Simulating skin, blood, tissue, bone, cartilage, and so forth in a virtual environment is challenging and demands considerable time and effort. My thesis investigates what role 3D photorealistic real-time rendering in a VR environment can play in otolaryngology medical education. Through careful examination of how actual organic anatomy looks and how surgical procedures unfold, I aim to reduce the gap between traditional anatomy teaching and the anatomy seen in real patients, creating more meaningful experiences for medical students.
Considering the crucial role that accuracy and precision play within the field of medicine, I investigate my main research question: "How can photorealistic three-dimensional graphics in virtual reality enhance the anatomy learning experience of the ear structures and temporal bone?"

Item Open Access
Simulating Mass in Virtual Reality using Vibration Feedback (2021-07-29)
Khosravi, Hooman; Samavati, Faramarz; Chan, Sonny; Sharlin, Ehud; Jacob, Christian
Virtual reality allows for highly immersive simulated experiences and interaction with virtual objects. However, virtual objects have no real mass, and providing a sense of mass through ungrounded haptic interfaces has proven to be a complicated task in virtual reality. This thesis proposes using a physically-based virtual hand and a complementary vibrotactile effect on the index fingertip to give virtual objects the sensation of mass. The vibrotactile feedback is proportional to the balanced forces acting on the virtual object and is modulated based on the object's velocity. To evaluate this method, we set up an experiment in a virtual environment where participants wear a VR headset and attempt to pick up and move different virtual objects with the physically-based virtual hand while a voice-coil actuator attached to their index fingertip provides the vibrotactile feedback. Our experiments indicate that the virtual hand and our vibration effect enable users to discriminate and perceive the mass of virtual objects.

Item Open Access
Spatial Partitioning for Distributed Path-Tracing Workloads (2018-09-21)
Hornbeck, Haysn; Alim, Usman Raza; Gavrilova, Marina L.; Chan, Sonny
The literature on path tracing has rarely explored distributing workload across distinct spatial partitions. This thesis addresses that gap by describing seven algorithms that use Voronoi cells to partition scene data.
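For readers unfamiliar with the term, a Voronoi partition assigns each scene element to the cell of its nearest centroid. The following is a minimal, illustrative sketch of that assignment step only (it is not one of the seven algorithms from the thesis, and all names are hypothetical):

```python
import math

def assign_to_cells(points, centroids):
    """Partition 3D points by nearest centroid; the resulting groups
    are exactly the Voronoi cells induced by the centroids."""
    def nearest(p):
        return min(range(len(centroids)),
                   key=lambda i: math.dist(p, centroids[i]))
    cells = {i: [] for i in range(len(centroids))}
    for p in points:
        cells[nearest(p)].append(p)
    return cells

centroids = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
points = [(1.0, 1.0, 0.0), (9.0, -1.0, 0.0), (4.0, 0.0, 0.0)]
cells = assign_to_cells(points, centroids)
# (1,1,0) and (4,0,0) fall in cell 0; (9,-1,0) falls in cell 1
```

In a distributed renderer, each cell's points (and the geometry they reference) would then be shipped to a different worker.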
They were tested by simulating their performance with real-world data and fitting the results to a model of how such partitions should behave. Analysis shows that image-centric partitioning outperforms the other algorithms, with a few exceptions, and that restricting Voronoi centroid movement leads to more efficient algorithms. The restricted algorithms also demonstrate excellent scaling properties. Potential refinements, such as voxelization and locality, are discussed, but the tested algorithms are already worth further exploration. The details of an implementation are outlined as well.

Item Open Access
Three-dimensional medical image registration on modern graphics processors (2007)
Chan, Sonny; Mitchell, J. Ross; Parker, James R.

Item Open Access
Visualization of Multivariate Data on Surfaces (2019-03-19)
Rocha, Allan; Costa Sousa, Mario; Alim, Usman Raza; Chan, Sonny; Jacob, Christian J.; Geiger, Sebastian; Tominski, Christian
In several domains of science and their applications, understanding scientific data leads to technological advances and scientific discovery. Multivariate 3D data, for example, is essential for decision-making in fields such as Medicine and Geology, where experts must understand and correlate several spatial attributes. To simplify complexity and facilitate understanding, such 3D data is often explored through surfaces of interest, which is why the visualization of multivariate data on surfaces has been a topic of interest in the visualization community. However, much work is still needed to provide solutions that facilitate multivariate visualization design, creation, and exploration. This research builds upon ideas introduced and discussed many years ago concerning the problem of visualizing multiple attributes on surfaces in a single view. Here I present a new perspective on this problem, as well as a solution that allows us to design, visualize, and interact with multivariate data on surfaces.
This perspective combines several aspects originating in fields such as Illustration, Perception, and Design that have been employed and studied by the visualization community in both Information and Scientific Visualization; this thesis therefore draws on both fields. Building upon this multidisciplinary combination, I present a new way to visualize multivariate data on surfaces by exploiting the concept of layering. First, I introduce a new real-time rendering technique and the concept of Decal-Maps, which fills a gap in the literature and allows us to create 2D visual representations, such as glyphs, that follow the surface geometry. Building on this technique, I propose a layering framework to facilitate multivariate visualization design on surfaces. This concept and framework allow us to connect and generalize concepts established in flat space, such as 2D maps, to arbitrary surfaces. This thesis also demonstrates that designing new multivariate visualizations on surfaces opens up further possibilities, such as new interaction techniques. I demonstrate this potential by introducing an interaction technique that allows us to explore multivariate data and create customized focus+context visualizations on surfaces. This is achieved by introducing a new category of lenses, Decal-Lenses, which extends the concept of magic lenses from flat space to general surfaces. Finally, this thesis showcases the process of multivariate visual design and data exploration through a series of examples from several domains. Inspired by these examples, I also contribute an in-depth application study conducted through my long-term collaboration with domain experts in Geology and Reservoir Engineering. This application illustrates how the proposed approach can support and facilitate decision-making in the complex process of Geological Modelling.
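As a rough illustration of the focus+context idea behind magic-lens-style interaction (not the thesis's Decal-Lens implementation, which operates on surface geometry), a lens can be thought of as a distance-based blend between a focus attribute and a context attribute. All names and parameters below are hypothetical:

```python
def lens_blend(value_focus, value_context, dist, radius, falloff):
    """Hypothetical focus+context blend: inside `radius` show the
    focus attribute, beyond `radius + falloff` show the context
    attribute, with a linear ramp in between."""
    if dist <= radius:
        w = 1.0
    elif dist >= radius + falloff:
        w = 0.0
    else:
        w = 1.0 - (dist - radius) / falloff
    return w * value_focus + (1.0 - w) * value_context

lens_blend(1.0, 0.0, dist=0.5, radius=1.0, falloff=0.5)   # 1.0 (inside lens)
lens_blend(1.0, 0.0, dist=2.0, radius=1.0, falloff=0.5)   # 0.0 (context)
lens_blend(1.0, 0.0, dist=1.25, radius=1.0, falloff=0.5)  # 0.5 (on the ramp)
```

On a curved surface, `dist` would be a geodesic rather than Euclidean distance, which is part of what makes the surface setting harder than the flat-space case.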