Browsing by Author "Somanath, Sowmya"
Now showing 1 - 10 of 10
Item (Open Access): Communicating Awareness and Intent in Autonomous Vehicle-Pedestrian Interaction (2017-10-26)
Authors: Mahadevan, Karthik; Somanath, Sowmya; Sharlin, Ehud
Drivers use nonverbal cues such as vehicle speed, eye gaze, and hand gestures to communicate awareness and intent to pedestrians. In autonomous vehicles, however, drivers can be distracted or absent, leaving pedestrians to infer awareness and intent from the vehicle alone. In this paper, we investigate the usefulness of interfaces (beyond vehicle movement) that explicitly communicate the awareness and intent of autonomous vehicles to pedestrians, focusing on crosswalk scenarios. We conducted a preliminary study to gain insight into designing interfaces that communicate autonomous vehicle awareness and intent to pedestrians. Based on the study outcomes, we developed four prototype interfaces and deployed them in studies involving a Segway and a car. We found that interfaces communicating vehicle awareness and intent (1) can help pedestrians attempting to cross, (2) are not limited to the vehicle and can also exist in the environment, and (3) should use a combination of modalities such as visual, auditory, and physical.

Item (Open Access): Design of Anthropomorphic Interfaces for Autonomous Vehicle-Pedestrian Interaction (2023-01)
Authors: Wei, Wei; Sharlin, Ehud; Chen, Zhangxing; Sharlin, Ehud; Oehlberg, Lora; Somanath, Sowmya
Autonomous Vehicle (AV) technology promises to revolutionize human life. The promise of AVs includes reduced highway congestion, more efficient energy usage, and cheaper goods and services. However, without careful design, removing human drivers from vehicles will eliminate the natural communication channels that enable pedestrians to navigate safely. This thesis aims to design, present, and study anthropomorphic interfaces for autonomous vehicles, with the objective of enabling AVs to communicate with pedestrians through non-verbal cues. Non-verbal communication is vital in human relationships, and people rely on it when speech is impractical, such as when interacting with vehicles. When looking into ways in which AVs can use non-verbal communication to interact with pedestrians, we were inspired by the prospect of using anthropomorphic interfaces; this concept is well explored in Human-Robot Interaction (HRI) but has not been investigated in the context of AVs. In this thesis, we explored the design of anthropomorphic interfaces for autonomous vehicles. First, we proposed three types of anthropomorphic interfaces for AVs: facial expressions, hand gestures, and humanoid torsos, and developed a design space for each category using sketches and a low-fi prototype. Then, to research the benefits and limitations of anthropomorphic AVs, we implemented our AV interfaces in a Virtual Reality (VR) environment and developed two testbeds to evaluate their feasibility and scalability. Finally, we conducted two studies using the two testbeds. We investigated the study results using immersive analytics alongside traditional methods and found that anthropomorphic AVs could be helpful in AV-pedestrian interaction when designed according to specific guidelines. Since we studied anthropomorphic AVs in VR, we were also interested in the possibilities of analyzing our study data in an immersive environment. We designed a VR prototype specifically to analyze the data collected from the anthropomorphic AV study; it provided basic immersive analytics features for the AV study data. We conducted an expert session with two domain experts to evaluate our immersive analytics prototype. The study contributed insights into the opportunities and challenges of using immersive analytics to analyze AV studies.

Item (Open Access): Expanding the User Interactions and Design Process of Haptic Experiences in Virtual Reality (2023-08)
Authors: Smith, Christopher Geoffrey; Sharlin, Ehud; Somanath, Sowmya; Suzuki, Ryo; Sharlin, Ehud; Somanath, Sowmya; Suzuki, Ryo; Zhao, Richard
Virtual reality can be a highly immersive experience due to its realistic visual presentation. This immersive state is useful for applications including education, training, and entertainment. To further enhance the immersion provided by virtual reality, devices capable of simulating touch and force have been researched, allowing not only a visual and audio experience but a haptic one as well. Such research has investigated many approaches to generating haptics for virtual reality, but often does not explore how to create an immersive haptic experience using them. In this thesis, we present a discussion of four proposed areas of the virtual reality haptic experience design process using a demonstration methodology. To investigate the application of haptic devices, we designed a modular ungrounded haptic system, used it to create a general-purpose device capable of force-based feedback, and applied that device in the three areas of exploration. The first area explored is the application of existing haptic theory for aircraft control to virtual reality drone control. The second is the presence of the size-weight sensory illusion within virtual reality when using a simulated haptic force. The third is how authoring within a virtual reality medium can be used by a designer to create VR haptic experiences. From these explorations, we begin a higher-level discussion of the broader process of creating a virtual reality haptic experience. Using the results of each project as a representation of our proposed design steps, we discuss the broader concepts each step contributes to the process and their importance, and draw connections between them. In doing so, we present a more holistic approach to the large-scale design of virtual reality haptic experiences and the benefits we believe it provides.

Item (Open Access): Exploring Tabletops as an Interaction Medium in the Context of Reservoir Engineering (2012-10-03)
Authors: Somanath, Sowmya; Costa Sousa, Mário; Sharlin, Ehud
Digital tabletops are powerful interaction mediums. As a virtual medium, their computational capabilities allow the user to digitally explore, transform, and embellish the input content to gain further insights about the pertinent information. As a physical medium, their form factor inherently supports collaboration and presents opportunities to place other physical objects atop them to assist and enhance the exploration experience. In this thesis, we propose the use of tabletops as an interaction medium to explore reservoir post-processing flow simulation models using virtual content (visualizations) and a physical agent (Spidey, a tabletop robotic assistant). We discuss results that emerged from evaluating each of the prototypes, presenting the potential of each concept and its applicability to the domain of reservoir engineering.
With the Spidey testbed, we explored the notion of proxemics between a user and a robotic tabletop assistant and performed a user study in which participants interacted with Spidey. In the results, we therefore also discuss the proxemics findings, reflecting on the interaction between people and tabletop robots.

Item (Open Access): Exploring the Design of Autonomous Vehicle-Pedestrian Interaction (2019-09-12)
Authors: Mahadevan, Karthik; Sharlin, Ehud; Somanath, Sowmya
Autonomous vehicle research today emphasizes developing better sensors and algorithms to enable the vehicle to localize itself in the environment, plan routes, and control its movement. Surveying the general public reveals optimism about the technology but also some skepticism about its ability to communicate with vulnerable road users such as pedestrians and cyclists. In today's interactions with vehicles at crosswalks, pedestrians rely on cues originating from the vehicle and the driver. Vehicle cues relate to kinematics such as speed and stopping distance, while driver cues involve communication such as eye gaze and contact, head and body movement, and hand gestures. In autonomous vehicles, however, a driver is not expected to be on board to provide cues to pedestrians. We tackle the problem of designing novel ways to facilitate autonomous vehicle-pedestrian interaction at crosswalks, proposing interfaces that communicate an autonomous vehicle's awareness and intent as a means of helping pedestrians make safe crossing decisions. Through our exploration, we make several contributions. First, we propose a design space for building interfaces using different cue modalities and cue locations. From an early exploration of this design space, we prototype interfaces designed to facilitate autonomous vehicle-pedestrian interaction. The interaction between vehicles and pedestrians will become more challenging during the transition period until all vehicles on the road are fully autonomous. During this period, which we term mixed traffic, vehicles of varying levels of autonomy will occupy the roads: some will have drivers, semi-autonomous vehicles may have distracted drivers, and fully autonomous vehicles may or may not have drivers. To study this problem, we contribute a virtual reality-based pedestrian simulator. Our final contribution relates to the evaluation of the interfaces in the real and virtual world, where we found that their inclusion helped pedestrians make safe crossing decisions.

Item (Open Access): Exploring the Experience of Becoming and Unbecoming a Cyborg Using Performing Arts Techniques (2019-10-22)
Authors: Hammad, Noor; Somanath, Sowmya; Sharlin, Ehud; Finn, Patrick
This project proposes using performing arts techniques to aid people in becoming and unbecoming a cyborg. Cyborgs are human-machine hybrids with organic and mechatronic body parts, which can be implanted or worn. The transition into and out of experiencing such additional body parts is not fully understood. This project draws from techniques used by actors in their performances to facilitate the experience of becoming and unbecoming a cyborg. A study was conducted in which actors entered a cyborg state, performed as a cyborg, and then exited that state. The observations suggest that these techniques can be useful in technology-augmented experiences. Furthermore, to translate the lessons learned from the actor study to a cyborg user context, a design session was formulated and conducted, and the resulting data was analyzed. The results of this design session informed the specification of a prospective prototype that supports the performing arts techniques. Finally, a discussion of the project's limitations as well as future work is presented.

Item (Open Access): Making despite Material Constraints with Augmented Reality-Mediated Prototyping (2017-09-28)
Authors: Somanath, Sowmya; Oehlberg, Lora; Sharlin, Ehud
Makers build physical interactive objects using programmable electronics. However, makers often must stop building when material resources (e.g., electronic components) are not immediately or easily available. In this paper, we envision Augmented Reality (AR)-mediated prototyping as a way for makers to continue building physical computing projects despite a lack of electronic components. AR-mediated prototyping enables makers to build, program, interact with, and iterate on physical computing projects that combine both real and stand-in virtual electronic components. We designed and implemented a technology probe, Polymorphic Cube (PMC), as an instantiation of our vision. We introduced PMC to twelve makers and asked them to build four simple lamp prototypes despite missing I/O components (push button, LED, light sensor, servo). Our results show that PMC helped participants prototype despite missing components and highlighted how AR-mediated prototyping extends to exploring project ideas, tinkering with implementation, and making with others.

Item (Open Access): 'Making' within Material, Cultural, and Emotional Constraints (2017)
Authors: Somanath, Sowmya; Sharlin, Ehud; Costa Sousa, Mário; Oehlberg, Lora; Hughes, Janette; Parlac, Vera; Meruvia Pastor, Oscar
The Maker Movement aims to democratize technological practices and promises many benefits for people, including improved technical literacy, a means for self-expression and agency, and an opportunity to become more than consumers of technology. As part of the Maker Movement, people build hobbyist and utilitarian projects themselves using programmable electronics (e.g., microcontrollers, sensors, actuators) and software tools. While the Maker Movement is gaining momentum globally, some people are left out: constraints such as material limitations, educational culture restrictions, and emotional or behavioral difficulties can often limit people from taking part. We refer to the systematic investigation of how diverse people respond to making-centered activities within constraints as an exploration of making within constraints. In this dissertation, we (1) study how people respond to creating physical objects by themselves within constraints and (2) investigate how to design technology that can help makers within constraints. We conducted an observational study in an impoverished school in India and identified the students' challenges and their strategies for making within material and educational culture constraints. We conducted a second study with at-promise youth in Canada and identified a set of lessons learned for engaging youth within emotional and behavioral constraints in making-centered activities. Leveraging our observations, we proposed Augmented Reality (AR)-mediated prototyping as a way to address material constraints. AR-mediated prototyping can help makers build, program, interact with, and iterate on physical computing projects that combine both real-world and stand-in virtual electronic components. We designed, implemented, and evaluated a technology probe, Polymorphic Cube (PMC), as an instance of our vision.
Our results show that PMC helped participants prototype despite missing I/O electronic components and highlighted how AR-mediated prototyping extends to exploring project ideas, tinkering with implementation, and making with others. Informed by our empirical and design explorations, we suggest a set of characteristics of constraints and implications for designing future technologies for makers within constraints. In the long term, we hope that this research will inspire interaction designers to develop new tools that can help resolve constraints for making.

Item (Open Access): Redesigning instructional tools for novice makers (2020-08-04)
Authors: Stark, Jessica Theresa; Sharlin, Ehud; Tang, Anthony; Somanath, Sowmya; Anderson, Fraser
People new to Making often struggle when instructional materials, such as written documentation or video tutorials, do not address certain difficulties they encounter. This thesis explores two examples of instructional materials for novice makers that have been redesigned to help them overcome these difficulties. The first example focuses on written documentation, often created by Makers who document their own projects and provide instructions based on the tools and materials the author has, which may not be available to the reader. To address this, I present MakeAware, a system designed to support situation awareness and help users make decisions about which tools and materials they can use to accomplish a task. For example, if the documentation instructs the maker to attach two pieces of the project together using screws and a screwdriver but the maker does not have these, MakeAware can help them improvise by guiding them to use a hammer and nails instead. The second example focuses on video tutorials, which are often filmed from a single camera and cannot always capture the information necessary to understand the task. To address this, I designed a multi-camera playback interface that allows viewers to change angles during the tutorial, circumventing the visibility problems that can occur when only one angle is provided. For example, in a tutorial about assembling a laser-cut lantern, the instructor's hands often cover the fasteners between the pieces, so the viewer cannot see how the attachment is made. By selecting an angle that shows the fasteners in the foreground instead of the hands when needed, the viewer may be able to better understand the process. I evaluated these interfaces in separate user studies with novice makers as participants. The first study showed that the documentation style of MakeAware helped novice makers customize the instructions to their needs. The second study showed that a video interface providing flexible perspectives could help avoid visibility problems, while highlighting that this approach may be suited to non-repetitive or asymmetrical tasks. The results of both studies provide design implications that can help us accommodate the needs of novice makers through written documentation and video tutorials.

Item (Open Access): Spidey: a Robotic Tabletop Assistant (2012-02-23)
Authors: Somanath, Sowmya; Sharlin, Ehud; Costa Sousa, Mario
This paper presents our efforts to explore the possibilities of combining tabletop robots and assistant robots. We present the design and prototyping of Spidey, a robotic assistant in a tabletop environment that works as a team member with its human companions, aware of their tabletop actions and reacting to or initiating tabletop actions according to the task requirements. Spidey is designed as a proof of concept, suggesting the benefits and reflecting on the limitations of a robotic assistant in an interactive reservoir engineering tabletop visualization application we are implementing. This paper motivates our concept of a robotic tabletop assistant and outlines our design efforts and the current Spidey prototype.