Browsing by Author "Ledo, David"
Now showing 1 - 14 of 14
[Metadata only] Application Programming Interface (API) for the Haptic Tabletop Puck (5th Annual Students’ Union Undergraduate Research Symposium, 2010)
Ledo, David; Marquardt, Nicolai; Nacenta, Miguel A.; Greenberg, Saul

[Open Access] Astral: Prototyping Mobile and IoT Interactive Behaviours via Streaming and Input Remapping (2018-07)
Ledo, David; Vermeulen, Jo; Carpendale, Sheelagh; Greenberg, Saul; Oehlberg, Lora A.; Boring, Sebastian
We present Astral, a prototyping tool for mobile and Internet of Things interactive behaviours that streams selected desktop display contents onto mobile devices (smartphones and smartwatches) and remaps mobile sensor data into desktop input events (i.e., keyboard and mouse events). Interactive devices such as mobile phones, watches, and smart objects offer new opportunities for interaction design, yet prototyping their interactive behaviour remains an implementation challenge. Additionally, current tools often focus on systems that respond after an action takes place, as opposed to while the action takes place. With Astral, designers can rapidly author interactive prototypes live on mobile devices through familiar desktop applications. Designers can also customize input mappings using easing functions to author, fine-tune and assess rich outputs.
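The easing-based remapping idea can be sketched as follows. This is a minimal illustration, not Astral's actual API: the function names, the easing curve, and the tilt-to-cursor example are all assumptions introduced here.

```python
# Hypothetical sketch of easing-based input remapping in the spirit of
# Astral; names and ranges are illustrative, not Astral's interface.

def ease_in_out_quad(t: float) -> float:
    """Quadratic ease-in-out over normalized progress t in [0, 1]."""
    return 2 * t * t if t < 0.5 else 1 - ((-2 * t + 2) ** 2) / 2

def remap(value, in_min, in_max, out_min, out_max, easing=ease_in_out_quad):
    """Remap a sensor reading to an output range through an easing curve."""
    t = min(max((value - in_min) / (in_max - in_min), 0.0), 1.0)
    return out_min + easing(t) * (out_max - out_min)

# e.g., map a device tilt of -45..45 degrees to a cursor x of 0..1920
cursor_x = remap(0.0, -45.0, 45.0, 0, 1920)
```

Swapping the easing function changes how sensor motion "feels" at the output without touching the rest of the mapping, which is the kind of fine-tuning the abstract describes.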
We demonstrate the expressiveness of Astral through a set of prototyping scenarios, with novel and replicated examples from past literature, which reflect how the system might support and empower designers throughout the design process.

[Metadata only] Authorship in Art/Science Collaboration is Tricky (2013)
MacDonald, Lindsay; Ledo, David; Nacenta, Miguel; Brosz, John; Carpendale, Sheelagh

[Open Access] Designing User-, Hand-, and Handpart-Aware Tabletop Interactions with the TOUCHID Toolkit (2011-07-12)
Marquardt, Nicolai; Kiemer, Johannes; Ledo, David; Boring, Sebastian; Greenberg, Saul
Recent work in multi-touch tabletop interaction introduced many novel techniques that let people manipulate digital content through touch. Yet most only detect touch blobs. This ignores richer interactions that would be possible if we could identify (1) which hand, (2) which part of the hand, (3) which side of the hand, and (4) which person is actually touching the surface. Fiduciary-tagged gloves were previously introduced as a simple but reliable technique for providing this information. The problem is that their low-level programming model hinders the way developers could rapidly explore new kinds of user- and handpart-aware interactions. We contribute the TOUCHID toolkit to solve this problem. It allows rapid prototyping of expressive multi-touch interactions that exploit the aforementioned characteristics of touch input. TOUCHID provides an easy-to-use event-driven API. It also provides higher-level tools that facilitate development: a glove configurator to rapidly associate particular glove parts to handparts; and a posture configurator and gesture configurator for registering new hand postures and gestures for the toolkit to recognize.
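An event-driven, handpart-aware API of the kind described might look roughly like the sketch below. This is a hypothetical illustration only; TOUCHID's real classes, events, and platform differ, and every name here is an assumption.

```python
# Hypothetical sketch of a handpart-aware, event-driven touch API in the
# spirit of the TOUCHID toolkit; all names are illustrative, not the
# toolkit's actual interface.
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class TouchEvent:
    user: str      # which person is touching
    hand: str      # "left" or "right"
    handpart: str  # e.g., "index_finger", "palm", "knuckle"
    x: float
    y: float

@dataclass
class TouchDispatcher:
    handlers: dict = field(default_factory=lambda: defaultdict(list))

    def on(self, handpart: str, callback) -> None:
        """Register a callback fired only for touches by that handpart."""
        self.handlers[handpart].append(callback)

    def dispatch(self, event: TouchEvent) -> None:
        for callback in self.handlers[event.handpart]:
            callback(event)

# usage: draw with the index finger, erase with the palm
dispatcher = TouchDispatcher()
dispatcher.on("index_finger", lambda e: print(f"{e.user} draws at ({e.x}, {e.y})"))
dispatcher.on("palm", lambda e: print(f"{e.user} erases at ({e.x}, {e.y})"))
dispatcher.dispatch(TouchEvent("Alice", "right", "palm", 10.0, 20.0))
```

The point of such an API is that application code branches on who and which handpart is touching, rather than on anonymous touch blobs.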
We illustrate TOUCHID’s expressiveness by showing how we developed a suite of techniques (which we consider a secondary contribution) that exploit knowledge of which handpart is touching the surface.

[Open Access] Evaluation Strategies for HCI Toolkit Research (2017-09-27)
Ledo, David; Houben, Steven; Vermeulen, Jo; Marquardt, Nicolai; Oehlberg, Lora; Greenberg, Saul
Toolkit research plays an important role in the field of HCI, as it can heavily influence both the design and implementation of interactive systems. For publication, the HCI community typically expects such research to include an evaluation component. The problem is that toolkit evaluation is challenging, as it is often unclear what ‘evaluating’ a toolkit means and what methods are appropriate. To address this problem, we analyzed 68 published toolkit papers. From that analysis, we provide an overview of, reflection on, and discussion of evaluation methods for toolkit contributions. We identify and discuss the value of four toolkit evaluation strategies, including the associated techniques each employs. We offer a categorization of evaluation strategies for toolkit researchers, along with a discussion of the value, potential biases, and trade-offs associated with each strategy.

[Open Access] The Fat Thumb: Using the Thumb's Contact Size for Single-Handed Mobile Interaction (2011-12-02)
Boring, Sebastian; Ledo, David; Chen, Xiang (Anthony); Marquardt, Nicolai; Tang, Anthony; Greenberg, Saul
Modern mobile devices allow a rich set of multi-finger interactions that combine modes into a single fluid act, for example, one finger for panning blending into a two-finger pinch gesture for zooming. Such gestures require the use of both hands: one holding the device while the other is interacting. While on the go, however, only one hand may be available to both hold the device and interact with it. This mostly limits interaction to a single touch (i.e., the thumb), forcing users to switch between input modes explicitly. In this paper, we contribute the Fat Thumb interaction technique, which uses the thumb’s contact size as a form of simulated pressure. This adds a degree of freedom, which can be used, for example, to integrate panning and zooming into a single interaction. Contact size determines the mode (i.e., panning with a small size, zooming with a large one), while thumb movement performs the selected mode. We discuss nuances of the Fat Thumb based on the thumb’s limited operational range and motor skills when that hand holds the device. We compared Fat Thumb to three alternative techniques, where people had to pan and zoom to a predefined region on a map. Participants performed fastest with the fewest strokes using Fat Thumb.

[Open Access] The HAPTIC TOUCH Toolkit: Enabling Exploration of Haptic Interactions (2011-09-26)
Ledo, David; Nacenta, Miguel A.; Marquardt, Nicolai; Boring, Sebastian; Greenberg, Saul
In the real world, touch-based interaction relies on haptic feedback (e.g., grasping objects, feeling textures). Unfortunately, such feedback is absent in current tabletop systems. The previously developed Haptic Tabletop Puck (HTP) aims at supporting experimentation with and development of inexpensive tabletop haptic interfaces. The problem is that programming the HTP is difficult due to the interactions among its multiple hardware components. To address this problem, we contribute the HAPTICTOUCH toolkit, which allows developers to rapidly prototype haptic tabletop applications. Our toolkit is structured in three layers that enable programmers to: (1) directly control the device, (2) create customized combinable haptic behaviors (e.g., softness, oscillation), and (3) use visuals (e.g., shapes, images, buttons) to quickly make use of the aforementioned behaviors.
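One way combinable haptic behaviors of this kind might compose is sketched below. This is a speculative illustration of the middle layer's idea, assuming behaviors map pressure and time to a friction value; the function names, the max-combination rule, and the value ranges are all assumptions, not the toolkit's actual API.

```python
# Hypothetical sketch of combinable haptic behaviors in the spirit of the
# HAPTICTOUCH toolkit; names are illustrative, not the toolkit's API.
# Each behavior maps (pressure, time) to a friction value in [0, 1].
import math

def softness(amount):
    """The harder the press, the more friction, scaled by the amount."""
    return lambda pressure, t: min(1.0, pressure * amount)

def oscillation(frequency, strength):
    """Periodic friction pulses over time, independent of pressure."""
    return lambda pressure, t: strength * (0.5 + 0.5 * math.sin(2 * math.pi * frequency * t))

def combine(*behaviors):
    """Combine behaviors by taking the strongest friction response."""
    return lambda pressure, t: max(b(pressure, t) for b in behaviors)

# a surface that feels soft but also buzzes gently
soft_buzzing = combine(softness(0.8), oscillation(frequency=5.0, strength=0.3))
friction = soft_buzzing(pressure=0.5, t=0.05)  # a value in [0, 1]
```

Because each behavior is just a function, application code can mix and match them per on-screen region, which matches the layered structure the abstract describes.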
Our preliminary study found that programmers could use the HAPTICTOUCH toolkit to create haptic tabletop applications in a short amount of time.

[Open Access] Is Anyone Looking? Mitigating Shoulder Surfing on Public Displays through Awareness and Protection (2014-03-12)
Brudy, Frederik; Ledo, David; Greenberg, Saul; Butz, Andreas
Displays are growing in size, and are increasingly deployed in semi-public and public areas. When people use these public displays to pursue personal work, they expose their activities and sensitive data to passers-by. In most cases, such shoulder surfing by others is likely voyeuristic rather than a deliberate attempt to steal information. Even so, safeguards are needed. Our goal is to mitigate shoulder-surfing problems in such settings. Our method leverages notions of territoriality and proxemics, where we sense and take action based on the spatial relationships between the passerby, the user of the display, and the display itself. First, we provide participants with awareness of shoulder-surfing moments, which in turn helps both parties regulate their behaviours and mediate further social interactions. Second, we provide methods that protect information when shoulder surfing is detected. Here, users can move or hide information through easy-to-perform explicit actions. Alternately, the system itself can mask information from the passerby’s view when it detects shoulder-surfing moments.

[Open Access] Is Anyone Looking? – Mediating Shoulder Surfing on Public Displays (2014-01-23)
Brudy, Frederik; Ledo, David; Greenberg, Saul
When a person interacts with a display in an open area, sensitive information becomes visible to shoulder-surfing passersby. While a person’s body shields small displays, shielding is less effective as display area increases. To mitigate this problem, we sense the spatial relationships between the passerby, the person, and the display. Awareness of onlookers is provided through visual cues: flashing screen borders, a 3D model mirroring the onlooker’s position and gaze, and an indicator that illustrates their gaze direction. The person can react with a gesture that commands the display to black out personal windows, or to collect them on one side. Alternately, the display will automatically darken screen regions visible to the onlooker, while leaving the display area shielded by the person’s body unaltered (thus allowing the person to continue their actions). The person can also invite the onlooker to collaborate with them via a gesture that reverses these protective mechanisms.

[Open Access] Mobile Proxemic Awareness and Control: Exploring the Design Space for Interaction with a Single Appliance (2013-02-01)
Ledo, David; Greenberg, Saul
Computing technologies continue to advance rapidly, yet appliances have remained a comparatively stagnant class of technology. They are restricted by physical and cost limitations, while still aiming to provide considerable functionality. This leads to limited input capabilities (multiple buttons and button combinations) and limited output capabilities (LEDs, small screens). This video introduces the notion of mobile proxemic awareness and control, whereby a mobile device is used as a medium to reveal information about an appliance’s presence, state, content and controls as a function of proxemics. Through the video, we explore a set of concepts that exploit different proximal distances and levels of information and control. The video illustrates the concepts with two deliberately simple prototypes: a lamp and a radio alarm clock.

[Open Access] OneSpace: Shared Depth-Corrected Video Interaction (2012-12-14)
Tang, Anthony; Ledo, David; Aseniero, Bon Adriel; Boring, Sebastian
Video conferencing commonly employs a video portal metaphor to connect individuals from remote spaces. In this work, we explore an alternate metaphor, a shared depth mirror, where video images of two spaces are merged into a single shared, depth-corrected video. Just as seeing one’s mirror image causes reflective interaction, the shared video space changes the nature of interaction in the video space. We realize this metaphor in OneSpace, where the space respects virtual spatial relationships between people and objects, and in so doing encourages cross-site, full-body interactions. We report preliminary observations of OneSpace in use, describing the role of depth in our participants’ interactions. Based on these observations, we argue that the depth mirror offers new opportunities for shared video interaction.

[Metadata only] Proxemic-Aware Controls: Designing Remote Controls for Ubiquitous Computing Ecologies (ACM, 2015)
Ledo, David; Greenberg, Saul; Marquardt, Nicolai; Boring, Sebastian
Remote controls facilitate interactions at a distance with appliances. However, the complexity, diversity, and increasing number of digital appliances in ubiquitous computing ecologies make it increasingly difficult to: (1) discover which appliances are controllable; (2) select a particular appliance from the large number available; (3) view information about its status; and (4) control the appliance in a pertinent manner. To mitigate these problems we contribute proxemic-aware controls, which exploit the spatial relationships between a person's handheld device and all surrounding appliances to create a dynamic appliance control interface. Specifically, a person can discover and select an appliance by the way one orients a mobile device around the room, and then progressively view the appliance's status and control its features in increasing detail by simply moving towards it. We illustrate proxemic-aware controls of various appliances through various scenarios.
We then provide a generalized conceptual framework that informs future designs of proxemic-aware controls.

[Open Access] Proxemic-Aware Controls: Designing Remote Controls for Ubiquitous Computing Ecologies (2015-02-18)
Ledo, David; Greenberg, Saul; Marquardt, Nicolai; Boring, Sebastian
Remote controls facilitate interactions at a distance with appliances. However, the complexity, diversity, and increasing number of digital appliances in ubiquitous computing ecologies make it increasingly difficult to: (1) discover which appliances are controllable; (2) select a particular appliance from the large number available; (3) view information about its status; and (4) control the appliance in a pertinent manner. To mitigate these problems we contribute proxemic-aware controls, which exploit the spatial relationships between a person’s handheld device and all surrounding appliances to create a dynamic appliance control interface. Specifically, a person can discover and select an appliance by the way one orients a mobile device around the room, and then progressively view the appliance’s status and control its features in increasing detail by simply moving towards it. We illustrate proxemic-aware controls of various appliances through various scenarios. We then provide a generalized conceptual framework that informs future designs of proxemic-aware controls.

[Open Access] Remote Control Design for a Ubiquitous Computing Ecology (2015-01-08)
Ledo, David; Greenberg, Saul
Appliances can facilitate people’s interaction with them by outsourcing their inputs and outputs to remote controls. Remote controls can compensate for constraints in an appliance’s form factor, lessen overall cost, and enable distance interactions. Modern “smart appliances”, which can interconnect with other computational devices, take this one step further: a mobile device can control multiple appliances via custom interfaces with rich interaction capabilities. We foresee ubiquitous computing ecologies where a room may have myriad smart appliances, all potentially controllable via a mobile device. However, this leads to four problems. It is difficult to: (1) discover which appliances are controllable; (2) select an individual appliance from the ecology; (3) view information about an appliance; and (4) pertinently reveal controls. We mitigate these problems by applying the theoretical concepts of proxemic interaction and gradual engagement to the design of mobile remote controls. In particular, our remote control designs mimic the social protocols by which people orient towards and approach one another to mediate interpersonal interactions, except that in our case we mediate person-to-appliance interaction. This thesis contributes a design exploration and a prototype that demonstrate our application of these concepts.
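The gradual-engagement idea running through these proxemic-control works can be sketched as a distance-to-detail mapping. The zone thresholds and level names below are invented for illustration; the actual designs use orientation as well as distance and are not reducible to this rule.

```python
# Hypothetical sketch of gradual engagement in the spirit of proxemic-aware
# controls; thresholds and level names are illustrative assumptions.

def engagement_level(distance_m: float) -> str:
    """Map the distance between a mobile device and an appliance to the
    amount of appliance information and control the device reveals."""
    if distance_m > 4.0:
        return "awareness"       # the appliance merely announces its presence
    elif distance_m > 2.0:
        return "status"          # the device shows the appliance's current state
    elif distance_m > 0.5:
        return "basic_controls"  # frequently used controls appear
    else:
        return "full_controls"   # the complete control interface is revealed

for d in (5.0, 3.0, 1.0, 0.3):
    print(d, engagement_level(d))
```

As the person approaches, the interface progressively discloses more, mirroring how people gradually engage one another socially.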