Browsing by Author "Tang, A."
Now showing 1 - 20 of 20
Body-Centric Interaction: Using the Body as an Extended Mobile Interaction Space (2011). Chen, X.; Tang, A.; Boring, S.; Greenberg, S. [Metadata only]

Constructive Visualization (ACM, 2014). Huron, S.; Carpendale, S.; Thudt, A.; Tang, A.; Mauerer, M. [Metadata only]
If visualization is to be democratized, we need to provide a means for non-experts to create visualizations that allow them to engage with datasets. We present constructive visualization as a new paradigm for the simple creation of flexible, dynamic visualizations. Constructive visualization is simple: the skills required to build and manipulate the visualizations are akin to kindergarten play. It is expressive: one can build within the constraints of the chosen environment. It also supports dynamics, as these visualizations can be rebuilt and adjusted. We describe the conceptual components and processes underlying constructive visualization, and present real-world examples to illustrate the utility of this approach. Because the constructive visualization approach builds on our inherent understanding of and experience with physical building blocks, it enables non-experts to create entirely novel visualizations and to engage with datasets in a manner that would not otherwise have been possible.

Exploring Video Streaming in Public Settings: Shared Geocaching Over Distance Using Mobile Video Chat (ACM, 2014). Procyk, J.; Neustaedter, C.; Pang, C.; Tang, A.; Judge, T. K. [Metadata only]
Our research explores the use of mobile video chat in public spaces by people participating in parallel experiences, where a local and a remote person do the same activity together at the same time. We prototyped a wearable video chat experience and had pairs of friends and family members participate in 'shared geocaching' over distance. Our results show that video streaming works best for navigation tasks but is more challenging to use for fine-grained searching tasks.
Video streaming also creates a very intimate experience with a remote partner, but this can lead to distraction from the 'real world' and even safety concerns. Overall, privacy concerns about streaming from a public space were not typically an issue; however, people tended to rely on assumptions about what was acceptable. The implications are that designers should consider appropriate feedback, user disembodiment, and asymmetry when designing for parallel experiences.

The Fat Thumb: Using the Thumb's Contact Size for Single-Handed Mobile Interaction (ACM, 2012). Boring, S.; Ledo, D.; Chen, X.; Marquardt, N.; Tang, A.; Greenberg, S. [Metadata only]
Modern mobile devices allow a rich set of multi-finger interactions that combine modes into a single fluid act; for example, one finger for panning blending into a two-finger pinch gesture for zooming. Such gestures require the use of both hands: one holding the device while the other is interacting. While on the go, however, only one hand may be available to both hold the device and interact with it. This mostly limits interaction to single-touch (i.e., the thumb), forcing users to switch between input modes explicitly. In this paper, we contribute the Fat Thumb interaction technique, which uses the thumb's contact size as a form of simulated pressure. This adds a degree of freedom, which can be used, for example, to integrate panning and zooming into a single interaction. Contact size determines the mode (i.e., panning with a small size, zooming with a large one), while thumb movement performs the selected mode. We discuss nuances of the Fat Thumb based on the thumb's limited operational range and motor skills when that hand holds the device.
We compared Fat Thumb to three alternative techniques in a task where people had to precisely pan and zoom to a predefined region on a map, and found that the Fat Thumb technique compared well to existing techniques.

The Fat Thumb: Using the Thumb's Contact Size for Single-Handed Mobile Interaction (2011). Boring, S.; Ledo, D.; Chen, X.; Tang, A.; Greenberg, S. [Metadata only]

From Awareness to HCI Education: The CHI'2005 Workshop Papers Suite (2005-03-16). Greenberg, S.; McEwan, G.; Neustaedter, C.; Elliot, K.; Tang, A. [Open Access]
These four papers are a suite of articles presented at workshops (listed in the individual citations) held at the ACM CHI 2005 conference, April 2005.
Saul Greenberg. (2005) HCI Graduate Education in a Traditional Computer Science Department. ACM CHI 2005 Workshop on Graduate Education in Human-Computer Interaction. Organized by Beaudouin-Lafon, M., Foley, J., Grudin, J., Hudson, S., Hollan, J., Olson, J. and Verplank, B.
Gregor McEwan and Saul Greenberg. (2005) Community Bar: Designing for Awareness and Interaction. ACM CHI 2005 Workshop on Awareness Systems: Known Results, Theory, Concepts and Future Challenges. Organized by Panos Markopoulos, Boris de Ruyter, and Wendy Mackay.
Carman Neustaedter, Kathryn Elliot and Saul Greenberg. (2005) Understanding Interpersonal Awareness in the Home. ACM CHI 2005 Workshop on Awareness Systems: Known Results, Theory, Concepts and Future Challenges. Organized by Panos Markopoulos, Boris de Ruyter, and Wendy Mackay.
Anthony Tang and Saul Greenberg. (2005) Supporting Awareness in Mixed Presence Groupware. ACM CHI 2005 Workshop on Awareness Systems: Known Results, Theory, Concepts and Future Challenges.
Organized by Panos Markopoulos, Boris de Ruyter, and Wendy Mackay.

From Focus to Context and Back: Combining Mobile Projectors and Stationary Displays (University of Calgary, 2012). Weigel, M.; Tang, A.; Boring, S.; Marquardt, N.; Greenberg, S. [Metadata only]

From Focus to Context and Back: Combining Mobile Projectors and Stationary Displays (2013). Weigel, M.; Boring, S.; Steimle, J.; Tang, A.; Greenberg, S. [Metadata only]

In-place Annotation of Physical Objects with Pico-Projectors (2013). Tang, R.; Tang, A. [Metadata only]

KinectArms: A Toolkit for Capturing and Displaying Arm Embodiments in Distributed Tabletop Groupware (ACM, 2013). Genest, A. M.; Gutwin, C.; Tang, A.; Kalyn, M.; Ivkovic, Z. [Metadata only]
Gestures are a ubiquitous part of human communication over tables, but when tables are distributed, gestures become difficult to capture and represent. There are several problems: extracting arm images from video, representing the height of the gesture, and making the arm embodiment visible and understandable at the remote table. Current solutions to these problems are often expensive, complex to use, and difficult to set up. We have developed a new toolkit, KinectArms, that quickly and easily captures and displays arm embodiments. KinectArms uses a depth camera to segment the video and determine gesture height, and provides several visual effects for representing arms, showing gesture height, and enhancing visibility. KinectArms lets designers add rich arm embodiments to their systems without undue cost or development effort, greatly improving the expressiveness and usability of distributed tabletop groupware.

Mapping out Work in a Mixed Reality Project Room (ACM, 2015). Reilly, D.; Echenique, A.; Wu, A.; Tang, A.; Edwards, K. [Metadata only]
We present results from a study examining how the physical layout of a project room and the task affect the cognitive maps acquired of a connected virtual environment during mixed-presence collaboration.
Results indicate that a combination of physical layout and task impacts cognitive maps of the virtual space. Participants did not form a strong model of how different physical work regions were situated relative to each other in the virtual world when the tasks performed in each region differed. Egocentric perspectives of multiple displays, enforced by different furniture arrangements, encouraged cognitive maps of the virtual world that reflected these perspectives when the displays were used for the same task. These influences competed or coincided with document-based, audiovisual, and interface cues, influencing collaboration. We consider the implications of our findings for WYSIWIS mappings between real and virtual spaces in mixed-presence collaboration.

Mechanics of Camera Work in Mobile Video Collaboration (ACM, 2015). Jones, B.; Witcraft, A.; Tang, A.; Bateman, S.; Neustaedter, C. [Metadata only]
Mobile video conferencing, where one or more participants are moving about in the real world, enables entirely new interaction scenarios (e.g., asking for help to construct or repair an object, or showing a physical location). While we have a good understanding of the challenges of video conferencing in office or home environments, we do not fully understand the mechanics of camera work (how people use mobile devices to communicate with one another) during mobile video calls. To provide an understanding of what people do in mobile video collaboration, we conducted an observational study where pairs of participants completed tasks using a mobile video conferencing system. Our analysis suggests that people use the camera view deliberately to support their interactions (for example, to convey a message or to ask questions), but the limited field of view and the lack of camera control can make it a frustrating experience.

OneSpace: Shared Depth-Corrected Video Interaction (2013). Ledo, D.; Aseniero, B. A.; Greenberg, S.; Tang, A. [Metadata only]
Video conferencing commonly employs a video portal metaphor to connect individuals from remote spaces. In this work, we explore an alternate metaphor, a shared depth-mirror, where video images of two spaces are fused into a single shared, depth-corrected video space. We realize this metaphor in OneSpace, where the space respects virtual spatial relationships between people and objects as if all parties were looking at a mirror together. We report preliminary observations of OneSpace's use, noting that it encourages cross-site, full-body interactions, and that participants employed the depth cues in their interactions. Based on these observations, we argue that the depth mirror offers new opportunities for shared video interaction in the form of a shared stage.

Physio@Home: Exploring Visual Guidance and Feedback Techniques for Physiotherapy Patients at Home (ACM, 2015). Tang, R.; Yang, X.; Tang, A.; Bateman, S.; Jorge, J. [Metadata only]
Physiotherapy patients exercising at home alone are at risk of re-injury because they do not have corrective guidance from a therapist. To explore solutions to this problem, we designed Physio@Home, a prototype that guides people through pre-recorded physiotherapy exercises using real-time visual guides and multi-camera views. Our design addresses several aspects of corrective guidance, including the plane and range of movement, the positions and angles of joints to maintain, and the extent of movement. We evaluated our design by comparing how closely participants could follow exercise movements in various feedback conditions. Participants were most accurate when using the visual guide and multi-views.
Based on our qualitative findings on the visual complexity of the feedback, we conclude with suggestions for exercise guidance systems.

ProjectorKit: Easing Rapid Prototyping of Interactive Applications for Mobile Projectors (ACM, 2013). Weigel, M.; Boring, S.; Steimle, J.; Marquardt, N.; Greenberg, S.; Tang, A. [Metadata only]
Researchers have developed interaction concepts based on mobile projectors. Yet pursuing work in this area, particularly building projector-based interaction techniques within an application, is cumbersome and time-consuming. To mitigate this problem, we contribute ProjectorKit, a flexible open-source toolkit that eases rapid prototyping of mobile projector interaction techniques.

Showing Real-time Recommendations to Explore the Stages of Reflection and Action (2013). Aseniero, B. A.; Tang, A.; Carpendale, S.; Greenberg, S. [Metadata only]

SPALENDAR: Visualizing a Group's Calendar Events over a Geographic Space on a Public Display (ACM, 2012). Chen, X.; Boring, S.; Carpendale, S.; Tang, A.; Greenberg, S. [Metadata only]
Portable paper calendars (i.e., day planners and organizers) have greatly influenced the design of group electronic calendars. Both use time units (hours, days, weeks, etc.) to organize visuals, with useful information (e.g., event types, locations, attendees) usually presented as (perhaps abbreviated or even hidden) text fields within those time units. The problem is that, for a group, this visual sorting of individual events into time buckets conveys only limited information about the social network of people. For example, people's whereabouts cannot be read 'at a glance' but require examining the text. Our goal is to explore an alternate visualization that can reflect and illustrate group members' calendar events. Our main idea is to display the group's calendar events as spatiotemporal activities occurring over a geographic space, animated over time, all presented on a highly interactive public display.
In particular, our Spalendar (Spatial Calendar) design animates people's past, present, and forthcoming movements between event locations, as well as their static locations. Details of people's events, movements, and locations are progressively revealed and controlled by the viewer's proximity to the display, their identity, and their gestural interactions with it, all of which are tracked by the public display.

STRATOS: Using Visualization to Support Decisions in Strategic Software Release Planning (ACM, 2015). Aseniero, B. A.; Wun, T.; Ledo, D.; Ruhe, G.; Tang, A.; Carpendale, S. [Metadata only]
Software is typically developed incrementally and released in stages. Planning these releases involves deciding which features of the system should be implemented for each release. This is a complex planning process involving numerous trade-offs: constraints and factors that often make decisions difficult. Since the success of a product depends on this plan, it is important to understand the trade-offs between different release plans in order to make an informed choice. We present STRATOS, a tool that simultaneously visualizes several software release plans. The visualization shows several attributes of each plan that are important to planners. Multiple plans are shown in a single layout to help planners find and understand the trade-offs between alternative plans. We evaluated our tool in a qualitative study and found that STRATOS enables a range of decision-making processes, helping participants decide which plan is most suitable.

WaaZam! Supporting Creative Play at a Distance in Customized Video Environments (ACM, 2014). Hunter, S.; Maes, P.; Tang, A.; Inkpen, K. [Metadata only]
We present the design, implementation, and evaluation of WaaZam, a video-mediated communication system designed to support creative play in personalized environments. Users can interact together in sets composed of digital assets layered in 3D space.
The goal of the project is to support creative play and increase social engagement during video sessions of geographically separated families. We focus particularly on understanding the value of customization for families with children ages 6-12. We present interviews with creativity experts, a pilot study, and a formal evaluation of families playing together in four conditions: separate windows, merged windows, digital play sets, and personalized digital environments. We found that playing in the same video space enables new activities and increases social engagement for families. Personalization allows families to customize environments to their needs and supports more creative play activities that embody the imagination of the child.