Browsing by Author "Greenberg, S."
Now showing 1 - 20 of 51
Body-Centric Interaction: Using the Body as an Extended Mobile Interaction Space (2011). Chen, X.; Tang, A.; Boring, S.; Greenberg, S. [Metadata only]

Collected Posters from the Nectar Annual General Meeting (2008-01-07). Greenberg, S.; Brush, A. J.; Carpendale, S.; Diaz-Marino, R.; Elliot, K.; Gutwin, C.; McEwan, G.; Neustaedter, C.; Nunes, M.; Smale, S.; Tee, K. [Open Access]
This report collects eight posters produced by students and associates of the GroupLab Research Group (Dept. of Computer Science, University of Calgary) for the NSERC Nectar Annual General Meeting, held after the ACM CSCW Conference in November 2006 in Banff.

The Continuous Interaction Space: Interaction Techniques Unifying Touch and Gesture On and Above a Digital Surface (Springer, 2013). Marquardt, N.; Jota, R.; Greenberg, S.; Jorge, J. [Metadata only]
The rising popularity of digital table surfaces has spawned considerable interest in new interaction techniques. Most interactions fall into one of two modalities: (1) direct touch and multi-touch (by hand and by tangibles) directly on the surface, and (2) hand gestures above the surface. The limitation is that these two modalities ignore the rich interaction space between them. To move beyond this limitation, we first contribute a unification of these discrete interaction modalities, called the continuous interaction space. The idea is that many interaction techniques can be developed that go beyond these two modalities by leveraging the space between them. That is, we believe the underlying system should treat the space on and above the surface as a continuum, where a person can use touch, gestures, and tangibles anywhere in the space and naturally move between them. Our second contribution illustrates this: we introduce a variety of interaction categories that exploit the space between these modalities. For example, with our Extended Continuous Gestures category, a person can start an interaction with a direct touch and drag, then naturally lift off the surface and continue the drag with a hand gesture over the surface. For each interaction category, we implement an example (or use prior work) that illustrates how that technique can be applied. In summary, our primary contribution is to broaden the design space of interaction techniques for digital surfaces, populating the continuous interaction space with both concepts and examples that emerge from considering this space as a continuum.

Cross-Device Interaction via Micro-mobility and F-formations (ACM, 2012). Marquardt, N.; Hinckley, K.; Greenberg, S. [Metadata only]
GroupTogether is a system that explores cross-device interaction using two sociological constructs. First, F-formations concern the distance and relative body orientation among multiple users, which indicate when and how people position themselves as a group. Second, micro-mobility describes how people orient and tilt devices towards one another to promote fine-grained sharing during co-present collaboration. We sense these constructs using (a) a pair of overhead Kinect depth cameras to sense small groups of people, (b) low-power 8 GHz band radio modules to establish the identity, presence, and coarse-grained relative locations of devices, and (c) accelerometers to detect tilting of slate devices. The resulting system supports fluid, minimally disruptive techniques for co-located collaboration by leveraging the proxemics of people as well as the proxemics of devices.

Dark Patterns in Proxemic Interactions: A Critical Perspective (ACM, 2014). Greenberg, S.; Boring, S.; Vermeulen, J.; Dostal, J. [Metadata only]
Proxemics theory explains people's use of interpersonal distances to mediate their social interactions with others. Within Ubicomp, proxemic interaction researchers argue that people have a similar social understanding of their spatial relations with nearby digital devices, which can be exploited to better facilitate seamless and natural interactions. To do so, both people and devices are tracked to determine their spatial relationships. While interest in proxemic interactions has increased over the last few years, it also has a dark side: knowledge of proxemics may (and likely will) be easily exploited to the detriment of the user. In this paper, we offer a critical perspective on proxemic interactions in the form of dark patterns: ways proxemic interactions can be misused. We discuss a series of these patterns and describe how they apply to these types of interactions. In addition, we identify several root problems that underlie these patterns and discuss potential solutions that could lower their harmfulness.

The Dark Patterns of Proxemic Sensing (IEEE, 2014). Boring, S.; Greenberg, S.; Vermeulen, J.; Dostal, J.; Marquardt, N. [Metadata only]
To be accepted and trusted by the public, proxemic sensing systems must respect people's conception of physical space, make it easy to opt in or out, and benefit users as well as advertisers and other vendors.

The Design and Evolution of TurboTurtle, a Collaborative Microworld for Exploring Newtonian Physics (1995-03-01). Cockburn, A.; Greenberg, S. [Open Access]
This paper describes the evolution and ongoing development of TurboTurtle, a dynamic multi-user microworld for the exploration of Newtonian physics. With TurboTurtle, students can alter the attributes of the simulation environment, such as gravity, friction, and the presence or absence of walls. They can also manipulate the "turtle" (a movable ball) directly. Students can adjust its position, velocity and mass; change its kinetic and potential energy; and apply a force to it by strapping a rocket to its back. Students explore the microworld by manipulating these parameters, and learn concepts by studying the behaviours and interactions that occur. TurboTurtle has gone through three major evolutions. It began as a rudimentary command-line extension to Logo, and became a dynamic simulation environment driven by a graphical user interface. Most recently, TurboTurtle has become "group-aware": several students, each on their own computer, can simultaneously control the microworld and gesture around the shared display. In this final version, teachers can add structure to the group's activities by setting the simulation environment to an interesting state, which includes a set of problems and questions. The rationale behind the major design decisions in each step is presented. We also discuss the technical aspects of making TurboTurtle group-aware with a groupware toolkit called GroupKit.

Designing the Car iWindow: Exploring Interaction through Vehicle Side Windows (ACM, 2013). Li, J.; Sharlin, E.; Greenberg, S.; Rounding, M. [Metadata only]
Interactive vehicle windows can enrich the commuting experience by being informative and engaging, strengthening the connection between passengers and the outside world. We propose a preliminary interaction paradigm to allow a rich and undistracting interaction experience on vehicle side windows. Following this paradigm, we present a prototype, the Car iWindow, and discuss our preliminary design critique of the interaction, based on installing the iWindow in a car and interacting with it while commuting around our campus.

Designing User-, Hand-, and Handpart-Aware Tabletop Interactions with the TOUCHID Toolkit (ACM, 2011). Marquardt, N.; Diaz-Marino, R.; Boring, S.; Greenberg, S. [Metadata only]
Recent work in multi-touch tabletop interaction has introduced many novel techniques that let people manipulate digital content through touch. Yet most only detect touch blobs. This ignores richer interactions that would be possible if we could identify (1) which part of the hand, (2) which side of the hand, and (3) which person is actually touching the surface. Fiduciary-tagged gloves were previously introduced as a simple but reliable technique for providing this information. The problem is that its low-level programming model hinders the way developers can rapidly explore new kinds of user- and handpart-aware interactions. We contribute the TouchID toolkit to solve this problem. It allows rapid prototyping of expressive multi-touch interactions that exploit the aforementioned characteristics of touch input. TouchID provides an easy-to-use event-driven API as well as higher-level tools that facilitate development: a glove configurator to rapidly associate particular glove parts to handparts, and a posture configurator and gesture configurator for registering new hand postures and gestures for the toolkit to recognize. We illustrate TouchID's expressiveness by showing how we developed a suite of techniques that exploits knowledge of which handpart is touching the surface.

The Fat Thumb: Using the Thumb's Contact Size for Single-Handed Mobile Interaction (ACM, 2012). Boring, S.; Ledo, D.; Chen, X.; Marquardt, N.; Tang, A.; Greenberg, S. [Metadata only]
Modern mobile devices allow a rich set of multi-finger interactions that combine modes into a single fluid act, for example, one finger for panning blending into a two-finger pinch gesture for zooming. Such gestures require the use of both hands: one holding the device while the other is interacting. While on the go, however, only one hand may be available to both hold the device and interact with it. This mostly limits interaction to a single touch (i.e., the thumb), forcing users to switch between input modes explicitly. In this paper, we contribute the Fat Thumb interaction technique, which uses the thumb's contact size as a form of simulated pressure. This adds a degree of freedom, which can be used, for example, to integrate panning and zooming into a single interaction. Contact size determines the mode (i.e., panning with a small size, zooming with a large one), while thumb movement performs the selected mode. We discuss nuances of the Fat Thumb based on the thumb's limited operational range and motor skills when that hand holds the device. We compared Fat Thumb to three alternative techniques in a task where people had to precisely pan and zoom to a predefined region on a map, and found that Fat Thumb compared well to the existing techniques. (An illustrative sketch of this contact-size mode switch appears after this listing.)

The Fat Thumb: Using the Thumb's Contact Size for Single-Handed Mobile Interaction (2011). Boring, S.; Ledo, D.; Chen, X.; Tang, A.; Greenberg, S. [Metadata only]

From Awareness to HCI Education: The CHI'2005 Workshop Papers Suite (2005-03-16). Greenberg, S.; McEwan, G.; Neustaedter, C.; Elliot, K.; Tang, A. [Open Access]
These four papers are a suite of articles presented at workshops (listed in the individual citations) held at the ACM CHI 2005 conference, April 2005.
Saul Greenberg (2005). HCI Graduate Education in a Traditional Computer Science Department. ACM CHI 2005 Workshop on Graduate Education in Human-Computer Interaction. Organized by Beaudouin-Lafon, M., Foley, J., Grudin, J., Hudson, S., Hollan, J., Olson, J. and Verplank, B.
Gregor McEwan and Saul Greenberg (2005). Community Bar: Designing for Awareness and Interaction. ACM CHI 2005 Workshop on Awareness Systems: Known Results, Theory, Concepts and Future Challenges. Organized by Panos Markopoulos, Boris de Ruyter and Wendy Mackay.
Carman Neustaedter, Kathryn Elliot and Saul Greenberg (2005). Understanding Interpersonal Awareness in the Home. ACM CHI 2005 Workshop on Awareness Systems: Known Results, Theory, Concepts and Future Challenges. Organized by Panos Markopoulos, Boris de Ruyter and Wendy Mackay.
Anthony Tang and Saul Greenberg (2005). Supporting Awareness in Mixed Presence Groupware. ACM CHI 2005 Workshop on Awareness Systems: Known Results, Theory, Concepts and Future Challenges. Organized by Panos Markopoulos, Boris de Ruyter and Wendy Mackay.

From Awareness to TeamRooms, GroupWeb and TurboTurtle: Eight Snapshots of Recent Work in the GroupLab Project (1995-12-01). Greenberg, S.; Gutwin, C.; Cockburn, A.; Roseman, M. [Open Access]
This report contains eight short papers that serve as snapshots of recent work by members and collaborators of the GroupLab team. All papers are concerned with groupware, and all but one of the systems described were implemented using GroupKit, our groupware toolkit. The first five papers are a suite of articles that considers how awareness of others can be supported in groupware systems. The papers cover theoretical considerations of awareness (#2), practical efforts in building systems and widgets to support awareness (#1, 3, 4), and evaluation of widgets to determine their effectiveness and usability (#5).
Suite Overview: Supporting Awareness of Others in Groupware
1. Peepholes: Low Cost Awareness of One's Community
2. Workspace Awareness for Groupware
3. Workspace Awareness Support With Radar Views
4. A Fisheye Text Editor for Relaxed-WYSIWIS Groupware
5. A Usability Study of Workspace Awareness Widgets
The next three papers cover individual projects. TeamRooms (#6) is a groupware equivalent of a physical team room: group members can stock the room with applications, and can enter the room at any time to continue their work individually or collectively. GroupWeb (#7) is a World Wide Web browser that is group-aware: people can share their views of pages in real time, gesture around them with telepointers, and add group annotations to a page with a groupware editor. TurboTurtle (#8) is a microworld for Newtonian physics designed for children; children were observed using TurboTurtle, and their collaboration styles are analyzed.
6. TeamRooms: Groupware for Shared Electronic Spaces
7. GroupWeb: A WWW Browser as Real Time Groupware
8. Children's Collaboration Styles in a Newtonian MicroWorld

From Focus to Context and Back: Combining Mobile Projectors and Stationary Displays (University of Calgary, 2012). Weigel, M.; Tang, A.; Boring, S.; Marquardt, N.; Greenberg, S. [Metadata only]

From Focus to Context and Back: Combining Mobile Projectors and Stationary Displays (2013). Weigel, M.; Boring, S.; Steimel, J.; Tang, A.; Greenberg, S. [Metadata only]

A Gameroom of Our Own: Exploring the Domestic Gaming Environment (2010-07-02). Voida, A.; Greenberg, S. [Open Access]
Digital gaming plays out within different environments, from arcades to virtual worlds to the family living room. Each of these gaming environments offers different constraints and affordances for gaming. As gaming environments change, so do the kinds of games people play, the populations of gamers that gather, and the social interactions surrounding gaming. In this paper, we explore the domestic gaming environment. We examine data suggesting that the domestic environment is now the most common environment for gaming. We characterize existing domestic gaming environments and contrast them with participants' visions of their ideal gaming environment; these findings suggest that the participants in this study wanted gaming environments that would embody a technologically mediated hospitality.

A Gameroom of Our Own: Exploring the Domestic Gaming Environment (2010). Voida, A.; Greenberg, S. [Metadata only]

Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer (ACM, 2012). Marquardt, N.; Ballendat, T.; Boring, S.; Greenberg, S.; Hinckley, K. [Metadata only]
The increasing number of digital devices in our environment enriches how we interact with digital content. Yet cross-device information transfer, which should be a common operation, is surprisingly difficult. One has to know which devices can communicate, what information they contain, and how information can be exchanged. To mitigate this problem, we formulate the gradual engagement design pattern, which generalizes prior work in proxemic interactions and informs future system designs. The pattern describes how we can design device interfaces to gradually engage the user by disclosing connectivity and information exchange capabilities as a function of inter-device proximity. These capabilities flow across three stages: (1) awareness of device presence/connectivity, (2) reveal of exchangeable content, and (3) interaction methods for transferring content between devices, tuned to particular distances and device capabilities. We illustrate how we can apply this pattern to design, and show how existing and novel interaction techniques for cross-device transfers can be integrated to flow across its various stages. We explore how techniques differ between personal and semi-public devices, and how the pattern supports interaction among multiple users. (An illustrative sketch of the proximity-to-stage mapping appears after this listing.)

Groupware Toolkits for Synchronous Work (1996-10-01). Greenberg, S.; Roseman, M. [Open Access]
Groupware toolkits let developers build applications for synchronous and distributed computer-based conferencing. This chapter describes four components that we believe toolkits must provide. A run-time architecture automatically manages the creation, interconnection, and communications of both centralized and distributed processes that comprise conference sessions. A set of groupware programming abstractions allows developers to control the behaviour of distributed processes, to take action on state changes, and to share relevant data. Groupware widgets let interface features of value to conference participants be added easily to groupware applications. Session managers let people create and manage their meetings, and are built by developers to accommodate the group's working style. We illustrate the many ways these components can be designed by drawing on our own experiences with GroupKit, and by reviewing approaches taken by other toolkit developers.

The HapticTouch Toolkit: Enabling Exploration of Haptic Interactions (ACM, 2012). Ledo, D.; Nacenta, M.; Marquardt, N.; Boring, S.; Greenberg, S. [Metadata only]
In the real world, touch-based interaction relies on haptic feedback (e.g., grasping objects, feeling textures). Unfortunately, such feedback is absent in current tabletop systems. The previously developed Haptic Tabletop Puck (HTP) aims at supporting experimentation with and development of inexpensive tabletop haptic interfaces in a do-it-yourself fashion. The problem is that programming the HTP (and haptics in general) is difficult. To address this problem, we contribute the HapticTouch toolkit, which enables developers to rapidly prototype haptic tabletop applications. Our toolkit is structured in three layers that enable programmers to: (1) directly control the device, (2) create customized combinable haptic behaviors (e.g., softness, oscillation), and (3) use visuals (e.g., shapes, images, buttons) to quickly make use of these behaviors. In our preliminary exploration we found that programmers could use our toolkit to create haptic tabletop applications in a short amount of time. (An illustrative sketch of combinable haptic behaviors appears after this listing.)