3D User Interfaces: Theory and Practice. Doug A. Bowman, Ernst Kruijff, Joseph J. LaViola, Jr., Ivan Poupyrev.
In this chapter, we answer the question, "What are 3D user interfaces?" We describe the goals of 3D UI design and list some application areas for 3D user interfaces. Keeping these applications in mind as you progress through the book will help provide a concrete reference point for some of the more abstract concepts we discuss.
Modern computer users have become intimately familiar with a specific set of UI components, including input devices such as the mouse and touchscreen; output devices such as the monitor, tablet, or cell phone display; interaction techniques such as drag-and-drop and pinch-to-zoom; interface widgets such as pull-down menus; and UI metaphors such as the Windows, Icons, Menus, Pointer (WIMP) desktop metaphor (van Dam). These interface components, however, are often inappropriate for the nontraditional computing environments and applications under development today.
For example, a virtual reality (VR) user wearing a fully immersive head-worn display (HWD) won't be able to see the physical world, making the use of a keyboard impractical. An HWD in an augmented reality (AR) application may have limited resolution, forcing the redesign of text-intensive interface components such as dialog boxes. A VR application may allow a user to place an object anywhere in 3D space, with any orientation, a task for which a 2D mouse is inadequate.
Some of these new components may be simple refinements of existing components; others must be designed from scratch.
In this book, we describe and analyze the components (devices, techniques, metaphors) that can be used to design 3D user interfaces. We also provide guidance in choosing the components for particular systems, based on empirical evidence from published research, anecdotal evidence from colleagues, and personal experience.
Why is the information in this book important? We had five main motivations for producing this book. To begin with, 3D interaction is relevant to real-world tasks: interacting in three dimensions makes intuitive sense for a wide range of applications (see section 1).
For example, virtual environments (VEs) can provide users with a sense of presence (the feeling of "being there" that comes from replacing the physical environment with the virtual one), which makes sense for applications such as gaming, training, and simulation. If a user can interact using natural skills, then the application can take advantage of the fact that the user already has a great deal of knowledge about the world.
Also, 3D UIs may be more direct or immediate; that is, there is a short cognitive distance between a user's action and the system's feedback that shows the result of that action. This can allow users to build up complex mental models of how a simulation works, for example. In addition, the technology behind 3D UIs is becoming mature, and UIs for computer applications are becoming more diverse.
Mice, keyboards, windows, menus, and icons, the standard parts of traditional WIMP interfaces, are still prevalent, but nontraditional devices and interface components are proliferating rapidly, and not just on mobile devices.
These components include spatial input devices such as trackers, 3D pointing devices, and whole-hand devices that allow gesture-based input. Multisensory 3D output technologies, such as stereoscopic projection displays, high-resolution HWDs, spatial audio systems, and haptic devices, are also becoming more common, and some of them are now even considered consumer electronics products.
With this technology, a variety of problems have also been revealed. People often find it inherently difficult to understand 3D spaces and to perform actions in free space (Herndon et al.). Although we live and act in a 3D world, the physical world contains many cues for understanding, and constraints and affordances for action, that cannot currently be represented accurately in a computer simulation.
Therefore, great care must go into the design of UIs and interaction techniques for 3D applications. It is clear that simply adapting traditional WIMP interaction styles to 3D does not provide a complete solution to this problem. Rather, novel 3D UIs based on real-world interaction or other metaphors must be developed. Moreover, current 3D UIs either are straightforward or lack usability. There are already some applications of 3D UIs used by real people in the real world.
Most of these applications, however, contain 3D interaction that is not very complex. For example, the interaction in current VR entertainment applications such as VR films is largely limited to rotating the viewpoint. More complex 3D interfaces for applications such as modeling and design, education, scientific visualization, and psychomotor training are difficult to design and evaluate, often leading to a lack of usability or a low-quality user experience.
While improved technology can help, better technology alone will not solve the problem; for example, over 40 years of AR technology research have not ensured that today's AR systems are usable. Thus, a more thorough treatment of this subject is needed. Finally, development of 3D UIs is one of the most exciting areas of research in human-computer interaction (HCI) today, providing a new frontier for innovation in the field.
A wealth of basic and applied research and development opportunities are available for those with a solid background in 3D interaction.
The technology sector loves acronyms and jargon, and precise terminology can make life easier as long as everyone agrees about the meaning of a particular term.
This book is meant to be accessible to a broad audience, but we still find it useful to employ precise language. Here we present a glossary of some terms that we use throughout the book.
We begin with a set of general terms from the field of HCI that are used in later definitions: human-computer interaction (HCI) A field of study that examines all aspects of the interplay between people and interactive technologies. One way to think about HCI is as the process of communication between human users and computers (or interactive technologies in general). Users communicate actions, intents, goals, queries, and other such needs to computers. Computers, in turn, communicate to the user information about the world, about their internal state, about the responses to user queries, and so on.
This communication may involve explicit dialog, or turn-taking, in which a user issues a command or query, the system responds, and so on, but in most modern computer systems, the communication is more implicit, free form, or even imperceptible (Hix and Hartson). user interface (UI) The medium through which this communication takes place. The UI translates a user's actions and state (inputs) into a representation the computer can understand and act upon, and it translates the computer's actions and state (outputs) into a representation the human user can understand and act upon (Hix and Hartson). input device A physical hardware device allowing communication from the user to the computer.
degrees of freedom (DOF) The number of independent dimensions of the motion of a body. DOF can be used to describe the input possibilities provided by input devices, the motion of a complex articulated object such as a human arm and hand, or the possible movements of a virtual object. output device A physical hardware device allowing communication from the computer to the user. Output devices are also called displays and can refer to the display of any sort of sensory information, not only visual output. interaction technique A method allowing a user to accomplish a task via the UI, consisting of both hardware and software components. The interaction technique's software component is responsible for mapping the information from the input device or devices into some action within the system and for mapping the output of the system to a form that can be displayed by the output device or devices.
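To make the interaction-technique definition concrete, here is a minimal sketch (with illustrative names that are not from the book) of the software component of a simple technique: a one-to-one mapping from a 6-DOF tracker report to the pose of a grabbed virtual object, in the style of a basic virtual-hand technique.

```python
from dataclasses import dataclass

@dataclass
class TrackerSample:
    """One report from a 6-DOF tracker: 3 position + 3 orientation DOF."""
    position: tuple        # (x, y, z) in meters
    orientation: tuple     # (yaw, pitch, roll) in degrees

@dataclass
class VirtualObject:
    position: tuple
    orientation: tuple

def simple_virtual_hand(sample: TrackerSample, obj: VirtualObject,
                        offset=(0.0, 0.0, 0.0)) -> VirtualObject:
    """Isomorphic mapping: the grabbed object follows the tracker
    one-to-one, preserving a fixed positional offset."""
    px, py, pz = sample.position
    ox, oy, oz = offset
    return VirtualObject(position=(px + ox, py + oy, pz + oz),
                         orientation=sample.orientation)

# Usage: each new tracker report drives the object's pose directly.
sample = TrackerSample(position=(0.1, 1.2, -0.5), orientation=(90.0, 0.0, 0.0))
obj = simple_virtual_hand(sample, VirtualObject((0, 0, 0), (0, 0, 0)))
```

More sophisticated techniques differ mainly in this mapping function (e.g., scaling or offsetting the motion) rather than in the device itself.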
usability The characteristics of an artifact (usually a device, interaction technique, or complete UI) that affect the user's use of the artifact. There are many aspects of usability, including ease of use, user task performance, user comfort, and system performance (Hix and Hartson). user experience (UX) A broader concept encompassing a user's entire relationship with an artifact, including not only usability but also usefulness and emotional factors such as fun, joy, pride of ownership, and perceived elegance of design (Hartson and Pyla). UX evaluation The process of assessing or measuring some aspects of the user experience of a particular artifact.
Using this HCI terminology, we define 3D interaction and 3D user interface: 3D interaction Human-computer interaction in which the user's tasks are performed directly in a real or virtual 3D spatial context. 3D user interface (3D UI) A UI that involves 3D interaction.
Interactive systems that display 3D graphics do not necessarily involve 3D interaction; for example, if a user tours a model of a building on her desktop computer by choosing viewpoints from a traditional menu, no 3D interaction has taken place.
On the other hand, 3D interaction does not necessarily mean that 3D input devices are used; for example, in the same application, if the user clicks on a target object to navigate to that object, then the 2D mouse input has been directly translated into a 3D virtual location; we consider this to be a form of 3D interaction.
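The translation of a 2D click into a 3D location is typically done by ray casting. The following sketch (assuming an OpenGL-style pinhole camera looking down -Z; all names are illustrative, not from the book) converts a mouse position into a 3D pick ray that an application could then intersect with scene objects:

```python
import math

def click_to_ray(mx, my, width, height, fov_y_deg, aspect):
    """Convert a 2D mouse click into a unit 3D pick-ray direction in
    camera space, assuming a pinhole camera looking down -Z."""
    # Normalized device coordinates in [-1, 1]
    ndx = (2.0 * mx / width) - 1.0
    ndy = 1.0 - (2.0 * my / height)   # flip: screen y grows downward
    tan_half = math.tan(math.radians(fov_y_deg) / 2.0)
    dx = ndx * tan_half * aspect
    dy = ndy * tan_half
    dz = -1.0
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / length, dy / length, dz / length)

# A click in the exact center of the screen looks straight ahead:
center = click_to_ray(400, 300, 800, 600, fov_y_deg=60.0, aspect=800 / 600)
```

The 2D input thus yields a 3D direction from the eye; the first object the ray hits becomes the 3D target of the click.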
In this book, however, we focus primarily on 3D interaction that involves real 3D spatial input such as hand gestures or physical walking. Desktop 3D interaction requires different interaction techniques and design principles. We cover some desktop and multi-touch 3D interaction techniques in Chapters 7-9 but emphasize interaction with 3D spatial input throughout the book.
Finally, we define some technological areas in which 3D UIs are used: virtual environment (VE) A synthetic, spatial (usually 3D) world seen from a first-person point of view. The view in a virtual environment is under the real-time control of the user.
Probably the best candidate for self-contained 6-DOF tracking is inside-out vision-based tracking, in which the tracked object uses a camera to view the world and analyzes the changes in this view over time to understand its own motion (translations and rotations).
Although this approach is inherently relative, such systems can keep track of "feature points" in the scene to give a sort of absolute tracking in a fixed coordinate system connected with the scene. Three recent tracking developments deserve special mention, as they are bringing many new designers and researchers into the realm of 3D UIs. The first is the Nintendo Wii Remote.
This gaming peripheral does not offer 6-DOF tracking but does include several inertial sensors in addition to a simple optical tracker that can be used to move a cursor on the screen. Wingrave and colleagues (Wingrave et al.) presented a nice discussion of how the Wii Remote differs from traditional trackers and how it can be used in 3D UIs. Second, the Microsoft Kinect (Figure 1) delivers tracking in a very different way.
Rather than tracking a handheld device or a single point on the user's head, it uses a depth camera to track the user's entire body (a skeleton of about 20 points). The 3-DOF position of each point is measured, but orientation is not detected. And since it tracks the body directly, no "controller" is needed. Researchers have designed some interesting 3D interactions with Kinect.
The third is the Leap Motion controller. It has the potential to make 3D interaction a standard part of the desktop computing experience, but we will have to wait and see how best to design interaction techniques for this device.
It will share many of the benefits and drawbacks of the Kinect, and although it is designed to support "natural" interaction, naturalism is not always possible, and not always the best solution, as we will discuss below. For 3D interaction, spatial trackers are most often used inside handheld devices. These devices typically include other inputs such as buttons, joysticks, or trackballs, making them something like a "3D mouse."
Trackers are also used to measure the user's head position and orientation. Head tracking is useful for modifying the view of a 3D environment in a natural way.
The type of spatial tracker used in a 3D UI can have a major impact on its usability, and different trackers may require different UI designs. For example, a tracker with higher latency might not be appropriate for precise object manipulation tasks, and an interface using a 3-DOF orientation tracker requires additional methods for translating the viewpoint in the 3D environment, since it does not track the user's position.
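One common way to supply the missing translation is gaze-directed steering: the viewpoint moves along the view direction computed from the orientation tracker while a button is held. A minimal sketch, under an assumed yaw/pitch convention (names are illustrative):

```python
import math

def forward_vector(yaw_deg, pitch_deg):
    """View direction from a 3-DOF orientation tracker. Roll does not
    affect the forward vector, so only yaw and pitch are used."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            -math.cos(pitch) * math.cos(yaw))

def steer(position, yaw_deg, pitch_deg, speed, dt, button_down):
    """Gaze-directed steering: translate along the view direction while
    a button is held, compensating for the missing positional DOFs."""
    if not button_down:
        return position
    fx, fy, fz = forward_vector(yaw_deg, pitch_deg)
    x, y, z = position
    return (x + fx * speed * dt, y + fy * speed * dt, z + fz * speed * dt)

# Looking straight ahead (yaw=0, pitch=0) moves the viewpoint along -Z:
pos = steer((0.0, 0.0, 0.0), 0.0, 0.0, speed=2.0, dt=0.5, button_down=True)
```

The button is essential here: orientation alone cannot signal when the user wants to move versus merely look around.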
This short section can't do justice to the complex topic of spatial tracking.
As noted above, most handheld trackers include other sorts of input, because it's difficult to map all interface actions to position, orientation, or motion of the tracker. For example, to confirm a selection action, a discrete event or command is needed, and a button is much more appropriate for this than a hand motion. The InterSense IS wand is typical of such handheld trackers; it includes four standard buttons, a "trigger" button, and a 2-DOF analog joystick (which is also a button) in a handheld form factor.
The Kinect, because of its "controller-less" design, suffers from the lack of discrete inputs such as buttons. Generalizing this idea, we can see that almost any sort of input device can be made into a spatial input device by tracking it.
Usually this requires adding some hardware to the device, such as optical tracking markers. This extends the capability and expressiveness of the tracker, and allows the input from the device to be interpreted differently depending on its position and orientation.
For example, in my lab we have experimented with tracking multi-touch smartphones and combining the multi-touch input with the spatial input for complex object manipulation interfaces (Wilkes et al.). Other interesting devices, such as bend-sensitive tape, can be tracked to provide additional degrees of freedom (Balakrishnan et al.).
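One way such a combination can work, sketched below under assumed conventions (yaw-only orientation, motion in the world XZ plane; the names are illustrative, not from any cited system), is to reinterpret a 2D touch drag in world space using the tracked device's orientation:

```python
import math

def touch_drag_to_world(dx, dy, device_yaw_deg):
    """Interpret a 2D touch drag on a tracked handheld in world space:
    the same finger motion maps to different world directions depending
    on which way the device is facing (spatial + touch input combined)."""
    yaw = math.radians(device_yaw_deg)
    # Rotate the drag vector from the device's local frame into world XZ.
    wx = dx * math.cos(yaw) - dy * math.sin(yaw)
    wz = dx * math.sin(yaw) + dy * math.cos(yaw)
    return (wx, wz)

# The same rightward drag moves an object along +X when the device faces
# forward, but along +Z when the device is turned 90 degrees:
facing_forward = touch_drag_to_world(1.0, 0.0, 0.0)
turned = touch_drag_to_world(1.0, 0.0, 90.0)
```

This illustrates the general point: tracking the device lets identical touch input be interpreted differently depending on the device's pose.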
Gloves or finger trackers are another type of input device that is frequently combined with spatial trackers. Pinch gloves detect contacts between the fingers, while data gloves and finger trackers measure joint angles of the fingers. Combining these with trackers allows for interesting, natural, and expressive use of hand gestures, such as in-air typing (Bowman et al.).
Increasingly, however, 3D interaction is taking place with TVs or even desktop monitors, due to the use of consumer-level tracking devices meant for gaming. Differences in display configuration and characteristics can have a major impact on the design and usability of 3D UIs.
HMDs (Figure 2) provide a full 360-degree surround when combined with head tracking and can block out the user's view of the real world, or enhance the view of the real world when used in AR systems. When used for VR, HMDs keep users from seeing their own hands or other parts of their bodies, meaning that devices must be usable eyes-free and that users may be hesitant to move around in the physical environment.
Among other considerations, for 3D UIs this means that the designer must provide a way for the user to rotate the world. The mixture of physical and virtual viewpoint rotation can be confusing and can reduce performance on tasks like visual search (McMahan). With desktop monitors and TVs, however, we may not know the size of the display or the user's position relative to it, so determining the appropriate software field of view (FOV) is difficult. This in turn may influence the user's ability to understand the scale of objects being displayed.
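When the display size and viewing distance are known, the matching software FOV follows from simple trigonometry: the virtual camera's frustum should subtend the same visual angle as the physical screen does at the user's eye. A small sketch (the function name is illustrative):

```python
import math

def software_fov_deg(display_height_m, viewer_distance_m):
    """Vertical field of view (degrees) that makes rendered geometry
    appear at true scale: the virtual frustum matches the visual angle
    the physical screen subtends at the user's eye."""
    half_angle = math.atan((display_height_m / 2.0) / viewer_distance_m)
    return math.degrees(2.0 * half_angle)

# A 0.6 m tall monitor viewed from 0.6 m subtends about 53 degrees:
fov = software_fov_deg(0.6, 0.6)
```

If the software FOV is larger or smaller than this physical angle, objects appear minified or magnified, which is one source of the scale-perception problems described above.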
Finally, we know that display characteristics can affect 3D interaction performance. Prior research in my lab has shown, for example, that stereoscopic display can improve performance on difficult manipulation tasks (Narayan et al.).
The seminal papers in the field were only written in the mid- to late 1990s, the most-cited book in the field was published in 2004, and the IEEE Symposium on 3D User Interfaces didn't begin until 2006. There is no standard 3D UI (and it's not clear that there could be, given the diversity of input devices, displays, and interaction techniques), and there are few well-established guidelines for 3D UI design.
Thus, it's important to have specific design principles for 3D interaction. The 3D UI book (Bowman et al.) describes many existing interaction techniques; in many cases, these techniques can be reused directly or with slight modifications in new applications. The lists of techniques in the 3D UI book (Bowman et al.) are a good place to start. When existing techniques are not sufficient, new techniques can sometimes be generated by combining existing technique components.
Taxonomies of technique components (Bowman et al.) can help guide this process. On one hand, most of the primary metaphors for the universal tasks have probably been invented already. On the other hand, there are several reasons to believe that new, radically different metaphors remain to be discovered. First, we know the design space of 3D interaction is very large, due to the number of devices and mappings available. Second, 3D interaction design can be magical, limited only by the designer's imagination.
Third, new technologies, such as the Leap Motion device, with the potential for new forms of interaction are constantly appearing. For example, in a recent project in our lab, students used a combination of recent technologies (multi-touch tablet, 3D reconstruction, marker-based AR tracking, and stretch sensors) to enable "AR Angry Birds", a novel form of physical interaction with both real and virtual objects in AR (Figure 3).
Finally, techniques can be designed specifically for specialized tasks in various application domains. For example, we designed domain-specific interaction techniques for object cloning in the architecture and construction domain (Chen and Bowman). When devices and techniques are mismatched in this way, performance suffers. Similarly, there are often problems with the mappings of input DOFs to actions.
When a high-DOF input is used for a task that requires a lower number of DOFs, task performance can be unnecessarily difficult. For example, selecting a menu item is inherently a one-dimensional task. If users need to position their virtual hands within a menu item to select it (a 3-DOF input), the interface requires too much effort.
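A sketch of the lower-effort alternative: reduce the hand's 3-DOF position to the single DOF the menu actually needs. The layout assumptions and names below are illustrative, not from any cited system.

```python
def menu_item_from_hand(hand_y, menu_top_y, item_height, n_items):
    """Reduce 3-DOF hand input to the one DOF a linear menu needs:
    only the hand's height selects an item; x and z are ignored."""
    index = int((menu_top_y - hand_y) / item_height)
    return max(0, min(n_items - 1, index))   # clamp to valid items

# With a 4-item menu whose top is at y=1.5 m and 0.1 m per item, a hand
# at height 1.34 m selects the second item (index 1):
idx = menu_item_from_hand(hand_y=1.34, menu_top_y=1.5, item_height=0.1, n_items=4)
```

Because only height matters, the user never has to hold the hand precisely inside a 3D box, which is exactly the DOF reduction the text recommends.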
A violation of this concept, for example, would be to use a 6-DOF tracker to simultaneously control the 3D position of an object and the volume of an audio clip, since those tasks cannot be integrated by the user.
This can be done by using lower-DOF input devices, by ignoring some of the input DOFs, or by using physical or virtual constraints. For example, placing a virtual 2D interface on a physical tablet prop (Schmalstieg et al.) provides such a physical constraint.
When the user's goal is simple, designers should provide simple and effortless techniques. For example, there are many general-purpose travel techniques that allow users to control the position and orientation of the viewpoint continuously, but if the user simply wants to move to a known landmark, a simple target-based technique may be more appropriate.
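A target-based travel technique can be as simple as moving the viewpoint toward the chosen landmark by a fraction each frame; a minimal sketch (names are illustrative):

```python
def travel_step(position, target, t):
    """Target-based travel: instead of continuous steering, move the
    viewpoint a fraction t along the straight line to a chosen landmark
    (t=1.0 is instant teleportation; smaller t animates the transition)."""
    px, py, pz = position
    tx, ty, tz = target
    return (px + (tx - px) * t,
            py + (ty - py) * t,
            pz + (tz - pz) * t)

# Halfway to a landmark at (10, 0, 4) from the origin:
halfway = travel_step((0.0, 0.0, 0.0), (10.0, 0.0, 4.0), t=0.5)
```

The user's only input is picking the landmark; the system handles all six viewpoint DOFs, matching the effort of the technique to the simplicity of the goal.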