Chapter 1

INTRODUCTION TO HAPTICS
Haptics refers to the modality of touch and associated sensory feedback. Researchers working in the area are concerned with the development, testing, and refinement of tactile and force feedback devices and supporting software that permit users to sense (“feel”) and manipulate three-dimensional virtual objects with respect to such features as shape, weight, surface textures, and temperature. In addition to basic psychophysical research on human haptics, and issues in machine haptics such as collision detection, force feedback, and haptic data compression, work is being done in application areas such as surgical simulation, medical training, scientific visualization, and assistive technology for the blind and visually impaired.

How can a device emulate the sense of touch? Let us consider one of the devices from SensAble Technologies. The 3 DOF (degrees-of-freedom) PHANToM is a small robot arm with three revolute joints, each connected to a computer-controlled electric DC motor. The tip of the device is attached to a stylus that is held by the user. By sending appropriate voltages to the motors, it is possible to exert up to 1.5 pounds of force at the tip of the stylus, in any direction. The basic principle behind haptic rendering is simple: Every millisecond or so, the computer that controls the PHANToM reads the joint encoders to determine the precise position of the stylus. It then compares this position to those of the virtual objects the user is trying to touch. If the user is away from all the virtual objects, a zero voltage is sent to the motors and the user is free to move the stylus (as if exploring empty space). However, if the system detects a collision between the stylus and one of the virtual objects, it drives the motors so as to exert on the user’s hand (through the stylus) a force along the exterior normal to the surface being penetrated. In practice, the user is prevented from penetrating the virtual object just as if the stylus collided with a real object that transmits a reaction to the user’s hand. Different haptic devices—such as Immersion Corporation’s CyberGrasp—operate under the same principle but with different mechanical actuation systems for force generation.

Although the basic principles behind haptics are simple, there are significant technical challenges, such as the construction of the physical devices (cf. Chapter 4), real-time collision detection (cf. Chapters 2 and 5), simulation of complex mechanical systems for precise computation of the reaction forces (cf. Chapter 2), and force control (cf. Chapters 3 and 5). Below we provide an overview of haptics research; we consider haptic devices, applications, haptic rendering, and human factors issues.
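The rendering loop just described can be summarized in a few lines of Python. This is a minimal sketch, not SensAble's actual control software: the device interface (`read_stylus_position`, `send_force`) and the stiffness value are hypothetical stand-ins, and the virtual object is reduced to a flat wall (the half-space z < 0) so that the exterior normal is constant.

```python
import time

STIFFNESS = 800.0   # N/m; hypothetical spring constant for the virtual surface
WALL_Z = 0.0        # the virtual object: the half-space z < WALL_Z

def compute_force(pos):
    """Penalty-based rendering: push back along the exterior normal (+z)
    in proportion to penetration depth; zero force in free space."""
    x, y, z = pos
    penetration = WALL_Z - z
    if penetration <= 0.0:
        return (0.0, 0.0, 0.0)            # free space: no force
    return (0.0, 0.0, STIFFNESS * penetration)

def servo_loop(device, rate_hz=1000):
    """Run the ~1 kHz haptic servo loop sketched in the text."""
    period = 1.0 / rate_hz
    while True:
        pos = device.read_stylus_position()    # hypothetical device API
        device.send_force(compute_force(pos))  # hypothetical device API
        time.sleep(period)   # a real servo loop needs a hard real-time timer
```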
HAPTIC DEVICES
Researchers have been interested in the potential of force feedback devices such as stylus-based masters like SensAble's PHANToM (Salisbury, Brock, Massie, Swarup, & Zilles, 1995; Salisbury & Massie, 1994) as alternative or supplemental input devices to the mouse, keyboard, or joystick. As discussed above, the PHANToM is a small, desk-grounded robot that permits simulation of single fingertip contact with virtual objects through a thimble or stylus. It tracks the x, y, and z Cartesian coordinates and pitch, roll, and yaw of the virtual probe as it moves about a three-dimensional workspace, and its actuators communicate forces back to the user's fingertips as it detects collisions with virtual objects, simulating the sense of touch. The CyberGrasp, from Immersion Corporation, is an exoskeletal device that fits over a 22 DOF CyberGlove, providing force feedback. The CyberGrasp is used in conjunction with a position tracker to measure the position and orientation of the forearm in three-dimensional space. (A newly released model of the CyberGrasp is self-contained and does not require an external tracker.) Similar to the CyberGrasp is the Rutgers Master II (Burdea, 1996; Gomez, 1998; Langrana, Burdea, Ladeji, & Dinsmore, 1997), which has an actuator platform mounted on the palm that gives force feedback to four fingers. Position tracking is done by the Polhemus Fastrak. Alternative approaches to haptic sensing have employed vibrotactile display, which applies multiple small force vectors to the fingertip. For example, Ikei, Wakamatsu, and Fu-
device in a training simulation for palpation of subsurface liver tumors. They modeled tumors as comparatively harder spheres within larger and softer spheres. Realistic reaction forces were returned to the user as the virtual hand encountered the “tumors,” and the graphical display showed corresponding tissue deformation produced by the palpation. Finite Element Analysis was used to calculate the reaction forces corresponding to deformation from experimentally obtained force/deflection curves. Researchers at the Universidade Catolica de Brasilia, Brazil (D’Aulignac & Balaniuk, 1999) have produced a physical simulation system providing graphic and haptic interfaces for an echographic examination of the human thigh, using a spring-damper model defined from experimental data. Machado, Moraes, and Zuffo (2000) have used haptics in an immersive simulator of bone marrow harvest for transplant. Andrew Mor of the Robotics Institute at Carnegie Mellon (Mor, 1998) employed the PHANToM in conjunction with a 2 DOF planar device in an arthroscopic surgery simulation. The new device generates a moment measured about the tip of a surgical tool, thus providing more realistic training for the kinds of unintentional contacts with ligaments and fibrous membranes that an inexperienced resident might encounter. At Stanford, Balaniuk and Costa (2000) have developed a method to simulate fluid-filled objects suitable for interactive deformation by “cutting,” “suturing,” and so on. At MIT, De and Srinivasan (1998) have developed models and algorithms for reducing the computational load required to generate visual rendering of organ motion and deformation and the communication of forces back to the user resulting from tool-tissue contact. They model soft tissue as thin-walled membranes filled with fluid. Force-displacement response is comparable to that obtained in in vivo experiments. At Berkeley, Sastry and his colleagues (Chapter 13, this volume) are engaged in a joint project with the surgery department of the University of California at San Francisco and the Endorobotics Corporation to build dexterous robots for use inside laparoscopic and endoscopic cannulas, as well as tactile sensing and teletactile display devices and masters for surgical teleoperation (2001). Aviles and Ranta of Novint Technologies have developed the Virtual Reality Dental Training System dental simulator (Aviles & Ranta, 1999). They employ a PHANToM with four tips that mimic dental instruments; these can be used to explore simulated materials like hard tooth enamel or dentin. Giess, Evers, and Meinzer (1998) integrated haptic volume rendering with the PHANToM into the presurgical process of classifying liver parenchyma, vessel trees, and tumors. Surgeons at the Pennsylvania State University School of Medicine, in collaboration with Cambridge-based Boston Dynamics, used two PHANToMs in a training simulation in which residents passed simulated needles through blood vessels, allowing them to collect baseline data on the surgical skill of new trainees. Iwata, Yano, and Hashimoto (1998) report the development of a surgical simulator with a “free form tissue” which can be “cut” like real tissue. There are few accounts of any systematic testing and evaluation of the simulators described above.
Gruener (1998), in one of the few research reports with hard data, expresses reservations about the potential of haptics in medical applications; he found that subjects in a telementoring session did not profit from the addition of force feedback to remote ultrasound diagnosis.
Although it is not yet commonplace, a few museums are exploring methods for 3D digitization of priceless artifacts and objects from their sculpture and decorative arts collections, making the images available via CD-ROM or in-house kiosks. For example, the Canadian Museum of Civilization collaborated with Ontario-based Hymarc to use the latter's ColorScan 3D laser camera to create three-dimensional models of objects from the museum's collection (Canarie, Inc., 1998; Shulman, 1998). A similar partnership was formed between the Smithsonian Institution and Synthonic Technologies, a Los Angeles-area company. At Florida State University, the Department of Classics has worked with a team to digitize Etruscan artifacts using the RealScan 3D imaging system from Real 3D (Orlando, Florida), and art historians from Temple University have collaborated with researchers from the Watson Research Laboratory's visual and geometric computing group to create a model of Michelangelo's Pietà, using the Virtuoso shape camera from Visual Interface (Shulman, 1998).

Few museums have yet explored the potential of haptics to allow visitors access to three-dimensional museum objects such as sculpture, bronzes, or examples from the decorative arts. The “hands-off” policies that museums must impose limit appreciation of three-dimensional objects, where full comprehension and understanding rely on the sense of touch as well as vision. Haptic interfaces can allow fuller appreciation of three-dimensional objects without jeopardizing conservation standards, giving museums, research institutes, and other conservators of priceless objects a way to provide the public with a vehicle for object exploration in a modality that could not otherwise be permitted (McLaughlin, Goldberg, Ellison, & Lucas, 1999). At the University of Southern California, researchers at the Integrated Media Systems Center (IMSC) have digitized daguerreotype cases from the collection of the Seaver Center for Western Culture at the Natural History Museum of Los Angeles County and made them available at a PHANToM-equipped kiosk alongside an exhibition of the “real” objects (see Chapter 15, this volume). Bergamasco, Jansson and colleagues (Jansson,
visualization below, in the section “Assistive Technology for the Blind and Visually Impaired.”
Haptics has also been used in aerospace and military training and simulations. There are a number of circumstances in a military context in which haptics can provide a useful substitute information source; that is, there are circumstances in which the modality of touch could convey information that for one reason or another is not available, is not reliably communicated, or is not best apprehended through the modalities of sound and vision. In some cases, combatants may have their view blocked or may not be able to divert attention from a display to attend to other information sources. Battlefield conditions, such as the presence of artillery fire or smoke, might make it difficult to hear or see. Conditions might necessitate that communications be inaudible (Transdimension, 2000). For certain applications, for example where terrain or texture information needs to be conveyed, haptics may be the most efficient communication channel. In circumstances like those described above, haptics is an alternative modality to sound and vision that can be exploited to provide low-bandwidth situation information, commands, and threat warning (Transdimension, 2000). In other circumstances haptics could function as a supplemental information source to sound or vision. For example, users can be alerted haptically to interesting portions of a military simulation, learning quickly and intuitively about objects, their motions, what persons may interact with them, and so on.

At the Army’s National Automotive Center, the SimTLC (Simulation Throughout the Life Cycle) program has used VR techniques to test military ground vehicles under simulated battlefield conditions. One of the applications has been a simulation of a distributed environment where workers at remote locations can collaborate in reconfiguring a single vehicle chassis with different weapons components, using instrumented force-feedback gloves to manipulate the three-dimensional components (National Automotive Center, 1999). The SIRE simulator (Synthesized Immersion Research Environment) at the Air Force Research Laboratory, Wright-Patterson Air Force Base, incorporated data gloves and tactile displays into its program of development and testing of crew station technologies (Wright-Patterson Air Force Base, 1997). Using tasks such as mechanical assembly, researchers at NASA-Ames have been conducting psychophysical studies of the effects of adding a 3 DOF force-feedback manipulandum to a visual display, noting that control and system dynamics have received ample research attention but that the human factors underlying successful haptic display in simulated environments remain to be identified (Ellis & Adelstein, n.d.). The Naval Aerospace Medical Research Laboratory has developed a “Tactile Situation Awareness System” for providing accurate orientation information in land, sea, and aerospace environments. One application of the system is to alleviate problems related to the spatial disorientation that occurs when a pilot incorrectly perceives the attitude, altitude, or motion of his aircraft; some of this error may be attributable to momentary distraction, reduced visibility, or an increased workload. Because the system (a vibrotactile transducer) can be attached to a portable sensor, it can also be used in such applications as extravehicular space exploration activity or Special Forces operations. Among the benefits claimed for integration of haptics with audio and visual displays are increased situation awareness, the ability to track targets and information sources spatially, and silent communication under conditions where sound is not possible or desirable (e.g., hostile environments) (Naval Aerospace Medical Research Laboratory, 2000).
An obvious application of haptics is to the user interface, in particular its repertoire of interaction techniques, loosely considered that set of procedures by which basic tasks, such as opening and closing windows, scrolling, and selecting from a menu, are performed (Kirkpatrick & Douglas, 1999). Indeed, interaction techniques have been a popular application area for 2D haptic mice like the Wingman and I-Feel, which work with the Windows interface to add force feedback to windows, scroll bars, and the like. For some of these force-feedback mice, shapes, textures, and other properties of objects (spring, damping) can be “rendered” with JavaScript and the objects delivered for exploration with the haptic mice via standard Web pages. Haptics offers a natural user interface based on the human gestural system. The resistance and friction provided by stylus-based force feedback add an intuitive feel to such everyday tasks as dragging, sliding levers, and depressing buttons. There are more complex operations, such as concatenating or editing, for which a grasping metaphor may be appropriate. Here the whole-hand force feedback provided by glove-based devices could convey the feeling of stacking or juxtaposing several objects or of plucking an unwanted element from a single object. The inclusion of palpable physics in virtual environments, such as the constraints imposed by walls or the effect of altered gravity on weight, may enhance the success of a user’s interaction with the environment (Adelstein & Ellis, 2000). Sometimes too much freedom to move is inefficient and has users going down wrong paths and making unnecessary errors that system designers could help them avoid by the appropriate use of built-in force constraints that encourage or require the user to do things in the “right” way (Hutchins & Gunn, 1999). Haptics can also be used to constrain the user’s interaction with screen elements, for example, by steering him or her away from unproductive areas for the performance of specific tasks, or making it more difficult to trigger procedures accidentally by increasing the stiffness of the controls.
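As a concrete illustration of such built-in force constraints, the sketch below renders a "gravity well" around a button: inside a capture radius the device pulls the cursor toward the target, making the control easier to acquire and harder to trigger accidentally. This is a generic force-feedback UI technique sketched under assumed parameters, not code from any of the toolkits cited above; the 2D cursor model and gain values are illustrative.

```python
import math

CAPTURE_RADIUS = 0.02   # m; hypothetical size of the gravity well
PULL_GAIN = 400.0       # N/m; hypothetical spring gain toward the target

def gravity_well_force(cursor, target):
    """Attractive 2-D force toward a UI target, active only inside the well.
    The (1 - dist/radius) taper keeps the force continuous at the rim."""
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist >= CAPTURE_RADIUS:
        return (0.0, 0.0)                # outside the well (or dead center)
    taper = 1.0 - dist / CAPTURE_RADIUS
    return (PULL_GAIN * taper * dx, PULL_GAIN * taper * dy)
```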
Most haptic systems still rely heavily on a combined visual/haptic interface. This dual modality is very forgiving in terms of the quality of the haptic rendering. This is because ordinarily the user is able to see the object being touched and naturally persuades herself that the force feedback coming from the haptic device closely matches the visual input. However, in most current haptic interfaces, the quality of haptic rendering is actually poor and, if the
Jansson and Billberger found that both speed and accuracy in shape identification were significantly poorer for the virtual objects. Speed in particular was affected by virtue of the fact that the exploratory procedures most natural to shape identification, grasping and manipulating with both hands, could not be emulated by the single-point contact of the PHANToM tip. They also noted that subject performance was not affected by the type of PHANToM interface (thimble versus stylus). However, shape recognition of virtual objects with the PHANToM was significantly influenced by the size of the object, with larger objects being more readily identified. The authors noted that shape identification with the PHANToM is a considerably more difficult task than texture recognition, in that in the case of the latter a single lateral sweep of the tip in one direction may be sufficient, but more complex procedures are required to apprehend shape. In Chapter 9 of this volume Jansson reports on his work with nonrealistic haptic rendering and with the method of successive presentation of increasingly complex scenes for haptic perception when visual guidance is unavailable.

Multivis (Multimodal Visualization for Blind People) is a project currently being undertaken at the University of Glasgow, which will utilize force feedback, 3D sound rendering, braille, and speech input and output to provide blind users access to complex visual displays. Yu, Ramloll, and Brewster (2000) have developed a multimodal approach to providing blind users access to complex graphical data such as line graphs and bar charts. Among their techniques are the use of “haptic gridlines” to help users locate data values on the graphs. Different lines are distinguished by applying two levels of surface friction to them (“sticky” or “slippery”). Because these features have not been found to be uniformly helpful to blind users, a toggle feature was added so that the gridlines and surface friction could be turned on and off. Subjects in their studies had to use the PHANToM to estimate the x and y coordinates of the minimum and maximum points on two lines. Both blind and sighted subjects were effective at distinguishing lines by their surface friction. Gridlines, however, were sometimes confused with the other lines, and counting the gridlines from right and left margins was a tedious process prone to error. The authors recommended, based on their observations, that lines on a graph should be modeled as grooved rather than raised (“engraving” rather than “embossing”), as the PHANToM tip “slips off” the raised surface of the line. Ramloll, Yu, and their colleagues (2000) note that previous work on alternatives to graphical visualization indicates that for blind persons, pitch is an effective indicator of the location of a point with respect to an axis. Spatial audio is used to assist the user in tasks such as detecting the current location of the PHANToM tip relative to the origin of a curve (Ramloll, Yu, et al., 2000). Pitches corresponding to the coordinates of the axes can be played in rapid succession to give an “overview” picture of the shape of the curve. Such global information is useful in gaining a quick overall orientation to the graph that purely local information can provide only slowly, over time. Ramloll et al. also recommend a guided haptic overview of the borders, axes, and curves—for example, at intersections of axes, applying a force in the current direction of motion along a curve to make sure that the user does not go off in the wrong direction.

Other researchers working in the area of joint haptic-sonification techniques for visualization for the blind include Grabowski and Barner (Grabowski, 1999; Grabowski & Barner, 1998). In this work, auditory feedback—physically modeled impact sound—is integrated with the PHANToM interface. For instance, sound and haptics are integrated such that a virtual object will produce an appropriate sound when struck. The sound varies depending on such factors as the energy of the impact, its location, and the user’s distance from the object (Grabowski, 1999).
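A minimal sketch of the pitch-mapping idea described above for curve overviews follows. It linearly maps each y value of a data series onto a frequency range and plays the tones in rapid succession; the two-octave range and the generic `play_tone` routine are assumptions for illustration, not details of the Glasgow system.

```python
LOW_HZ, HIGH_HZ = 220.0, 880.0   # assumed two-octave output range

def overview_pitches(ys):
    """Map data values to frequencies: higher y -> higher pitch."""
    lo, hi = min(ys), max(ys)
    span = (hi - lo) or 1.0                  # avoid division by zero on flat data
    return [LOW_HZ + (y - lo) / span * (HIGH_HZ - LOW_HZ) for y in ys]

def play_overview(ys, play_tone, tone_ms=60):
    """Play the curve as a rapid tone sequence; play_tone is a stand-in
    for whatever audio backend is available."""
    for hz in overview_pitches(ys):
        play_tone(hz, tone_ms)
```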
ISSUES IN HAPTIC RENDERING
There are several commercial 3D digitizing cameras available for acquiring models of objects, such as the ColorScan and the Virtuoso shape cameras mentioned earlier. The latter uses six digital cameras, five black and white cameras for capturing shape information and one color camera that acquires texture information that is layered onto the triangle mesh. At USC’s IMSC, one of the approaches to the digitization process begins with models acquired from photographs, using a semiautomatic system to infer complex 3-D shapes from photographs (Chen & Medioni, 1997, 1999, 2001). Images are used as the rendering primitives and multiple input pictures are allowed, taken from viewpoints with different position, orientation, and camera focal length. The direct output of the IMSC program is volumetric but is converted to a surface representation for the purpose of graphic rendering. The reconstructed surfaces are quite large, on the order of 40 MB. They are decimated with a modified version of a program for surface simplification using quadric error metrics written by Garland and Heckbert (1997). The LightScribe system (formerly known as the 3Scan system) incorporates stereo vision techniques developed at IMSC, and the process of matching points between images has been fully automated. Other comparable approaches to digitizing museum objects (e.g., Synthonics) use an older version of shape-from-stereo technology that requires the cameras to be calibrated whenever the focal length or relative position of the two cameras is changed.

Volumetric data is used extensively in medical imaging and scientific visualization. Currently the GHOST SDK, which is the development toolkit for the PHANToM, construes the haptic environment as scenes composed of geometric primitives. Huang, Qu, and Kaufman of SUNY-Stony Brook have developed a new interface that supports volume rendering, based on volumetric objects, with haptic interaction. The APSIL library (Huang, Qu, & Kaufman, 1998) is an extension of GHOST. The Stony Brook group has developed successful demonstrations of volume rendering with haptic interaction from computed tomography data of a lobster, a human brain, and a human head, simulating stiffness, friction, and texture solely from the volume voxel density. The development of the new interface may facilitate working directly with the volumetric representations of the objects obtained through view synthesis methods. The surface texture of an object can be displacement mapped with thousands of tiny polygons (Srinivasan & Basdogan, 1997), although the computational demand is such that
sible. Floyd proposes that the server inform the haptic client when the user has penetrated a surface in the environment, and where that contact occurred. The client uses this information to offset the coordinate system the user is operating in so that instead of having significantly penetrated the surface, the user is just within it, computes an appropriate force response, and caches the constraint implicit in the existence of that surface so that forces to impede further progress in that direction are computed on the client alone.

Mark and his colleagues (Mark, Randolph, Finch, van Verth, & Taylor, 1996) have proposed a number of solutions to recurring problems in haptics, such as improving the update rate for forces communicated back to the user. They propose an intermediate representation of force through a “plane and probe” method: a local planar approximation to the surface at the user's hand location is computed, and when the probe or haptic tool penetrates the plane, the force is updated at approximately 1 kHz by the force server, while the application recomputes the position of the plane and updates it at approximately 20 Hz. Balaniuk (1999) has proposed a buffer model to transmit information to the PHANToM at the necessary rate. The buffer can also be used to implement a proxy-based calculation of the haptic forces.

Networked virtual reality (VR) applications may require that force and positional data be transmitted over a communication link between computers where significant and unpredictable delays are the norm, resulting in instability in the haptic system. The potential for significant harm to the user exists in such circumstances due to the forces that the haptic devices can generate. Buttolo, Oboe, Hannaford, and McNeely (1996) note that the addition of force feedback to multiuser environments demands low latency and high collision detection sampling rates. Local area networks (LANs), because of their low communication delay, may be conducive to applications in which users can touch each other, but for wide area networks, or any environment where the demands above cannot be met, Buttolo et al. propose a “one-user-at-a-time” architecture. While some latency can be tolerated in “static” applications with a single user and no effect of the user's action on the 3D object, in collaborative environments where users make modifications to the environment it is important to make sure that any alterations from individual clients are coordinated through the server. In effect the server can queue the users so that only one can modify the object at a time and can lock the object until the new information is uploaded to the server and incorporated into the “official” version of the virtual environment. Then and only then can the next user make a modification. Delay can be tolerated under these conditions because the haptic rendering is done on a local copy of the virtual environment at each user's station.

Hespanha, McLaughlin, and Sukhatme (Chapter 8, this volume) note that latency is a critical factor that governs whether two users can truly share a common haptic experience. They propose an algorithm where the nature of the interaction between two hosts is decided dynamically based on the measured network latency between them. Users on hosts that are near each other (low communication latency) are dynamically added to fast local groups. If the communication latency is high, users are allowed a slower form of interaction where they can touch and feel objects but cannot exert forces on them. Users within a fast local group experience true haptic collaboration since the system is able to resolve the interaction forces between them quickly enough to meet stability criteria.
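The sketch below illustrates the kind of dynamic decision rule Hespanha et al. describe, under assumed numbers: a round-trip-time threshold (30 ms here, an arbitrary illustrative value, not a figure from the chapter) partitions peers into a fast local group with full force interaction and a slower touch-only mode. The `measure_rtt` function is a placeholder for whatever latency probe the system uses.

```python
FAST_GROUP_RTT_S = 0.030   # assumed stability threshold; illustrative only

def choose_interaction_mode(peer, measure_rtt):
    """Pick the interaction mode for a peer from measured network latency."""
    rtt = measure_rtt(peer)            # placeholder latency probe
    if rtt <= FAST_GROUP_RTT_S:
        return "force"                 # fast local group: full haptic coupling
    return "touch-only"                # high latency: feel objects, no forces

def regroup(peers, measure_rtt):
    """Re-evaluate group membership as latencies drift over time."""
    return {p: choose_interaction_mode(p, measure_rtt) for p in peers}
```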
Fukuda and Matsumoto (Chapter 7, this volume) have also addressed the issue of the impact of network delay on collaborative haptic environments. They conducted a study of a multiuser environment with force feedback. They found that the performance of the PHANToM is sensitive to network delay, and that their SCES (Sharing Contact Entity's State) solution demonstrated good performance, as compared to taking no countermeasure against delay. Other approaches for dealing with random time delays, including Transmission Line Modeling and Haptic Dead Reckoning, are considered in Wilson et al. (1999).
A fundamental problem in haptics is to detect contact between the virtual objects and the haptic device (a mouse, a PHANToM, a glove, etc.). Once this contact is reliably detected, a force corresponding to the interaction physics is generated and rendered using the probe. This process usually runs in a tight servo loop within a haptic rendering system. Lin et al. (1998, 1999) have proposed an extensible framework for contact detection that deconstructs the workspace into regions and at runtime identifies the region(s) of potential contacts. The algorithm takes advantage of temporal and spatial coherence by caching the contact geometry from the immediately prior step to perform incremental computations. Mascarenhas et al. (Chapter 5, this volume) report on a recent application of this system to the visualization of polygonal and scientific datasets. The contact detection problem is well studied in computer graphics. The reader is referred to Held (1995) and to Lin and Gottschalk (1998) for a survey.

Another technique for contact detection is to generate the so-called surface contact point (SCP), which is the closest point on the surface to the actual tip of the probe. The force generation can then happen as though the probe were physically at this location rather than within the object. Existing methods in the literature generate the SCP by using the notion of a god-object (Zilles & Salisbury, 1995), which forces the SCP to lie on the surface of the virtual object. A technique which finesses contact point detection using a voxel-based approach to 6 DOF haptic rendering is described in McNeely et al. (1999). The authors use a short-range force field to repel the manipulated object in order to maintain a minimum separation distance between the (static) environment and the manipulated object. At USC’s IMSC, the authors are developing algorithms for SCP generation that use information from the current contact detection cycle and past information from the contact history to predict the next SCP effectively. As a first step, we are experimenting with a well-known linear predictor, the Kalman Filter, by building on our prior results in applying similar techniques to the problem of robot localization (Roumeliotis, Sukhatme, & Bekey, 1999).
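For a primitive like a sphere the SCP has a closed form: project the probe tip radially back onto the surface and render a spring force from the SCP toward the tip, as the god-object formulation constrains it. The sketch below assumes that simple case with an illustrative stiffness value; real scenes need the general contact-detection machinery described above.

```python
import math

STIFFNESS = 800.0   # N/m; illustrative spring gain

def sphere_scp_force(tip, center, radius):
    """Closest point on a sphere's surface to the probe tip, plus the
    penalty force applied when the tip is inside the sphere."""
    d = [t - c for t, c in zip(tip, center)]
    dist = math.sqrt(sum(x * x for x in d)) or 1e-9   # avoid center singularity
    if dist >= radius:
        return None, (0.0, 0.0, 0.0)     # no contact: SCP undefined, zero force
    n = [x / dist for x in d]            # exterior surface normal at the SCP
    scp = tuple(c + radius * ni for c, ni in zip(center, n))
    depth = radius - dist                # penetration depth
    force = tuple(STIFFNESS * depth * ni for ni in n)
    return scp, force
```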
Two requirements drive the force feedback research in haptics: high fidelity rendering and stability. It turns out that these two goals are somewhat conflicting because high fidelity
ronment. Miller, Colgate, and Freeman (1999) extended this work to virtual environments that are not necessarily passive. The drawback of virtual coupling is that it introduces haptic distortion (because the haptic interface is no longer transparent). Hannaford, Ryu, and Kim (Chapter 3, this volume) present a new method to control instability that depends on the time-domain definition of passivity. They define the “Passivity Observer” and the “Passivity Controller,” and show how they can be applied to haptic interface design in place of fixed-parameter virtual couplings. This approach minimizes haptic distortion.

The work described above assumes that the human operator is passive, but poses no other constraints on her behavior. This can lead to small z-width, significant haptic distortion, or both. Tsai and Colgate (1995) tried to overcome this by modeling the human as a more general discrete-time linear time-invariant system. They derive conditions for stability that directly exclude the possibility of periodic oscillations for a virtual environment consisting of a virtual wall. Gillespie and Cutkosky (1996) address the same issue by modeling the human as a second-order continuous-time system. They conclude that to make the approach practical, online estimation of the human mechanical model is needed, because the model’s parameters change from operator to operator and, even with the same operator, from posture to posture. The use of multiple-model supervisory control (Anderson et al., 1999; Hespanha et al., 2001; Morse, 1996) to estimate the operator’s dynamics online promises to bring significant advantages to the field, because it is characterized by very fast adaptation to sudden changes in the process or the control objectives. Such changes are expected in haptics due to the unpredictability of the human-in-the-loop. In fact, it is shown in Hajian and Howe (1995) that changes in the parameters of human limb dynamics become noticeable over periods of time larger than 20 ms.

Although most of the work referenced above focuses on simple prototypical virtual environments, a few researchers have developed systems capable of handling very complex ones. Ruspini and Khatib (Chapter 2, this volume) are among these, having developed a general framework for the dynamic simulation and haptic exploration of complex interaction between generalized articulated virtual mechanical systems. Their simulation tool permits direct “hands-on” interaction with the virtual environment through the haptic interface.
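A time-domain passivity scheme of the kind Hannaford, Ryu, and Kim describe can be sketched in a few lines: the observer integrates the power flowing through the haptic port, and when the running energy goes negative (the port is generating energy) an adaptive damper dissipates the excess. The discretization below is a simplified illustration of that idea, not the chapter's exact formulation.

```python
class PassivityController:
    """Time-domain passivity regulation (simplified sketch).

    Sign convention: force * velocity is the power flowing INTO the
    virtual environment, so a passive port keeps the integral >= 0.
    """

    def __init__(self, dt):
        self.dt = dt
        self.energy = 0.0   # Passivity Observer: running energy at the port

    def regulate(self, force, velocity):
        """Return the force to apply, adding dissipation when needed."""
        self.energy += force * velocity * self.dt
        if self.energy >= 0.0 or abs(velocity) < 1e-6:
            return force                  # port still passive: pass through
        # Passivity Controller: adaptive damper sized to absorb the deficit
        alpha = -self.energy / (self.dt * velocity * velocity)
        self.energy = 0.0                 # the damper dissipates the excess
        return force + alpha * velocity
```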
One of the newest areas in haptics is the search for optimal methods for the description, storage, and retrieval of moving-sensor data of the type generated by haptic devices. With such techniques we can capture the hand or finger movement of an expert performing a skilled movement and “play it back,” so that a novice can retrace the expert’s path, with realistic touch sensation; further, we can calculate the correlation between the two exploratory paths as time series and determine if they are significantly different, which would indicate a need for further training. The INSITE system (Faisal, Shahabi, McLaughlin, & Betz, 1999) is capable of providing instantaneous comparison of two users with respect to duration, speed, acceleration, and thumb and finger forces. Techniques for recording and playing back raw haptic data (Shahabi, Ghoting, Kaghazian, McLaughlin, & Shanbhag, forthcoming; Shahabi, Kolahdouzan, Barish, Zimmermann, Yao, & Fu, 2001) have been developed for the PHANToM and CyberGrasp. Captured data include movement in three dimensions, orientation, and force (contact between the probe and objects in the virtual environment). Shahabi and colleagues address haptic data at a higher level of abstraction in Chapter 14, in which they describe their efforts to understand the semantics of hand actions (see also Eisenstein, Ghandeharizadeh, Huang, Shahabi, Shanbhag, & Zimmermann, 2001).
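A bare-bones version of the record-and-compare idea: log timestamped samples of one channel (say, one axis of stylus position), resample the two paths onto a common time base, and score their similarity with a correlation coefficient. The resampling scheme and the use of plain Pearson correlation are illustrative assumptions; a production system would also need alignment (e.g., time warping) and the force channels.

```python
import statistics

def resample(path, times):
    """Linearly interpolate a path of (t, value) samples at the given times."""
    out, i = [], 0
    for t in times:
        while i + 1 < len(path) and path[i + 1][0] < t:
            i += 1
        (t0, v0), (t1, v1) = path[i], path[min(i + 1, len(path) - 1)]
        w = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
        out.append(v0 + w * (v1 - v0))
    return out

def path_similarity(expert, novice, n=200):
    """Correlation between two recorded 1-D trajectories on a shared clock."""
    t0 = max(expert[0][0], novice[0][0])
    t1 = min(expert[-1][0], novice[-1][0])
    times = [t0 + k * (t1 - t0) / (n - 1) for k in range(n)]
    a, b = resample(expert, times), resample(novice, times)
    return statistics.correlation(a, b)   # requires Python 3.10+
```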
Haptic data compression and evaluation of the perceptual impact of lossy compression of haptic data are further examples of uncharted waters in haptics research (see Ortega, this volume, Chapter 6). Data about the user's interaction with objects in the virtual environment must be continually refreshed if they are manipulated or deformed by user input. If data are too bulky relative to available bandwidth and computational resources, there will be improper registration between what the user sees on screen and what he “feels.” Ortega’s work begins by analyzing data obtained experimentally from the PHANToM and the CyberGrasp, exploring compression techniques, starting with simple approaches (similar to those used in speech coding) and continuing with methods that are more specific to the haptic data. One of two lossy methods to compress the data may be employed: One approach is to use a lower sampling rate; the other is to note small changes during movement. For example, for certain grasp motions not all of the fingers are involved. Further, during the approaching and departing phases tracker data may be more useful than the CyberGrasp data. Vector coding may prove to be more appropriate to encode the time evolution of a multifeatured set of data such as that provided by the CyberGrasp. For cases where the user employs the haptic device to manipulate a static object, compression techniques that rely on knowledge of the object may be more useful than the coding of an arbitrary trajectory in three-dimensional space.
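The two lossy strategies mentioned (dropping the sampling rate, and transmitting only meaningful changes) can be sketched as follows. The decimation factor and dead-band threshold are arbitrary illustrative values, not figures from Ortega's experiments.

```python
def downsample(samples, factor=4):
    """Lossy method 1: keep every Nth sample (lower effective sampling rate)."""
    return samples[::factor]

def deadband_encode(samples, threshold=0.001):
    """Lossy method 2: emit (index, value) only when a channel moves by more
    than the threshold since the last transmitted value."""
    if not samples:
        return []
    encoded = [(0, samples[0])]
    last = samples[0]
    for i, v in enumerate(samples[1:], start=1):
        if abs(v - last) > threshold:
            encoded.append((i, v))
            last = v
    return encoded   # the decoder holds the last value between updates
```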
The many potential applications in industry, the military, and entertainment for force feedback in multiuser environments, where two or more users orient to and manipulate the same objects, have led to work such as that of Buttolo and his colleagues (Buttolo, Oboe, & Hannaford, 1997; Buttolo, Hewitt, Oboe, & Hannaford, 1997; Buttolo, Oboe, Hannaford, & McNally, 1996), who as noted above remind us that adding haptics to multiuser environments creates additional demand for frequent position sampling for collision detection and fast update.

It is also reasonable to assume that in multiuser environments, there may be a heterogeneous assortment of haptic devices with which users interact with the system. One of our primary concerns thus would be to ensure proper registration of the disparate devices with the 3D environment and with each other. Of potential use in this regard is work by Iwata, Yano, and Hashimoto (1997) on LHX (Library for Haptics), which is modular software that can support a variety of different haptic displays. LHX allows a variety of mechanical configurations, supports easy construction of haptic user interfaces, allows networked applica-
like textures, can be multidimensional and suggest candidate dimensions such as variations in the size, height, and shape of elements. Hollins, Faldowski, Rao, and Young (1993) passed samples of 17 textures over the fingertips of subjects whose view of the samples was restricted. The subjects sorted the texture samples into categories based on similarity, and then rated the samples against a series of scales measuring well-established perceptual dimensions such as roughness and hardness, and several other less well studied potential dimensions such as “slippery-sticky.” Co-occurrence data from the sorting task were converted to dissimilarities and submitted to a multidimensional scaling analysis. The researchers reported that there were two clear, orthogonal perceptual dimensions, “rough-smooth” and “soft-hard,” underlying the classification of samples and speculated about a possible third dimension, “springiness.”

Hughes and Jansson (1994) lament the inadequacy of embossed maps and other devices intended to communicate information through the sense of touch, a puzzling state of affairs inasmuch as perception by active touch (purposeful motion of the skin surface relative to the surface of some distal object) appears to be comparatively accurate, and even more accurate than vision in apprehending certain properties such as smoothness (Hughes & Jansson, p. 302). The authors note in their critical review of the literature on active-passive equivalence that active and passive touch (as when a texture is presented to the surface of the fingers, see Hollins et al., 1993) have repeatedly been demonstrated by Lederman and her colleagues (Lederman, 1985; Lederman, Thorne, & Jones, 1986; Loomis & Lederman,
Work reported by Lederman, Thorne, and Jones (1986) indicates that the dominance of one system over the other in texture discrimination tasks is a function of the dimension of judgment being employed. In making judgments of density, the visual system tends to dominate, while the haptic system is most salient when subjects are asked to discriminate textures on the basis of roughness.

Lederman, Klatzky, Hamilton, and Ramsay (1999) studied the psychophysical effects of haptic exploration speed and mode of touch on the perceived roughness of metal objects when subjects used a rigid probe, not unlike the PHANToM stylus (see also Klatzky and Lederman, Chapter 10, this volume). In earlier work, Klatzky and Lederman found that subjects wielding rigid stick-like probes were less effective at discriminating surface textures than with the bare finger. In a finding that points to the importance of tactile arrays to haptic perception, the authors noted that when a subject is actively exploring an object with the bare finger, speed appears to have very little impact on roughness judgments, because subjects may have used kinesthetic feedback about their hand movements; however, when a rigid probe is used, people should become more reliant on vibrotactile feedback, since the degree of displacement of fingertip skin no longer is commensurate with the geometry of the surface texture.
Psychophysical studies of machine haptics are now beginning to accumulate. Experiments performed by von der Heyde and Hager-Ross (1998) have produced classic perceptual errors in the haptic domain: For instance, subjects who haptically sorted cylinders by weight made systematic errors consistent with the classical size-weight illusion. Experiments by Jansson, Faenger, Konig, and Billberger (1998) on shape sensing with blindfolded sighted observers were described above. Ernst and Banks (2001) reported that although vision usually “captures” haptics, in certain circumstances information communicated haptically (via two PHANToMs) assumes greater importance. They found that when noise is added to visual data, the haptic sense is invoked to a greater degree. Ernst and Banks concluded that the extent of capture by a particular sense modality is a function of the statistical reliability of the corresponding sensory input.

Kirkpatrick and Douglas (1999) argue that if the haptic interface does not support certain exploratory procedures, such as enclosing an object in the case of the single-point PHANToM tip, then the quick grasping of shape that enclosure provides will have to be done by techniques that the interface does support, such as tracing the contour of the virtual object. Obviously, this is slower than enclosing. The extent to which the haptic interface supports or fails to support exploratory processes contributes to its usability. Kirkpatrick and Douglas evaluated the PHANToM interface’s support for the task of determining shape, comparing and contrasting its usability in three modes: vision only; haptics only; and haptics and vision combined, in a non-stereoscopic display. When broad exploration is required for quick object recognition, haptics alone is not likely to be very useful when the user is limited to a single finger whose explorations must be recalled and integrated to form an overall impression of shape. Vision alone may fail to provide adequate depth cues (e.g., the curved