CHAPTER 1

Introduction to Haptics

Margaret L. McLaughlin, João P. Hespanha, and Gaurav S. Sukhatme

Haptics refers to the modality of touch and associated sensory feedback. Researchers working in the area are concerned with the development, testing, and refinement of tactile and force feedback devices and supporting software that permit users to sense ("feel") and manipulate three-dimensional virtual objects with respect to such features as shape, weight, surface textures, and temperature. In addition to basic psychophysical research on human haptics, and issues in machine haptics such as collision detection, force feedback, and haptic data compression, work is being done in application areas such as surgical simulation, medical training, scientific visualization, and assistive technology for the blind and visually impaired.

How can a device emulate the sense of touch? Let us consider one of the devices from SensAble Technologies. The 3 DOF (degrees-of-freedom) PHANToM is a small robot arm with three revolute joints, each connected to a computer-controlled electric DC motor. The tip of the device is attached to a stylus that is held by the user. By sending appropriate voltages to the motors, it is possible to exert up to 1.5 pounds of force at the tip of the stylus, in any direction.

The basic principle behind haptic rendering is simple: Every millisecond or so, the computer that controls the PHANToM reads the joint encoders to determine the precise position of the stylus. It then compares this position to those of the virtual objects the user is trying to touch. If the user is away from all the virtual objects, a zero voltage is sent to the motors and the user is free to move the stylus (as if exploring empty space). However, if the system detects a collision between the stylus and one of the virtual objects, it drives the motors so as to exert on the user's hand (through the stylus) a force along the exterior normal to the surface being penetrated. In practice, the user is prevented from penetrating the virtual object just as if the stylus had collided with a real object that transmits a reaction to the user's hand. Different haptic devices—such as Immersion Corporation's CyberGrasp—operate on the same principle but with different mechanical actuation systems for force generation.
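The rendering principle just described amounts to a short control loop. Below is a minimal Python sketch of one iteration, assuming a hypothetical HapticDevice class whose read_position and set_force methods stand in for a real device API, and an illustrative stiffness value; it renders a single virtual sphere with a penalty force along the exterior normal.

```python
import numpy as np

class HapticDevice:
    """Hypothetical stand-in for a real device API (encoder reads and
    motor commands); here it simply stores the last commanded force."""
    def __init__(self):
        self.position = np.zeros(3)   # stylus tip position, meters
        self.force = np.zeros(3)      # commanded tip force, Newtons

    def read_position(self):
        return self.position

    def set_force(self, f):
        self.force = f

def servo_step(device, center, radius, stiffness=800.0):
    """One iteration of the haptic loop: zero force in free space,
    otherwise a penalty force along the exterior normal of a virtual
    sphere, proportional to penetration depth."""
    p = device.read_position()
    d = p - np.asarray(center)
    dist = np.linalg.norm(d)
    if dist >= radius:                      # free space: no force
        device.set_force(np.zeros(3))
    else:                                   # collision detected
        normal = d / max(dist, 1e-9)        # exterior surface normal
        penetration = radius - dist
        device.set_force(stiffness * penetration * normal)
```

In a real system this step runs in a hard real-time loop at roughly 1 kHz; the update rate, together with the stiffness, determines how crisp the rendered surface feels.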
Although the basic principles behind haptics are simple, there are significant technical challenges, such as the construction of the physical devices (cf. Chapter 4), real-time collision detection (cf. Chapters 2 and 5), simulation of complex mechanical systems for precise computation of the reaction forces (cf. Chapter 2), and force control (cf. Chapters 3 and 5). Below we provide an overview of haptics research; we consider haptic devices, applications, haptic rendering, and human factors issues.

HAPTIC DEVICES

Researchers have been interested in the potential of force feedback devices such as stylus-based masters like SensAble's PHANToM (Salisbury, Brock, Massie, Swarup, & Zilles, 1995; Salisbury & Massie, 1994) as alternative or supplemental input devices to the mouse, keyboard, or joystick. As discussed above, the PHANToM is a small, desk-grounded robot that permits simulation of single fingertip contact with virtual objects through a thimble or stylus. It tracks the x, y, and z Cartesian coordinates and the pitch, roll, and yaw of the virtual probe as it moves about a three-dimensional workspace, and its actuators communicate forces back to the user's fingertips as it detects collisions with virtual objects, simulating the sense of touch.

The CyberGrasp, from Immersion Corporation, is an exoskeletal device that fits over a 22 DOF CyberGlove, providing force feedback. The CyberGrasp is used in conjunction with a position tracker to measure the position and orientation of the forearm in three-dimensional space. (A newly released model of the CyberGrasp is self-contained and does not require an external tracker.) Similar to the CyberGrasp is the Rutgers Master II (Burdea, 1996; Gomez, 1998; Langrana, Burdea, Ladeji, & Dinsmore, 1997), which has an actuator platform mounted on the palm that gives force feedback to four fingers. Position tracking is done by the Polhemus Fastrak. Alternative approaches to haptic sensing have employed vibrotactile display, which applies multiple small force vectors to the fingertip. For example, Ikei, Wakamatsu, and Fukuda (1997) developed a vibratory tactile display that presents textures derived from grayscale images.

REPRESENTATIVE APPLICATIONS OF HAPTICS

Museum Display

Although it is not yet commonplace, a few museums are exploring methods for 3D digitization of priceless artifacts and objects from their sculpture and decorative arts collections, making the images available via CD-ROM or in-house kiosks. For example, the Canadian Museum of Civilization collaborated with Ontario-based Hymarc to use the latter's ColorScan 3D laser camera to create three-dimensional models of objects from the museum's collection (Canarie, Inc., 1998; Shulman, 1998). A similar partnership was formed between the Smithsonian Institution and Synthonic Technologies, a Los Angeles-area company. At Florida State University, the Department of Classics has worked with a team to digitize Etruscan artifacts using the RealScan 3D imaging system from Real 3D (Orlando, Florida), and art historians from Temple University have collaborated with researchers from the Watson Research Laboratory's visual and geometric computing group to create a model of Michelangelo's Pietà, using the Virtuoso shape camera from Visual Interface (Shulman, 1998).

Few museums have yet explored the potential of haptics to allow visitors access to three-dimensional museum objects such as sculpture, bronzes, or examples from the decorative arts. The "hands-off" policies that museums must impose limit appreciation of three-dimensional objects, where full comprehension and understanding rely on the sense of touch as well as vision. Haptic interfaces can allow fuller appreciation of three-dimensional objects without jeopardizing conservation standards, giving museums, research institutes, and other conservators of priceless objects a way to provide the public with a vehicle for object exploration in a modality that could not otherwise be permitted (McLaughlin, Goldberg, Ellison, & Lucas, 1999). At the University of Southern California, researchers at the Integrated Media Systems Center (IMSC) have digitized daguerreotype cases from the collection of the Seaver Center for Western Culture at the Natural History Museum of Los Angeles County and made them available at a PHANToM-equipped kiosk alongside an exhibition of the "real" objects (see Chapter 15, this volume).
Bergamasco, Jansson, and colleagues (Jansson, 2001) are undertaking a "Museum of Pure Form"; their group will acquire selected sculptures from the collections of partner museums in a network of European cultural institutions to create a digital database of works of art for haptic exploration.

Haptics raises the prospect of offering museum visitors the opportunity not only to examine and manipulate digitized three-dimensional art objects visually, but also to interact remotely, in real time, with museum staff members in joint tactile exploration of the works of art. A member of the museum's curatorial staff could, for example, interact with a student in a remote classroom: together they could examine an ancient pot or bronze figure, note its interesting contours and textures, and consider such questions as "What is the mark at the base of the pot?" or "Why does this side have such jagged edges?" (Hespanha, Sukhatme, McLaughlin, Akbarian, Garg, & Zhu, 2000; McLaughlin, Sukhatme, Hespanha, Shahabi, Ortega, & Medioni, 2000; Sukhatme, Hespanha, McLaughlin, Shahabi, & Ortega, 2000).

Painting, Sculpting, and CAD

There have been a few projects in which haptic displays are used as alternative input devices for painting, sculpting, and computer-assisted design (CAD). Dillon and colleagues (Dillon, Moody, Bartlett, Scully, Morgan, & James, 2000) are developing a "fabric language" to analyze the tactile properties of fabrics as an information resource for haptic fabric sensing. At CERTEC, the Center of Rehabilitation Engineering in Lund, Sweden, Sjöström (1997) and his colleagues have created a painting application in which the PHANToM can be used by the visually impaired; line thickness varies with the user's force on the fingertip thimble, and colors are discriminated by their tactual profile. At Dartmouth, Henle and Donald (1999) developed an application in which animations are treated as palpable vector fields that can be edited by manipulation with the PHANToM. Marcy, Temkin, Gorman, and Krummel (1998) have developed the Tactile Max, a PHANToM plug-in for 3D Studio Max. Dynasculpt, a prototype from Interval Research Corporation (Snibbe, Anderson, & Verplank, 1998), permits sculpting in three dimensions by attaching a virtual mass to the PHANToM position and constructing a ribbon through the path of the mass through 3D space. Gutierrez, Barbero, Aizpitarte, Carrillo, and Eguidazu (1998) have integrated the PHANToM into DATum, a geometric modeler. Objects can be touched, moved, or grasped (with two PHANToMs), and the assembly/disassembly of mechanical objects can be simulated.
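The Dynasculpt idea, coupling a simulated mass to the device position so that the mass's smoothed trajectory becomes the sculpted ribbon, can be sketched as a small spring-damper simulation. The coupling constants and integration scheme below are illustrative assumptions, not details of the original system.

```python
import numpy as np

def sculpt_ribbon(stylus_path, dt=0.001, mass=0.1, k=50.0, b=1.0):
    """Dynasculpt-style sketch: a virtual mass is tied to the stylus
    by a spring-damper, and its trajectory becomes the centerline of
    the sculpted ribbon. All parameter values are illustrative."""
    x = np.asarray(stylus_path[0], dtype=float)   # mass position
    v = np.zeros(3)                               # mass velocity
    ribbon = [x.copy()]
    for target in stylus_path[1:]:
        f = k * (np.asarray(target) - x) - b * v  # spring-damper coupling
        v += (f / mass) * dt                      # semi-implicit Euler step
        x += v * dt
        ribbon.append(x.copy())
    return np.array(ribbon)
```

Because the inertia of the virtual mass low-pass filters the hand motion, the resulting ribbon is smoother than the raw stylus path.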
Visualization

Haptics has also been incorporated into scientific visualization. Durbeck, Macias, Weinstein, Johnson, and Hollerbach (1998) have interfaced SCIRun, a computational steering system, to the PHANToM. Both the haptic and graphic displays are directed by the movement of the PHANToM stylus through haptically rendered data volumes. Similar systems have been developed for geoscientific applications (e.g., the Haptic Workbench; Veldkamp, Turner, Gunn, & Stevenson, 1998). Green and Salisbury (1997) have produced a convincing soil simulation in which they varied parameters such as soil properties, plow blade geometry, and angle of attack. Researchers at West Virginia University (Van Scoy, Baker, Gingold, Martino, & Burton, 1999) have applied haptics to mobility training. They designed an application in which a real city block and its buildings can be explored with the PHANToM, using models of the buildings created in CANOMA from digital photographs of the scene from the streets. At Interactive Simulations, a San Diego-based company, researchers have added a haptic feedback component to Sculpt, a program for analyzing chemical and biological molecular structures, which will permit analysis of molecular conformational flexibility and interactive docking. At the University of North Carolina, Chapel Hill (Chapter 5, this volume), 6 DOF PHANToMs have been used for haptic rendering of high-dimensional scientific datasets, including three-dimensional force fields and tetrahedralized human head volume datasets. We consider further applications of haptics to visualization below, in the section "Assistive Technology for the Blind and Visually Impaired."

Military Applications

Haptics has also been used in aerospace and military training and simulations. There are a number of circumstances in a military context in which haptics can provide a useful substitute information source; that is, circumstances in which the modality of touch could convey information that for one reason or another is not available, not reliably communicated, or not best apprehended through the modalities of sound and vision. In some cases, combatants may have their view blocked or may not be able to divert attention from a display to attend to other information sources. Battlefield conditions, such as the presence of artillery fire or smoke, might make it difficult to hear or see. Conditions might necessitate that communications be inaudible (Transdimension, 2000). For certain applications, for example where terrain or texture information needs to be conveyed, haptics may be the most efficient communication channel. In circumstances like those described above, haptics is an alternative modality to sound and vision that can be exploited to provide low-bandwidth situation information, commands, and threat warnings (Transdimension, 2000). In other circumstances haptics could function as a supplemental information source to sound or vision. For example, users can be alerted haptically to interesting portions of a military simulation, learning quickly and intuitively about objects, their motions, what persons may interact with them, and so on.

At the Army's National Automotive Center, the SimTLC (Simulation Throughout the Life Cycle) program has used VR techniques to test military ground vehicles under simulated battlefield conditions. One of the applications has been a simulation of a distributed environment in which workers at remote locations can collaborate in reconfiguring a single vehicle chassis with different weapons components, using instrumented force-feedback gloves to manipulate the three-dimensional components (National Automotive Center, 1999). The SIRE (Synthesized Immersion Research Environment) simulator at the Air Force Research Laboratory, Wright-Patterson Air Force Base, incorporated data gloves and tactile displays into its program of development and testing of crew station technologies (Wright-Patterson Air Force Base, 1997).
Using tasks such as mechanical assembly, researchers at NASA-Ames have been conducting psychophysical studies of the effects of adding a 3 DOF force-feedback manipulandum to a visual display, noting that control and system dynamics have received ample research attention but that the human factors underlying successful haptic display in simulated environments remain to be identified (Ellis & Adelstein, n.d.). The Naval Aerospace Medical Research Laboratory has developed a "Tactile Situation Awareness System" for providing accurate orientation information in land, sea, and aerospace environments. One application of the system is to alleviate problems related to the spatial disorientation that occurs when a pilot incorrectly perceives the attitude, altitude, or motion of his aircraft; some of this error may be attributable to momentary distraction, reduced visibility, or an increased workload. Because the system (a vibrotactile transducer) can be attached to a portable sensor, it can also be used in such applications as extravehicular space exploration.

Assistive Technology for the Blind and Visually Impaired

Jansson and Billberger (1999) found that both speed and accuracy in shape identification were significantly poorer for virtual objects than for their real counterparts. Speed in particular suffered because the exploratory procedures most natural to shape identification, grasping and manipulating with both hands, could not be emulated by the single-point contact of the PHANToM tip. They also noted that subject performance was not affected by the type of PHANToM interface (thimble versus stylus). However, shape recognition of virtual objects with the PHANToM was significantly influenced by the size of the object, with larger objects being more readily identified. The authors noted that shape identification with the PHANToM is a considerably more difficult task than texture recognition: in the latter case a single lateral sweep of the tip in one direction may be sufficient, but more complex procedures are required to apprehend shape. In Chapter 9 of this volume Jansson reports on his work with nonrealistic haptic rendering and with the method of successive presentation of increasingly complex scenes for haptic perception when visual guidance is unavailable.

Multivis (Multimodal Visualization for Blind People) is a project currently being undertaken at the University of Glasgow that will utilize force feedback, 3D sound rendering, braille, and speech input and output to provide blind users access to complex visual displays. Yu, Ramloll, and Brewster (2000) have developed a multimodal approach to providing blind users access to complex graphical data such as line graphs and bar charts. Among their techniques is the use of "haptic gridlines" to help users locate data values on the graphs. Different lines are distinguished by applying two levels of surface friction to them ("sticky" or "slippery"). Because these features have not been found to be uniformly helpful to blind users, a toggle feature was added so that the gridlines and surface friction can be turned on and off. Subjects in their studies had to use the PHANToM to estimate the x and y coordinates of the minimum and maximum points on two lines. Both blind and sighted subjects were effective at distinguishing lines by their surface friction. Gridlines, however, were sometimes confused with the other lines, and counting the gridlines from the right and left margins was a tedious process prone to error.
The authors recommended, based on their observations, that lines on a graph be modeled as grooved rather than raised ("engraving" rather than "embossing"), as the PHANToM tip "slips off" the raised surface of the line.

Ramloll, Yu, and their colleagues (2000) note that previous work on alternatives to graphical visualization indicates that for blind persons, pitch is an effective indicator of the location of a point with respect to an axis. Spatial audio is used to assist the user in tasks such as detecting the current location of the PHANToM tip relative to the origin of a curve (Ramloll, Yu, et al., 2000). Pitches corresponding to the coordinates of the axes can be played in rapid succession to give an "overview" picture of the shape of the curve. Such global information is useful in gaining a quick overall orientation to the graph, which purely local information can provide only slowly, over time. Ramloll et al. also recommend a guided haptic overview of the borders, axes, and curves—for example, at intersections of axes, applying a force in the current direction of motion along a curve to make sure that the user does not go off in the wrong direction.

Other researchers working in the area of joint haptic-sonification techniques for visualization for the blind include Grabowski and Barner (Grabowski, 1999; Grabowski & Barner, 1998). In this work, auditory feedback—physically modeled impact sound—is integrated with the PHANToM interface. For instance, sound and haptics are integrated such that a virtual object will produce an appropriate sound when struck. The sound varies depending on such factors as the energy of the impact, its location, and the user's distance from the object (Grabowski, 1999).
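The pitch-mapping technique is easy to make concrete. The sketch below maps the sampled y-values of a curve onto a log-spaced pitch scale and synthesizes a rapid sequence of short tones for a left-to-right audio "overview"; the frequency range, tone length, and sine synthesis are illustrative assumptions, not the cited systems' parameters.

```python
import numpy as np

def curve_to_pitches(y, f_min=220.0, f_max=880.0):
    """Map curve samples to pitches in Hz: higher y means higher
    pitch, so rapid left-to-right playback conveys the overall
    shape of the curve."""
    y = np.asarray(y, dtype=float)
    span = max(y.max() - y.min(), 1e-12)
    t = (y - y.min()) / span                  # normalize to [0, 1]
    return f_min * (f_max / f_min) ** t       # log-spaced pitches

def render_tones(freqs, tone_ms=60, sr=44100):
    """Concatenate short sine tones; the result can be written to a
    WAV file or handed to any audio playback library."""
    n = int(sr * tone_ms / 1000)
    t = np.arange(n) / sr
    return np.concatenate([0.3 * np.sin(2 * np.pi * f * t) for f in freqs])
```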
ISSUES IN HAPTIC RENDERING

Acquisition of Models

There are several commercial 3D digitizing cameras available for acquiring models of objects, such as the ColorScan and Virtuoso shape cameras mentioned earlier. The latter uses six digital cameras: five black-and-white cameras for capturing shape information and one color camera that acquires texture information, which is layered onto the triangle mesh. At USC's IMSC, one approach to the digitization process begins with models acquired from photographs, using a semiautomatic system to infer complex 3D shapes from photographs (Chen & Medioni, 1997, 1999, 2001). Images are used as the rendering primitives, and multiple input pictures are allowed, taken from viewpoints with different position, orientation, and camera focal length. The direct output of the IMSC program is volumetric but is converted to a surface representation for the purpose of graphic rendering. The reconstructed surfaces are quite large, on the order of 40 MB. They are decimated with a modified version of a program for surface simplification using quadric error metrics written by Garland and Heckbert (1997). The LightScribe system (formerly known as the 3Scan system) incorporates stereo vision techniques developed at IMSC, and the process of matching points between images has been fully automated. Other comparable approaches to digitizing museum objects (e.g., Synthonics) use an older version of shape-from-stereo technology that requires the cameras to be recalibrated whenever the focal length or relative position of the two cameras is changed.

Volumetric data is used extensively in medical imaging and scientific visualization. Currently the GHOST SDK, the development toolkit for the PHANToM, construes the haptic environment as scenes composed of geometric primitives. Huang, Qu, and Kaufman of SUNY-Stony Brook have developed a new interface, based on volumetric objects, that supports volume rendering with haptic interaction. Their APSIL library (Huang, Qu, & Kaufman, 1998) is an extension of GHOST. The Stony Brook group has developed successful demonstrations of volume rendering with haptic interaction from computed tomography data of a lobster, a human brain, and a human head, simulating stiffness, friction, and texture solely from the volume voxel density. The development of the new interface may facilitate working directly with the volumetric representations of objects obtained through view synthesis methods.

The surface texture of an object can be displacement mapped with thousands of tiny polygons (Srinivasan & Basdogan, 1997), although the computational demand is such that force discontinuities can occur. More commonly, a "texture field" can be constructed from 2D image data. For example, as described above, Ikei, Wakamatsu, and Fukuda (1997) created textures from images converted to grayscale, then enhanced them to heighten brightness and contrast, such that the level and distribution of intensity correspond to variations in the height of texture protrusions and retractions. Surface texture may also be rendered haptically through techniques like force perturbation, where the direction and magnitude of the force vector are altered using the local gradient of the texture field to simulate effects such as coarseness (Srinivasan & Basdogan, 1997). Synthetic textures, such as wood, sandpaper, cobblestone, rubber, and plastic, may also be created using mathematical functions for the height field (Anderson, 1996; Basdogan, Ho, & Srinivasan, 1997). The ENCHANTER environment (Jansson, Faenger, Konig, & Billberger, 1998) has a texture mapper that can render sinusoidal, triangular, and rectangular textures, as well as textures provided by other programs, for any haptic object provided by the GHOST SDK.
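A minimal sketch of force perturbation over a synthetic height field: the contact force is tilted along the negative local gradient of the texture, so dragging the probe across the surface feels like riding over ridges. The sinusoidal texture, grid layout, and gain below are illustrative, and the probe is assumed to stay in the interior of the grid.

```python
import numpy as np

# Synthetic height field on a regular grid: a sinusoidal "corduroy"
# texture of the kind describable by a mathematical height function.
STEP = 0.001                                   # grid spacing, meters
xs = np.arange(0, 0.2, STEP)
HEIGHT = 0.0005 * np.sin(2 * np.pi * xs / 0.01)[None, :] * np.ones((len(xs), 1))

def perturbed_force(base_force, x, y, gain=0.5):
    """Force perturbation: tilt the contact force along the negative
    local gradient of the texture height field, preserving its
    magnitude, to simulate surface coarseness."""
    i, j = int(y / STEP), int(x / STEP)        # assumed interior point
    dhdx = (HEIGHT[i, j + 1] - HEIGHT[i, j - 1]) / (2 * STEP)
    dhdy = (HEIGHT[i + 1, j] - HEIGHT[i - 1, j]) / (2 * STEP)
    n = np.array([-gain * dhdx, -gain * dhdy, 1.0])   # perturbed normal
    n /= np.linalg.norm(n)
    return np.linalg.norm(base_force) * n
```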
In many applications of haptics, it is desirable to be able to explore and manipulate deformable objects as well as rigid-body objects like vases and teapots. One area that IMSC researchers are beginning to explore is the development of reliable vision-based control systems for robotic applications such as the acquisition of images for 3D modeling. Two topics that have been identified as crucial for the development of such systems for robotic applications (e.g., 3D and 4D modeling for haptics) are the development of self-calibrating control algorithms (Hager, Chang, & Morse, 1995) and the use of single-camera image acquisition systems in feedback control. One can use images of an object taken from multiple viewpoints to construct a 3D model of the object to be used for haptics. To automate the procedure of collecting the multiple views, one needs a camera mounted on a computer-controlled robot arm. This is particularly important for constructing 4D models of objects whose shape is evolving (e.g., a work of art as it is being produced). From a controls perspective the research problem is to build algorithms to position the camera. The desired position can be specified directly in terms of its Cartesian coordinates or indirectly in terms of desired locations of parts of the object in the image. The latter falls in the area of vision-based control and is much more interesting, because the use of vision in the feedback loop allows for great accuracy with less precise, and therefore relatively inexpensive, robotic manipulators.

Latency

The realism of haptic rendering can be adversely affected by slow update rates, as can occur with the extreme computation time required by real-time rendering of deformable objects, or with the delays induced by network congestion and bandwidth limitations in distributed applications. Floyd (1999) deals with the issue of computational latency and haptic fidelity in bit-mapped virtual environments. In traditional systems with some latency there is a lack of fidelity if, say, the user penetrates a virtual object and the lag is such that there is no immediate feedback of force to indicate that a collision has occurred and that penetration is not possible.

Haptic rendering generally calls for high force-feedback gains, which often lead to self-induced oscillations and instability. Inspired by electrical networks, Adams and Hannaford (1999) regard the haptic interface as a two-port system terminated on one side by the human operator and on the other side by the virtual environment (cf. Figure 1-1). The energy exchange between the human operator and the haptic interface is characterized by a force F_h and velocity v_h, whereas the exchange between the interface and the simulated virtual environment is characterized by a force F_e and velocity v_e. For ideal rendering, the haptic interface should be transparent (in the sense that F_h = F_e and v_h = v_e), but stability requirements generally force the designer of the haptic interface to introduce some haptic distortion.

Figure 1-1: Two-port framework for haptic interfaces (human operator – haptic interface – virtual environment).

It is generally assumed that a human operator interacting with a haptic interface behaves passively (Hogan, 1989), in the sense that he or she does not introduce energy into the system. Since most mechanical virtual environments are also passive, the stability of the overall system could be guaranteed by simply designing the interface to be passive. However, as observed by Colgate, Grafing, Stanley, and Schenkel (1993), time-sampling can destroy the natural passivity of a virtual environment. In fact, these authors showed that the smaller the sampling rate, the more energy can be "generated" by a virtual wall.

Several approaches have been proposed to deal with this difficulty. Colgate and Schenkel (1997) determined conditions on the simplest virtual environment (the virtual wall) that guarantee the stability of the haptic interface for any passive human operator. These conditions reflect the fact that the amount of energy generated by a time-discretized virtual wall depends on the sampling rate. They also involve the virtual wall's stiffness and damping coefficient, posing constraints on the range of stiffness/damping parameters that can be rendered. This range is referred to by Colgate and Brown (1994) as the z-width of the haptic interface and is an important measure of its performance.
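The energy "generated" by a time-discretized virtual wall can be seen in a small simulation: the wall force is computed from sampled positions and held constant between samples (zero-order hold) while the probe moves continuously. The sketch below is only a qualitative illustration of the sampling effect analyzed by Colgate and colleagues; all parameter values are illustrative.

```python
def wall_demo(b_device=0.5, K=2000.0, T=0.005, m=0.1, f_user=0.5,
              dt=1e-5, duration=0.5):
    """Qualitative demo of passivity loss in a sampled virtual wall:
    a probe mass, pressed against the wall by a constant user force,
    moves continuously while the wall force is updated only once per
    sample period T. x > 0 means penetration."""
    x, v = 0.0, 0.0
    trace = []
    steps = int(duration / T)
    for _ in range(steps):
        f_wall = -K * x if x > 0 else 0.0     # sampled and held over T
        for _ in range(int(T / dt)):          # continuous motion between samples
            v += (f_wall + f_user - b_device * v) / m * dt
            x += v * dt
        trace.append(abs(x))
    return max(trace[-steps // 5:])           # late-run oscillation amplitude

# With physical damping well above roughly K*T/2 the contact settles;
# with little damping, the held force pumps energy in and the contact
# oscillation persists or grows.
print(wall_demo(b_device=10.0))   # ample damping: settles near equilibrium
print(wall_demo(b_device=0.5))    # little damping: sustained chatter
```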
Adams and Hannaford (1999) followed a distinct approach by advocating the introduction of virtual coupling in the haptic interface so as to guarantee the stability of the system for any continuous-time passive virtual environment, even if its discrete-time version is no longer passive. The virtual coupling can be designed to provide sufficient energy dissipation to guarantee the stability of the overall system for any passive virtual environment. This approach decouples the haptic interface control problem from the design of the virtual environment. Miller, Colgate, and Freeman (1999) extended this work to virtual environments that are not necessarily passive. The drawback of virtual coupling is that it introduces haptic distortion (because the haptic interface is no longer transparent). Hannaford, Ryu, and Kim (Chapter 3, this volume) present a new method to control instability that depends on the time-domain definition of passivity. They define the "Passivity Observer" and the "Passivity Controller" and show how they can be applied to haptic interface design in place of fixed-parameter virtual couplings. This approach minimizes haptic distortion.
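In its scalar, single-port form, the Passivity Observer/Controller idea as summarized above can be sketched in a few lines: integrate the energy flowing through the port and, whenever the observed energy goes negative, add just enough variable damping to dissipate the excess. The sign convention (positive f*v meaning energy absorbed by the port) and the numerical guard below are simplifying assumptions.

```python
def po_pc_step(E, f, v, dt):
    """One sample of a time-domain Passivity Observer/Controller
    sketch. E is the running energy observed at the port, f the
    commanded force, v the measured velocity. Convention: f * v > 0
    means energy is being absorbed by the port."""
    E += f * v * dt                    # Passivity Observer: energy flow
    f_out = f
    if E < 0.0 and abs(v) > 1e-9:      # port has generated net energy
        alpha = -E / (dt * v * v)      # variable damping (Passivity Controller)
        f_out = f + alpha * v          # dissipate exactly the excess
        E += alpha * v * v * dt        # observer credits the dissipation
    return f_out, E
```

Because damping is applied only while the observer is negative, distortion is introduced only when needed, which is the advantage over a fixed-parameter virtual coupling.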
The work described above assumes that the human operator is passive but poses no other constraints on his or her behavior. This can lead to small z-width, significant haptic distortion, or both. Tsai and Colgate (1995) tried to overcome this by modeling the human as a more general discrete-time linear time-invariant system. They derive conditions for stability that directly exclude the possibility of periodic oscillations for a virtual environment consisting of a virtual wall. Gillespie and Cutkosky (1996) address the same issue by modeling the human as a second-order continuous-time system. They conclude that to make the approach practical, online estimation of the human mechanical model is needed, because the model's parameters change from operator to operator and, even with the same operator, from posture to posture. The use of multiple-model supervisory control (Anderson et al., 1999; Hespanha et al., 2001; Morse, 1996) to estimate the operator's dynamics online promises to bring significant advantages to the field, because it is characterized by very fast adaptation to sudden changes in the process or the control objectives. Such changes are expected in haptics due to the unpredictability of the human-in-the-loop. In fact, it is shown in Hajian and Howe (1995) that changes in the parameters of human limb dynamics become noticeable over periods of time longer than 20 ms.

Although most of the work referenced above focuses on simple prototypical virtual environments, a few researchers have developed systems capable of handling very complex ones. Ruspini and Khatib (Chapter 2, this volume) are among these, having developed a general framework for the dynamic simulation and haptic exploration of complex interaction between generalized articulated virtual mechanical systems. Their simulation tool permits direct "hands-on" interaction with the virtual environment through the haptic interface.

Capture, Storage, and Retrieval of Haptic Data

One of the newest areas in haptics is the search for optimal methods for the description, storage, and retrieval of moving-sensor data of the type generated by haptic devices. With such techniques we can capture the hand or finger movement of an expert performing a skilled movement and "play it back," so that a novice can retrace the expert's path, with realistic touch sensation; further, we can calculate the correlation between the two exploratory paths as time series and determine whether they are significantly different, which would indicate a need for further training. The INSITE system (Faisal, Shahabi, McLaughlin, & Betz, 1999) is capable of providing instantaneous comparison of two users with respect to duration, speed, acceleration, and thumb and finger forces. Techniques for recording and playing back raw haptic data (Shahabi, Ghoting, Kaghazian, McLaughlin, & Shanbhag, forthcoming; Shahabi, Kolahdouzan, Barish, Zimmermann, Yao, & Fu, 2001) have been developed for the PHANToM and CyberGrasp. Captured data include movement in three dimensions, orientation, and force (contact between the probe and objects in the virtual environment). Shahabi and colleagues address haptic data at a higher level of abstraction in Chapter 14, in which they describe their efforts to understand the semantics of hand actions (see also Eisenstein, Ghandeharizadeh, Huang, Shahabi, Shanbhag, & Zimmermann, 2001).
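A minimal sketch of such capture, playback, and comparison follows, assuming timestamped position/force samples, a hypothetical send_to_device callback, and JSON storage; none of these are the cited systems' actual formats.

```python
import json
import time
import numpy as np

class HapticRecorder:
    """Capture timestamped haptic samples, persist them, and replay
    them at the recorded timing through a device callback."""
    def __init__(self):
        self.samples = []                       # dicts: t, pos, force

    def record(self, t, pos, force):
        self.samples.append({"t": t, "pos": list(pos), "force": list(force)})

    def save(self, path):
        with open(path, "w") as fp:
            json.dump(self.samples, fp)

    def play(self, send_to_device):
        t0, start = self.samples[0]["t"], time.monotonic()
        for s in self.samples:
            while time.monotonic() - start < s["t"] - t0:
                time.sleep(1e-4)                # wait for the sample's timestamp
            send_to_device(s["pos"], s["force"])

def path_correlation(a, b):
    """Correlation between two equal-length exploratory paths (e.g.,
    expert vs. novice), averaged over the x, y, z time series."""
    a, b = np.asarray(a), np.asarray(b)
    return np.mean([np.corrcoef(a[:, i], b[:, i])[0, 1] for i in range(3)])
```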
Haptic Data Compression

Haptic data compression, and evaluation of the perceptual impact of lossy compression of haptic data, are further examples of uncharted waters in haptics research (see Ortega, this volume, Chapter 6). Data about the user's interaction with objects in the virtual environment must be continually refreshed if the objects are manipulated or deformed by user input. If the data are too bulky relative to available bandwidth and computational resources, there will be improper registration between what the user sees on screen and what he or she "feels." Ortega's work begins by analyzing data obtained experimentally from the PHANToM and the CyberGrasp, exploring compression techniques, starting with simple approaches (similar to those used in speech coding) and continuing with methods that are more specific to haptic data. Two lossy methods may be employed to compress the data: one approach is to use a lower sampling rate; the other is to transmit only significant changes during movement. For example, for certain grasp motions not all of the fingers are involved. Further, during the approaching and departing phases, tracker data may be more useful than the CyberGrasp data. Vector coding may prove more appropriate for encoding the time evolution of a multifeatured set of data such as that provided by the CyberGrasp. For cases where the user employs the haptic device to manipulate a static object, compression techniques that rely on knowledge of the object may be more useful than the coding of an arbitrary trajectory in three-dimensional space.
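The two lossy schemes just described can be sketched directly: subsampling lowers the rate uniformly, while delta-thresholding transmits a sample only when it differs noticeably from the last transmitted one. The subsampling factor and threshold are illustrative; a real codec would be tuned against perceptual studies.

```python
import numpy as np

def subsample(samples, keep_every=4):
    """Lossy scheme 1: lower the effective sampling rate."""
    return np.asarray(samples)[::keep_every]

def delta_threshold(samples, threshold=1e-3):
    """Lossy scheme 2: keep a sample only if it has moved at least
    `threshold` from the last kept sample (small changes dropped)."""
    samples = np.asarray(samples)
    kept = [samples[0]]
    for s in samples[1:]:
        if np.linalg.norm(s - kept[-1]) >= threshold:
            kept.append(s)
    return np.array(kept)
```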
Haptic Collaboration

The many potential applications in industry, the military, and entertainment for force feedback in multiuser environments, where two or more users orient to and manipulate the same objects, have led to work such as that of Buttolo and his colleagues (Buttolo, Oboe, & Hannaford, 1997; Buttolo, Hewitt, Oboe, & Hannaford, 1997; Buttolo, Oboe, Hannaford, & McNally, 1996), who as noted above remind us that adding haptics to multiuser environments creates additional demand for frequent position sampling for collision detection and fast update. It is also reasonable to assume that in multiuser environments there may be a heterogeneous assortment of haptic devices with which users interact with the system. One of our primary concerns thus would be to ensure proper registration of the disparate devices with the 3D environment and with each other. Of potential use in this regard is work by Iwata, Yano, and Hashimoto (1997) on LHX (Library for Haptics), modular software that can support a variety of different haptic displays. LHX allows a variety of mechanical configurations, supports easy construction of haptic user interfaces, and allows networked applications.

HUMAN FACTORS

Work reported by Lederman, Thorne, and Jones (1986) indicates that the dominance of one system over the other in texture discrimination tasks is a function of the dimension of judgment being employed. In making judgments of density, the visual system tends to dominate, while the haptic system is most salient when subjects are asked to discriminate textures on the basis of roughness.

Lederman, Klatzky, Hamilton, and Ramsay (1999) studied the psychophysical effects of haptic exploration speed and mode of touch on the perceived roughness of metal objects when subjects used a rigid probe, not unlike the PHANToM stylus (see also Klatzky and Lederman, Chapter 10, this volume). In earlier work, Klatzky and Lederman found that subjects wielding rigid stick-like probes were less effective at discriminating surface textures than with the bare finger. In a finding that points to the importance of tactile arrays to haptic perception, the authors noted that when a subject is actively exploring an object with the bare finger, speed appears to have very little impact on roughness judgments, because subjects may have used kinesthetic feedback about their hand movements; however, when a rigid probe is used, people should become more reliant on vibrotactile feedback, since the degree of displacement of fingertip skin is no longer commensurate with the geometry of the surface texture.

Machine Haptics

Psychophysical studies of machine haptics are now beginning to accumulate. Experiments performed by von der Heyde and Hager-Ross (1998) have produced classic perceptual errors in the haptic domain: for instance, subjects who haptically sorted cylinders by weight made systematic errors consistent with the classical size-weight illusion. Experiments by Jansson, Faenger, Konig, and Billberger (1998) on shape sensing with blindfolded sighted observers were described above. Ernst and Banks (2001) reported that although vision usually "captures" haptics, in certain circumstances information communicated haptically (via two PHANToMs) assumes greater importance. They found that when noise is added to visual data, the haptic sense is invoked to a greater degree. Ernst and Banks concluded that the extent of capture by a particular sense modality is a function of the statistical reliability of the corresponding sensory input.

Kirkpatrick and Douglas (1999) argue that if the haptic interface does not support certain exploratory procedures, such as enclosing an object in the case of the single-point PHANToM tip, then the quick grasping of shape that enclosure provides will have to be accomplished by techniques that the interface does support, such as tracing the contour of the virtual object. Obviously, this is slower than enclosing. The extent to which the haptic interface supports or fails to support exploratory processes contributes to its usability. Kirkpatrick and Douglas evaluated the PHANToM interface's support for the task of shape determination, comparing and contrasting its usability in three modes (vision only, haptics only, and haptics and vision combined) in a non-stereoscopic display. When broad exploration is required for quick object recognition, haptics alone is not likely to be very useful when the user is limited to a single finger whose explorations must be recalled and integrated to form an overall impression of shape. Vision alone may fail to provide adequate depth cues (e.g., the curved shape of a teapot). Kirkpatrick and Douglas assert that the effects of haptics and vision are not additive and that their combination would provide a result exceeding what an additive model might predict.

Kirkpatrick and Douglas (1999) also report that among the factors that influence the speed of haptic recognition of objects are the number of different object attributes that can be perceived simultaneously and the number of fingers that are employed. This work suggests that object exploration with devices like the PHANToM, which offer kinesthetic but not cutaneous feedback, will yield suboptimal results with respect both to exploration speed and to accuracy when compared to the bare hand. It further suggests that speed and accuracy may improve with additional finger receptors.

With the advent of handheld devices and the possibility of incorporating haptics into them, it is becoming increasingly important to determine just how small a haptic effect can be perceived. Dosher, Lee, and Hannaford (Chapter 12, this volume) report that users can detect haptic effects whose maximum force is about half the measured Coulomb friction level of the device and about one-third the measured static friction level. They note that their results can be expected to vary by device and that it remains to be seen whether a measurable effect is necessarily one that can help users accomplish their tasks.

Srinivasan and Basdogan (1997) note the importance of other modalities in haptic perception (e.g., sounds of collision with objects). They report that with respect to object deformation, visual sensing dominates over proprioception and leads to severe misjudgments of object stiffness if the graphic display is intentionally skewed (Srinivasan, Beauregard, & Brock, 1996). Sound appears to be a less important perceptual mediator than vision. In an unpublished study by Hou and Srinivasan, reported in Srinivasan and Basdogan (1997), subjects navigating through a maze were found to prefer large visual-haptic ratios and small haptic workspaces. Best results were achieved in the dual-modality condition, followed by haptics only and then vision only. It is apparent that the relative contribution of visual and haptic perception will vary as a function of task, but it is also apparent, as Srinivasan and Basdogan conclude, that the inadequacies of force-feedback display (e.g., limitations of stiffness) can be overcome with appropriate use of other modalities. In Chapter 11, Jeong and Jacobson consider how effective haptic and auditory displays are when combined, whether they interfere with one another, and how a user's previous experience with a modality affects the success of the integration and the efficacy of the multimodal display.

REFERENCES

Adams, R. J., & Hannaford, B. (1999). Stable haptic interaction with virtual environments. IEEE Transactions on Robotics and Automation, 15(3), 465–474.

Adelstein, B. D., & Ellis, S. R. (2000). Human and system performance in haptic virtual environments. Retrieved from vision.arc.nasa.gov:80/IHH/highlights/H%26S%20performance.html.

Aloliwi, B., & Khalil, H. K. (1998). Robust control of nonlinear systems with unmodeled dynamics. Proceedings of the 37th IEEE Conference on Decision and Control (pp. 2872–2873). Piscataway, NJ: IEEE Customer Service.
Anderson, B. D. O., Brinsmead, T. S., de Bruyne, F., Hespanha, J. P., Liberzon, D., & Morse, A. S. (2000). Multiple model adaptive control. I. Finite controller coverings. International Journal of Robust and Nonlinear Control, 10(11–12), 909–929.

Anderson, T. (1996). A virtual universe utilizing haptic display. In J. K. Salisbury & M. A. Srinivasan (Eds.), Proceedings of the First PHANToM User's Group Workshop. AI Technical Report No. 1596 and RLE Technical Report No. 612. Cambridge, MA: MIT.

Aviles, W., & Ranta, J. (1999). A brief presentation on the VRDTS—Virtual Reality Dental Training System. In J. K. Salisbury & M. A. Srinivasan (Eds.), Proceedings of the Fourth PHANToM User's Group Workshop. AI Lab Technical Report No. 1675 and RLE Technical Report No. 633. Cambridge, MA: MIT.

Balakrishnan, J. D., Klatzky, R. L., & Loomis, J. M. (1989). Length distortion of temporally extended visual displays: Similarity to haptic spatial perception. Perception & Psychophysics, 46(4), 387–394.

Balaniuk, R. (1999). Using fast local modeling to buffer haptic data. In J. K. Salisbury & M. A. Srinivasan (Eds.), Proceedings of the Fourth PHANToM User's Group Workshop. AI Lab Technical Report No. 1675 and RLE Technical Report No. 633. Cambridge, MA: MIT.

Balaniuk, R., & Costa, I. F. (2000). An approach for physically based soft tissue simulation suitable for haptic interaction. Preprints of the Fifth Annual PHANToM Users Group Workshop. Aspen, CO: Given Institute.

Ballesteros, S., Manga, D., & Reales, J. (1997). Haptic discrimination of bilateral symmetry in 2-dimensional and 3-dimensional unfamiliar displays. Perception & Psychophysics, 59(1), 37–50.

Basdogan, C., Ho, C-H., Slater, M., & Srinivasan, M. A. (1998). The role of haptic communication in shared virtual environments. In J. K. Salisbury & M. A. Srinivasan (Eds.), Proceedings of the Third PHANToM Users Group Workshop, PUG98. AI Technical Report No. 1643 and RLE Technical Report No. 624. Cambridge, MA: MIT.

Basdogan, C., Ho, C-H., & Srinivasan, M. A. (1997). A ray-based haptic rendering technique for displaying shape and texture of 3-D objects in virtual environments. Proceedings of the ASME Dynamic Systems and Control Division, Dallas, TX.

Berger, C., & Hatwell, Y. (1993). Dimensional and overall similarity classifications in haptics: A developmental study. Cognitive Development, 8(4), 495–516.

Berger, C., & Hatwell, Y. (1995). Development of dimensional vs. global processing in haptics: The perceptual and decisional determinants of classification skills. British Journal of Developmental Psychology, 13(2), 143–162.

Garland, M., & Heckbert, P. S. (1997). Surface simplification using quadric error metrics. Paper presented at the annual meeting of SIGGRAPH.

Giess, C., Evers, H., & Meinzer, H-P. (1998). Haptic volume rendering in different scenarios of surgical planning. In J. K. Salisbury & M. A. Srinivasan (Eds.), Proceedings of the Third PHANToM User's Group, PUG98. AI Technical Report No. 1643 and RLE Technical Report No. 624. Cambridge, MA: MIT. Retrieved from www.sensable.com/community/98papers/3%20giess-pug98.fm.pdf.

Gillespie, B., & Cutkosky, M. (1996). Stable user-specific rendering of the virtual wall. In Proceedings of the ASME International Mechanical Engineering Conference and Exposition, Vol. 58, 397–406.

Gomez, D. H. (1998). A dextrous hand master with force feedback for virtual reality. Unpublished Ph.D. dissertation, Rutgers, The State University of New Jersey.

Grabowski, N. (1999). Structurally-derived sounds in a haptic rendering system. In J. K. Salisbury & M. A. Srinivasan (Eds.), Proceedings of the Fourth PHANToM User's Group Workshop. AI Lab Technical Report No. 1675 and RLE Technical Report No. 633. Cambridge, MA: MIT.

Grabowski, N. A., & Barner, K. E. (1998). Data visualisation methods for the blind using force feedback and sonification. Paper presented at the SPIE Conference on Telemanipulator and Telepresence Technologies V, Boston, MA.

Green, D. F., & Salisbury, J. K. (1997). Texture sensing and simulation using the PHANToM: Towards remote sensing of soil properties. In J. K. Salisbury & M. A. Srinivasan (Eds.), Proceedings of the Second PHANToM User's Group Workshop. AI Technical Report No. 1617 and RLE Technical Report No. 618. Cambridge, MA: MIT.

Gruener, G. (1998). Telementoring using haptic communication. Unpublished Ph.D. dissertation, University of Colorado.

Gutierrez, T., Barbero, J. L., Aizpitarte, M., Carrillo, A. R., & Eguidazu, A. (1998). Assembly simulation through virtual prototypes. In J. K. Salisbury & M. A. Srinivasan (Eds.), Proceedings of the Third PHANToM User's Group, PUG98. AI Technical Report No. 1643 and RLE Technical Report No. 624. Cambridge, MA: MIT. Retrieved from www.sensable.com/community/98papers/12%20gutpug98.pdf.

Hager, G. D., Chang, W-C., & Morse, A. S. (1995). Robot hand-eye coordination based on stereo vision. IEEE Control Systems Magazine, 15, 30–39.

Hajian, A. Z., & Howe, R. D. (1995). Identification of the mechanical impedance at the human finger tip. ASME Journal of Biomechanical Engineering, 119(1), 109–114.

Hannaford, B., Ryu, J-H., & Kim, Y. (2001). Stable control of haptics. In M. L. McLaughlin, J. P. Hespanha, & G. S. Sukhatme (Eds.), Touch in virtual environments. IMSC Series in Multimedia. New York: Prentice Hall.

Hatwell, Y. (1995). Children's memory for location and object properties in vision and haptics: Automatic or attentional processing? Cahiers de Psychologie Cognitive/Current Psychology of Cognition, 14(1), 47–71.

Held, M., Klosowski, J. T., & Mitchell, J. S. B. (1995). Evaluation of collision detection methods for virtual reality fly-throughs. In C. Gold & J. Robert (Eds.), Proceedings of the 7th Canadian Conference on Computational Geometry, Université Laval.

Heller, M. A. (1982). Visual and tactual texture perception: Intersensory cooperation. Perception & Psychophysics, 31, 339–344.

Henle, F., & Donald, B. (1999). Haptics for animation motion control. In J. K. Salisbury & M. A. Srinivasan (Eds.), Proceedings of the Fourth PHANToM User's Group Workshop. AI Lab Technical Report No. 1675 and RLE Technical Report No. 633. Cambridge, MA: MIT.

Hespanha, J. P., Liberzon, D., Morse, A. S., Anderson, B. D. O., Brinsmead, T. S., & de Bruyne, F. (2001). Multiple model adaptive control, part 2: Switching. International Journal of Robust and Nonlinear Control, Special Issue on Hybrid Systems in Control, 11(5), 479–496.

Hespanha, J. P., McLaughlin, M. L., & Sukhatme, G. S. (2001). Haptic collaboration over the Internet. In M. L. McLaughlin, J. P. Hespanha, & G. S. Sukhatme (Eds.), Touch in virtual environments. IMSC Series in Multimedia. New York: Prentice Hall.

Hespanha, J., Sukhatme, G., McLaughlin, M., Akbarian, M., Garg, R., & Zhu, W. (2000). Heterogeneous haptic collaboration over the Internet. Preprints of the Fifth PHANToM User's Group Workshop, Aspen, CO.

Ho, C., Basdogan, C., Slater, M., Durlach, N., & Srinivasan, M. A. (1998). An experiment on the influence of haptic communication on the sense of being together. Paper presented at the British Telecom Workshop on Presence in Shared Virtual Environments, Ipswich. Retrieved from www.cs.ucl.ac.uk/staff/m.slater/BTWorkshop/touchexp.html.

Hogan, N. (1989). Controlling impedance at the man/machine interface. In Proceedings of the IEEE International Conference on Robotics and Automation, Vol. 3, 1626–1631. Scottsdale, AZ.

Hollins, M., Faldowski, R., Rao, S., & Young, F. (1993). Perceptual dimensions of tactile surface texture: A multidimensional scaling analysis. Perception & Psychophysics, 54, 697–705.

Howe, R. D. (1994). Tactile sensing and control of robotic manipulation. Journal of Advanced Robotics, 8(3), 245–261.

Huang, C., Qu, H., & Kaufman, A. E. (1998). Volume rendering with haptic interaction. In J. K. Salisbury & M. A. Srinivasan (Eds.), Proceedings of the Third PHANToM Users Group, PUG98. AI Technical Report No. 1643 and RLE Technical Report No. 624. Cambridge, MA: MIT. Retrieved from www.sensable.com/community/98papers/2%20cwhuang-pug98.pdf.

Hughes, B., & Jansson, G. (1994). Texture perception via active touch. Special issue: Perception-movement, information and dynamics. Human Movement Science, 13(3–4), 301–333.

Hutchins, M., & Gunn, C. (1999). A haptics constraints class library. In J. K. Salisbury & M. A. Srinivasan (Eds.), Proceedings of the Fourth PHANToM User's Group Workshop. AI Lab Technical Report No. 1675 and RLE Technical Report No. 633. Cambridge, MA: MIT.

Ikei, Y., Wakamatsu, K., & Fukuda, S. (1997). Texture presentation by vibratory tactile display. Paper presented at the Virtual Reality Annual International Symposium, Albuquerque, NM.

Iwata, H., Yano, H., & Hashimoto, W. (1997). LHX: An integrated software tool for haptic interface. Computers and Graphics, 21(4), 413–420.

Jansson, G. (1998). Can a haptic force feedback display provide visually impaired people with useful information about texture roughness and 3D form of virtual objects? In P. Sharkey, D. Rose, & J.-I. Lindstrom (Eds.), Proceedings of the 2nd European Conference on Disability, Virtual Reality, and Associated Technologies (pp. 105–111). Reading, UK.

Jansson, G. (2001). Perceiving complex virtual scenes with a PHANToM without visual guidance. In M. L. McLaughlin, J. P. Hespanha, & G. S. Sukhatme (Eds.), Touch in virtual environments. IMSC Series in Multimedia. New York: Prentice Hall.

Jansson, G. (2001, June 1). Personal communication.

Jansson, G., & Billberger, K. (1999). The PHANToM used without visual guidance. In Proceedings of the First PHANToM Users Research Symposium (PURS'99).

Jansson, G., Faenger, J., Konig, H., & Billberger, K. (1998). Visually impaired persons' use of the PHANToM for information about texture and 3D form of virtual objects. In J. K. Salisbury & M. A. Srinivasan (Eds.), Proceedings of the Third PHANToM User's Group, PUG98. AI Technical Report No. 1643 and RLE Technical Report No. 624. Cambridge, MA: MIT.

Jeong, W., & Jacobson, D. (2001). Haptic and auditory display for multimodal information systems. In M. L. McLaughlin, J. P. Hespanha, & G. S. Sukhatme (Eds.), Touch in virtual environments. IMSC Series in Multimedia. New York: Prentice Hall.

Johansson, A. J., & Linde, J. (1998). Using simple force feedback mechanisms to visualize structures by haptics. Paper presented at the Second Swedish Symposium on Multimodal Communication.

Johansson, A. J., & Linde, J. (1999). Using simple force feedback mechanisms as haptic visualization tools. Paper presented at the 16th IEEE Instrumentation and Measurement Technology Conference.

Kirkpatrick, A., & Douglas, S. (1999). Evaluating haptic interfaces in terms of interaction techniques. In J. K. Salisbury & M. A. Srinivasan (Eds.), Proceedings of the Fourth PHANToM Users Group Workshop. AI Lab Technical Report No. 1675 and RLE Technical Report No. 633. Cambridge, MA: MIT.

Klatzky, R., & Lederman, S. (2001). Perceiving texture through a probe. In M. L. McLaughlin, J. P. Hespanha, & G. S. Sukhatme (Eds.), Touch in virtual environments. IMSC Series in Multimedia. New York: Prentice Hall.

Pai, D. K., & Reissell, L. M. (1997). Haptic interaction with multiresolution image curves. Computers and Graphics, 21(4), 405–411.

Ramloll, R., Yu, W., Brewster, S., Riedel, B., Burton, M., & Dimigen, G. (2000). Constructing sonified haptic line graphs for the blind student: First steps. Paper presented at ASSETS 2000. Retrieved from www.dcs.gla.ac.uk/~rayu/Publications/Assets2000.pdf.

Rassmus-Gröhn, K., & Sjöström, C. (1998). Using a force feedback device to present graphical information to people with visual disabilities. Paper presented at the Second Swedish Symposium on Multimodal Communication, Lund, Sweden. Retrieved from www.certec.lth.se/doc/usinga/.

Roumeliotis, S. I., Sukhatme, G. S., & Bekey, G. A. (1999). Smoother based 3-D attitude estimation for mobile robot localization. In Proceedings of the 1999 IEEE International Conference on Robotics and Automation, Detroit, MI.

Ruspini, D., & Khatib, O. (2001). Simulation with contact for haptic interaction. In M. L. McLaughlin, J. P. Hespanha, & G. S. Sukhatme (Eds.), Touch in virtual environments. IMSC Series in Multimedia. New York: Prentice Hall.

Salisbury, K., Brock, D., Massie, T., Swarup, N., & Zilles, C. (1995). Haptic rendering: Programming touch interaction with virtual objects. In Proceedings of the 1995 Symposium on Interactive 3D Graphics (pp. 123–130).

Shahabi, C., Ghoting, A., Kaghazian, L., McLaughlin, M., & Shanbhag, G. (forthcoming). Analysis of haptic data for sign language recognition. Proceedings of the First International Conference on Universal Access in Human-Computer Interaction (UAHCI), New Orleans, LA. Hillsdale, NJ: Lawrence Erlbaum.

Shahabi, C., Ghoting, A., Kaghazian, L., McLaughlin, M., & Shanbhag, G. (2001). Recognition of sign language utilizing two alternative representations of haptic data. In M. L. McLaughlin, J. P. Hespanha, & G. S. Sukhatme (Eds.), Touch in virtual environments. IMSC Series in Multimedia. New York: Prentice Hall.

Shahabi, C., Kolahdouzan, M., Barish, G., Zimmermann, R., Yao, D., & Fu, L. (2001, June). Alternative techniques for the efficient acquisition of haptic data. Paper presented at the meeting of ACM SIGMETRICS/Performance 2001, Cambridge, MA.

Shulman, S. (1998). Digital antiquities. Computer Graphics World, 21(11), 34–38.

Sjöström, C. (1997). The Phantasticon: The PHANToM for disabled children. Center of Rehabilitation Engineering Research, Lund University. Retrieved from www.certec.lth.se/.

Snibbe, S., Anderson, S., & Verplank, B. (1998). Springs and constraints for 3D drawing. In J. K. Salisbury & M. A. Srinivasan (Eds.), Proceedings of the Third PHANToM Users Group Workshop, PUG98. AI Technical Report No. 1643 and RLE Technical Report No. 624. Cambridge, MA: MIT. Retrieved from www.sensable.com/community/98papers/13%20springs98.pdf.

Srinivasan, M. (2001, February). Haptic research in the MIT Touch Lab. Paper presented at the Touch in Virtual Environments Conference, University of Southern California.

Srinivasan, M., & Basdogan, C. (1997). Haptics in virtual environments: Taxonomy, research status, and challenges. Computers and Graphics, 21(4), 393–404.

Srinivasan, M. A., Beauregard, G. L., & Brock, D. L. (1996). The impact of visual information on haptic perception of stiffness in virtual environments. Proceedings of the ASME Dynamic Systems and Control Division, Atlanta, GA.

Sukhatme, G., Hespanha, J., McLaughlin, M., Shahabi, C., & Ortega, A. (2000). Touch in immersive environments. Proceedings of the EVA 2000 Conference on Electronic Imaging and the Visual Arts, Edinburgh, Scotland.

Transdimension (2000). Motivations for military applications of tactile interface. Retrieved from www.transdimension.com/tactile3.htm.

Tsai, J. C., & Colgate, J. E. (1995). Stability of discrete time systems with unilateral nonlinearities. Proceedings of the ASME International Mechanical Engineering Conference and Exposition.

Tyler, M. (2001, February 23). Personal communication.

Veldkamp, P., Turner, G., Gunn, C., & Stevenson, D. (1998). Incorporating haptics into mining industry applications. Proceedings of the Third PHANToM User's Group, PUG98. AI Technical Report No. 1643 and RLE Technical Report No. 624. Cambridge, MA: MIT.

Way, T. P., & Barner, K. E. (1997). Automatic visual to tactile translation, Part I: Human factors, access methods and image manipulation. IEEE Transactions on Rehabilitation Engineering, 5, 81–94.

Wilson, J. P., Kline-Schoder, R., Kenton, M. A., & Hogan, N. (1999). Algorithms for network-based force feedback. In J. K. Salisbury & M. A. Srinivasan (Eds.), Proceedings of the Fourth PHANToM User's Group Workshop. AI Lab Technical Report No. 1675 and RLE Technical Report No. 633. Cambridge, MA: MIT.

Wright-Patterson Air Force Base (1997). Synthesized Immersion Research Environment (SIRE) (PAM #97-091). Retrieved from www.wpafb.af.mil/ascpa/factshts/scitech/sire97.htm.

Yu, W., Ramloll, R., & Brewster, S. (2000). Haptic graphs for blind computer users. Paper presented at the Haptic Human-Computer Interaction Workshop, University of Glasgow.

Zilles, C. B., & Salisbury, J. K. (1995). A constraint-based God-object method for haptic display. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 146–151). Pittsburgh, PA.