20 results found
Electrotactile displays are a promising technology that combines the simplicity of implementation using only electronic circuits with the flexibility to deliver tactile sensations across many body sites. In recent years, research on and applications of these displays have advanced rapidly. This article provides an overview of studies published in IEEE Transactions on Haptics, covering diverse aspects such as application domains of electrotactile stimulation, techniques for stabilizing percepts, methods for efficient information rendering, and the use of electrotactile displays as tools for investigating human tactile perception. Through this review, the engineering and scientific potential of electrotactile interfaces is highlighted, along with prospects for realizing future tactile displays that are low-cost, high-density, and large-area.
Electrical stimulation that evokes virtual vibration sensations is widely used in fields such as virtual reality and medical rehabilitation. However, its parameter optimization still relies on subjective psychological evaluation, which lacks objective quantitative criteria. The purpose of this paper is to investigate the neural response relationship between low-frequency vibration stimulation and electrical stimulation using EEG, providing quantitative theoretical support for optimizing electrical stimulation parameters. We use a custom-built electrical stimulation system (incorporating a flexible electrode array) to conduct tactile EEG experiments with 42 participants under both electrical stimulation and vibration stimulation conditions, and extract beta-band power spectral density (PSD) and regional permutation entropy (PE) features for the two types of stimulation. Results demonstrate that, under vibration stimulation, PSD at the C6 and T8 channels is positively correlated with frequency, while PE in the right central region and the right temporal-frontal-parietal region is positively correlated with amplitude. During electrical stimulation, the corresponding neural features follow analogous patterns. For both modalities, the PSD-frequency correlations (Pearson's r > 0.84) and PE-amplitude correlations (r > 0.68) reach statistical significance. Finally, we conducted classification experiments using a k-nearest neighbor (kNN) classifier, with EEG features from vibratory stimulation as the training set and EEG features from electrical stimulation as the test set. The accuracy reached 66.7% for the frequency discrimination task, while the average accuracy for the amplitude discrimination task was 67.9%. These findings demonstrate a significant similarity in the neural signatures elicited by low-frequency vibration stimulation (1-15 Hz) and electrical stimulation. Our study provides new insights for the quantitative refinement of electrical stimulation parameters.
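For readers less familiar with these features, the following is a minimal sketch (not the authors' code) of how a per-channel beta-band PSD and permutation entropy can be computed and then fed into the cross-modal kNN scheme described above, training on vibration-evoked epochs and testing on electrically evoked ones. The sampling rate, channel handling, and PE order/delay are illustrative assumptions.

```python
import numpy as np
from itertools import permutations
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier

FS = 1000  # assumed EEG sampling rate (Hz)

def beta_psd(x, fs=FS, band=(13, 30)):
    """Mean power spectral density in the beta band for one channel."""
    f, pxx = welch(x, fs=fs, nperseg=fs)
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal."""
    patterns = list(permutations(range(order)))
    counts = np.zeros(len(patterns))
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        counts[patterns.index(tuple(int(j) for j in np.argsort(window)))] += 1
    p = counts[counts > 0] / n
    return -np.sum(p * np.log(p)) / np.log(len(patterns))

def features(epoch):
    """epoch: (channels, samples) -> [beta PSD, PE] per channel, flattened."""
    return np.hstack([[beta_psd(ch), permutation_entropy(ch)] for ch in epoch])

def cross_modal_accuracy(vib_epochs, vib_labels, elec_epochs, elec_labels, k=5):
    """Train the kNN on vibration-evoked features, test on electrical ones.
    Epoch arrays have shape (trials, channels, samples); labels are the
    stimulus frequency (or amplitude) classes."""
    X_train = np.array([features(e) for e in vib_epochs])
    X_test = np.array([features(e) for e in elec_epochs])
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, vib_labels)
    return clf.score(X_test, elec_labels)
```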
The hanger reflex, in which skin shear deformation elicits a directional force sensation, provides a basis for wearable haptic guidance on the shoulders. In this study, we employ a wearable haptic device that uses four pneumatic airbags to apply controlled skin-shear deformation to both shoulders. Previous work on the hanger reflex has demonstrated that shear deformation of the skin can elicit directional force sensations, and such illusory forces are reliably perceived at the shoulder. However, the strength of these perceived forces and their potential influence on upper-body posture have not yet been quantitatively examined. To enable future applications in posture correction during manual or visually demanding tasks, the present study investigates both the perceived force magnitude and the postural effects elicited by shoulder skin-shear deformation. In Experiment 1, we quantified the magnitude of the illusory forces using a psychophysical comparison between forces elicited by skin-shear deformation and real traction forces. The device generated substantially larger perceived forces than typical illusion-based haptic methods, and the relationship between pressure and perceived force was well approximated by a linear model within the tested range. In Experiment 2, we evaluated how these deformation-elicited forces affect upper-body posture. We measured forward-backward trunk inclination under varying initial postures and stimulation directions. Consistent with the perceptual findings, the device produced consistent posture adjustments, particularly in the backward direction, with significant effects of initial posture and stimulus intensity on angular changes. Together, these findings demonstrate that the shoulder hanger-reflex device provides quantifiable force sensations while also inducing measurable posture modifications. The results offer essential information for individualized calibration and control-parameter design in wearable posture-support and rehabilitation systems, and they provide a foundation for future integration into ergonomic and assistive applications.
This paper investigates egocentric directional perception of azimuth and elevation in response to torso-based vibrotactile stimuli for both stimulus identification and direction association under a common experimental framework. We conducted four perceptual experiments that examined the two tasks for azimuth and elevation, using real vibrations and illusory stimuli generated by the funneling illusion. The results demonstrated that adding illusory stimuli effectively conveyed directional information with fewer tactors than using only real stimuli. Azimuth perception revealed a lateral bias, whereas elevation perception exhibited a downward bias on the dorsal torso, particularly in the upper back. However, both azimuth and elevation cues were generally consistent across vertical and horizontal torso locations. Additionally, we estimated regression models for both egocentric angles and showed that perceived directions could be estimated from actual stimulus positions. Correlation analysis revealed a weak relationship between azimuth and elevation perceptual errors, suggesting that these dimensions are processed with near-independence. Across all findings, azimuth cues proved to be more effective than elevation cues in conveying directional information. This study provides a comprehensive understanding of egocentric directions and offers practical insights for the design of torso-based vibrotactile displays.
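As a concrete illustration of how funneling-based illusory stimuli are typically rendered, the sketch below splits intensity between two tactors using a simple linear amplitude-ratio model; this model is an assumption for illustration and is not necessarily the rendering scheme used in the study.

```python
import numpy as np

def funneling_amplitudes(target, pos_a, pos_b, total_intensity=1.0):
    """Split intensity between two tactors so the phantom (funneled) stimulus
    appears at `target` on the line joining the tactors at pos_a and pos_b.
    Returns (amplitude_a, amplitude_b)."""
    alpha = np.clip((target - pos_a) / (pos_b - pos_a), 0.0, 1.0)
    return (1.0 - alpha) * total_intensity, alpha * total_intensity

# Example: a phantom 30% of the way from tactor A to tactor B.
a_amp, b_amp = funneling_amplitudes(0.3, 0.0, 1.0)
```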
Most commercial prostheses lack natural and intuitive sensory feedback, which is one of the reasons for their high rate of abandonment. Transcutaneous Electrical Nerve Stimulation (TENS) has been shown to be an effective approach for evoking sensations in limb amputees. This paper explores the impact of TENS parameters on sensation evocation and evaluates its performance based on behavioral responses and EEG data, involving three transradial amputees and seven able-bodied subjects. Experimental results show that the sensation thresholds were predominantly influenced by stimulus amplitude and pulse width, and that sensation intensity increased with either amplitude or width. Variation of stimulus frequency caused transitions between sensation types: stimulation at 10 Hz and 100 Hz produced stable vibration and pressure sensations, respectively, for all subjects. A stimulation encoding strategy was therefore proposed, in which a pressure sensation simulates grasp force and sensation intensity encodes force amplitude. The amputees achieved accuracies above 94.4% and 77.8% for sensation-type discrimination and intensity grading, respectively, with slightly longer response times than the able-bodied subjects. The pronounced cortical activation and clear ERP components demonstrated the reliability of TENS-based sensory feedback: the N1 component could distinguish different sensation types and intensities (p≤0.05), and the amputees showed slower discriminatory responses and weaker activation in sensorimotor cortices than the able-bodied subjects (p≤0.05). This study confirms the promise of TENS for restoring sensory feedback in limb amputees, providing support for closed-loop interaction in amputee-prosthesis systems and even bionic robots.
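The sketch below illustrates the general shape of such an encoding strategy: a fixed 100 Hz pulse train for a pressure-like sensation, with grasp force mapped onto stimulation amplitude. The threshold values, force range, and linear mapping are illustrative assumptions rather than the study's calibrated parameters.

```python
def encode_grasp_force(force_n, force_max=20.0,
                       amp_detect_ma=1.2, amp_max_ma=3.0):
    """Map grasp force (N) to a TENS amplitude (mA) at a fixed 100 Hz rate.
    amp_detect_ma / amp_max_ma are placeholder detection and comfort limits."""
    force = min(max(force_n, 0.0), force_max)
    amplitude = amp_detect_ma + (amp_max_ma - amp_detect_ma) * force / force_max
    return {"frequency_hz": 100, "amplitude_ma": round(amplitude, 2)}

# Example: a 10 N grasp maps to the midpoint of the usable amplitude range.
print(encode_grasp_force(10.0))
```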
Microneurography studies have shown that human mechanoreceptor (MR) activity is directionally sensitive to shear forces, enabling fine tactile perception and object manipulation. However, existing computational mechanotransduction models largely neglect this directional tuning, limiting their biological realism and effectiveness for tactile feedback systems such as prosthetic hands. This paper presents a Direction-Dependent Mechanotransduction Model (DDMM) that replicates the direction-specific encoding behavior observed in human tactile afferents. The model integrates multidirectional pressure and shear forces to modulate neural spiking according to the alignment between resultant shear vectors and neuron-specific attenuation profiles. Force inputs are first transformed into afferent-specific currents (SAI, RAI, RAII), which are then converted into spike trains using an Izhikevich neuron model. Simulated fingertip interactions produced directionally selective spiking frequencies ranging from 0 to 47.5 pulses per second, consistent with biological firing ranges. Directional tuning, quantified using the profile-resolved sensitivity index (PRSI), yielded values of 0.31-0.45 for selective and broad profiles, comparable with experimentally measured directional sensitivity indices (DSI; 0.23 ± 0.18) reported in the literature. Further experimental validation using triaxial force measurements from human fingertip press-push-lift actions confirmed the model's directional sensitivity, with alignment between neural attenuation profiles and shear force direction yielding a mean spiking-frequency increase of approximately 350% relative to misaligned conditions. These findings establish the DDMM as a biologically inspired and computationally efficient framework for encoding tactile force direction, with potential applications in neuroprosthetics, robotic manipulation, and somatosensory modeling.
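A minimal sketch of the two stages described above is given below: a direction-dependent gain converts pressure and shear into a drive current, and an Izhikevich neuron converts that current into spikes. The cosine attenuation profile, current scaling, and regular-spiking parameters are assumptions for illustration, not the published DDMM.

```python
import numpy as np

def direction_dependent_current(shear_xy, pressure, preferred_dir, base_gain=8.0):
    """Scale the drive current by how well the shear vector aligns with the
    afferent's preferred direction (a simple cosine attenuation profile)."""
    shear = np.asarray(shear_xy, dtype=float)
    mag = np.linalg.norm(shear)
    if mag == 0:
        return base_gain * pressure
    alignment = max(0.0, np.dot(shear / mag, preferred_dir))  # 0..1
    return base_gain * (pressure + mag * alignment)

def izhikevich_spike_rate(I, duration_ms=1000, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Constant-current Izhikevich regular-spiking neuron; returns spike count
    over a 1 s window (i.e., spikes per second)."""
    v, u, spikes = -65.0, b * -65.0, 0
    for _ in range(duration_ms):          # 1 ms steps, two 0.5 ms substeps for v
        for _ in range(2):
            v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + I)
        u += a * (b * v - u)
        if v >= 30.0:
            v, u, spikes = c, u + d, spikes + 1
    return spikes

# Aligned vs. misaligned shear of equal magnitude drives very different rates.
preferred = np.array([1.0, 0.0])
aligned = izhikevich_spike_rate(direction_dependent_current([0.5, 0.0], 0.3, preferred))
misaligned = izhikevich_spike_rate(direction_dependent_current([-0.5, 0.0], 0.3, preferred))
```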
Illusory pulling sensations, induced by asymmetric vibrations applied to the fingertips, have attracted attention as a means to investigate sensorimotor processing and develop haptic interfaces. Sensitivity to the illusory pulling sensation has been reported to decline in some older female participants, suggesting that factors related to aging and/or gender differences may be involved in this phenomenon. In this study, we aimed to clarify the contributions of somatosensory and cognitive functions to the illusory pulling sensation, focusing on aging and gender differences. A total of 60 older participants aged 63 to 80 years (30 males, 30 females) completed seven assessments, covering sensitivity to the illusory pulling sensation and a range of somatosensory and cognitive functions, from vibration detection thresholds to general cognitive ability assessed by the Mini-Mental State Examination (MMSE). Consistent with prior findings, older females exhibited significantly lower sensitivity to the illusion. Interestingly, although gender differences were observed in some of the assessment items, such as hand length and performance on the parallel-setting task, none of these factors mediated the gender effect on the illusion. While age itself did not have a direct effect on the illusion, an indirect effect was observed through general cognitive function as assessed by the MMSE. These findings suggest that the illusory pulling sensation tends to weaken not only with aging, but particularly when aging is accompanied by cognitive decline. Overall, gender and cognitive function may play key roles in individual differences in the illusory pulling sensation.
Multimodal haptic feedback that combines electrical muscle stimulation (EMS) and vibrotactile signals can create richer, more immersive experiences than those using a single modality. EMS delivers kinesthetic feedback by inducing muscle contractions, simulating force sensations that complement tactile stimuli from mechanical vibrations. However, presenting these stimuli concurrently can lead to perceptual interference, where one modality masks or alters the perception of the other. Temporal alignment between stimuli is also critical, as asynchrony can affect the perceived quality of haptic sensations. To investigate these phenomena, we conducted three user studies with a total of 40 participants (12, 12, and 16, respectively), focusing on mutual masking effects and temporal order perception between EMS and vibration. Our findings suggest that vibration can alleviate the tingling and discomfort commonly associated with EMS, effectively mitigating these unwanted sensations. Conversely, the presence of EMS increases the Just Noticeable Difference (JND) in vibration frequency discrimination, indicating a decrease in sensitivity to vibratory changes. Additionally, participants generally perceived the stimuli as simultaneous when EMS preceded vibration by 100 to 200 milliseconds. We discuss these findings and present four design guidelines for multimodal haptic rendering with EMS and vibrations in user applications.
As the market for virtual talents (e.g., VTubers) continues to expand, fan meetings have emerged as crucial events for strengthening the connection between talents and their fans. However, the limited interaction time per fan presents a challenge, making it difficult to create a highly satisfying experience through conversation alone. To enhance engagement, non-verbal interactions such as high-fives are considered an effective means for creating emotional impact and improving fan satisfaction. This paper presents a systematic evaluation of vibrohaptic feedback strategies for simulating a high-five interaction with a virtual avatar, targeting real-world applications such as virtual talent fan meetings. We conducted a user study comparing multiple vibrotactile stimuli, such as standard vibrotactile feedback, asymmetric vibration intended to induce a pulling illusion, and tendon vibration intended to induce a kinesthetic illusion, along with variations in stimulation placement. Results indicate that the mere presence of vibrotactile feedback is more crucial for enhancing the experience than the specific stimulation type employed. Furthermore, applying feedback to the palm, either solely or in combination with the wrist, proved most effective. These findings offer valuable insights for designing physical interaction with virtual talents, thereby enhancing social engagement and fan satisfaction in future fan meeting events.
Motion, conveyed through movements of a motion platform on which the audience is seated, is the most commonly employed four-dimensional (4D) effect. It enhances immersion and influences emotional responses, with its impact varying depending on design factors. This variation suggests the potential for optimizing audience emotions through motion design. However, previous studies have either overlooked motion design factors or focused on single motion types, limiting the generalizability of their findings. This study focused on 4D ride films, which provide a first-person ride experience. We examined the effects of motion presence and developed regression models to explain the relationship between motion design factors and emotional intensity. Models were constructed for representative emotions, such as "confused" and "urgent", using maximum amplitudes as independent variables. Based on these models, we proposed motion design guidelines for optimizing emotional intensity by adjusting the maximum amplitudes of pitch, roll, and heave. These findings will help 4D ride film producers elicit the intended emotional intensity in audiences.
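For illustration, the following sketch shows the kind of regression model described above, predicting rated emotional intensity from the maximum pitch, roll, and heave amplitudes of a motion segment; the data values are placeholders, not results from the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [max_pitch_deg, max_roll_deg, max_heave_mm]; y: rated intensity.
# Placeholder values for illustration only.
X = np.array([[2.0, 1.5, 10.0],
              [5.0, 3.0, 25.0],
              [8.0, 6.0, 40.0],
              [12.0, 8.0, 60.0]])
y = np.array([2.1, 3.4, 4.8, 6.0])

model = LinearRegression().fit(X, y)
# The coefficients indicate how strongly each amplitude contributes to the
# emotion, which is the basis for design guidelines of this kind.
predicted = model.predict([[6.0, 4.0, 30.0]])
```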
Affective touch has attracted significant interest due to its positive effects on child development and adult well-being. Studies have shown that light, slow strokes of a soft brush on hairy skin elicit affective sensations by activating C-tactile fibers. Mid-air haptics, which uses focused ultrasound to create tactile sensations in free space, offers a promising method for eliciting affective changes. However, it remains poorly understood to what extent mid-air haptics can produce percepts similar to a brush stroke on hairy skin. Here, we conducted a psychophysical study comparing the tactile perception of mid-air ultrasound stimulation with that of real materials (sandpaper, aluminum resin, urethane rubber, cloth, and a soft brush). Participants rated each stimulus, applied to the dorsum of the hand, along three affective touch dimensions (pleasantness, surprise, ticklishness) and three discriminative touch dimensions (smoothness, softness, warmness). Univariate analyses revealed that ultrasound stimuli differed from aluminum resin, sandpaper, and urethane rubber in one or more dimensions of discriminative touch. Ultrasound stimulation with 10 Hz amplitude modulation felt significantly more pleasant than sandpaper, while no consistent differences between ratings of ultrasound and real textures were observed for surprise and ticklishness. Multivariate analyses showed that ultrasound stimuli with amplitude modulation at 10 Hz and 100 Hz were significantly closer to real textures than unmodulated ultrasound. Questionnaires showed that over half of the participants identified the ultrasound stimulation as a brush or cotton. These findings suggest that mid-air haptics can evoke perceptual attributes overlapping with those of real textures, including soft brushes, on hairy skin. Moreover, amplitude modulation may enhance the perceptual resemblance to realistic textures.
Humans possess an innate ability to seamlessly coordinate movement across multiple limbs, whether driving a motor vehicle, playing a musical instrument, or performing other daily tasks. Here, supplemental sensory information, such as haptic feedback, can enhance this coordination in applications ranging from controlling teleoperated robots to prosthetic limbs and collaborative robotics. Yet, a critical gap remains in our understanding of how visual and haptic information are integrated within sensorimotor feedback systems, as well as the extent to which these sensory channels may serve as substitutes for one another. To address this gap, we conducted an experiment investigating how sensory feedback can be incorporated in a multi-limb coordination task. To determine the degree to which visual or haptic feedback dominates in multi-limb coordination, 25 participants performed a virtual cursor-to-target task using both upper limbs (via a joystick controller) and one lower limb (via a foot pedal controller). Throughout the task, we systematically manipulated visual and haptic feedback, using a vibrotactile haptic feedback algorithm that delivered task-relevant information to all three limbs. We assessed participants' task performance measures relating to trial success rates, completion times, ability to move their limbs in coordination, and overall movement efficiency. Additionally, participants completed a cognitive workload questionnaire to evaluate their perceived task difficulty level and cognitive demands. Our findings indicate that haptic feedback can effectively substitute for one degree of visual information (cursor movement along one axis). We found no significant difference between conditions where all visual cues were presented in the task and the condition where one aspect of visual feedback was replaced by haptic feedback. These results suggest that haptic feedback can, to an extent, serve as a viable alternative to visual feedback in multi-limb coordination tasks.
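One plausible form of such task-relevant vibrotactile feedback is sketched below: each limb's tactor vibrates with an amplitude proportional to the cursor's error along the axis that limb controls. This is an illustrative scheme, not the study's exact feedback algorithm.

```python
def error_to_vibration(cursor, target, deadband=0.02, gain=1.0):
    """cursor/target: dicts of axis -> position in normalized workspace units.
    Returns a per-axis vibration amplitude in [0, 1] for the limb driving
    that axis; a small deadband suppresses feedback near the target."""
    amplitudes = {}
    for axis in target:
        err = abs(target[axis] - cursor[axis])
        amplitudes[axis] = 0.0 if err < deadband else min(1.0, gain * err)
    return amplitudes

# Example: left joystick (x), right joystick (y), foot pedal (z).
amps = error_to_vibration({"x": 0.10, "y": 0.40, "z": 0.05},
                          {"x": 0.50, "y": 0.45, "z": 0.05})
```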
This article presents an evaluation of CrazyJoystick, a propeller-based handheld force-feedback device, for spatial navigation tasks in real-world and virtual reality environments. Building upon prior work validating the device's on-demand aerial deployment and directional cue discriminability, we investigated whether propeller-based kinesthetic torque cues can support effective wayfinding during continuous locomotion. We developed navigation-specific algorithms that generate egocentric directional cues using hierarchical decision logic optimized for dynamic heading corrections. Two user studies (N=12) compared CrazyJoystick with vibrotactile baselines. Study 1 evaluated multi-waypoint navigation through four path configurations in a real-world environment with visual occlusion; CrazyJoystick reduced completion time by 54.6% and path deviation by 25.8% relative to vibrotactile guidance. Study 2 assessed target localization in a VR treasure hunt with minimal lighting; CrazyJoystick enabled 43.2% faster completion and 79% higher walking speeds. Both studies revealed consistent workload reductions. These findings demonstrate that propeller-based kinesthetic feedback supports efficient navigation during sustained locomotion, establishing propeller-based torque directional cues as a viable alternative to vibrotactile encoding for dynamic wayfinding tasks.
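A minimal sketch of hierarchical egocentric cue logic of the kind described above follows: compute the bearing to the next waypoint relative to the user's heading and select a cue. The thresholds and cue names are illustrative assumptions, not the published algorithm.

```python
import math

def egocentric_cue(user_xy, user_heading_rad, waypoint_xy,
                   arrive_radius=0.5, turn_threshold_rad=math.radians(20)):
    """Hierarchical decision: arrival check first, then straight-ahead check,
    then a left/right torque cue based on the signed heading error."""
    dx, dy = waypoint_xy[0] - user_xy[0], waypoint_xy[1] - user_xy[1]
    if math.hypot(dx, dy) < arrive_radius:
        return "waypoint_reached"
    # Signed heading error in (-pi, pi]; positive means the target is to the left.
    error = (math.atan2(dy, dx) - user_heading_rad + math.pi) % (2 * math.pi) - math.pi
    if abs(error) < turn_threshold_rad:
        return "go_straight"
    return "torque_left" if error > 0 else "torque_right"

# Example: a target 45 degrees to the user's left yields a leftward torque cue.
cue = egocentric_cue((0.0, 0.0), 0.0, (1.0, 1.0))
```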
This paper proposes a design framework for intuitive interfaces in telerobotic operation, moving beyond outcome-based evaluation toward systematic interface design. A human-centered framework is introduced to guide interface strategy selection based on learning dynamics and to support implementation-level design decisions through analysis of human sensorimotor interaction. As a case study, the framework is instantiated through an embodiment-oriented index finger interface for controlling a distally bendable surgical instrument with unconventional kinematics. A telerobotic system was implemented, and the finger interface was compared with a familiarity-driven stylus interface in a dual-session human-subject study consisting of a training session and a retention session two weeks later. Results reveal a distinction between initial and later-stage interaction characteristics. While the stylus interface exhibited higher initial performance, the finger interface showed faster learning progression and achieved higher performance after learning stabilized. These findings suggest that embodiment-oriented interface design can support intuitive control by facilitating sensorimotor adaptation beyond immediate familiarity. Although evaluated in a surgical context, the proposed framework is applicable to the design of teleoperation interfaces for robotic systems with unconventional kinematics in broader domains.
Tactile perception varies between individuals and with the state of the grasp. To investigate how grasp type and grasp strength on a handheld device affect vibrotactile perception, perceived intensities were tested against various grasp strengths and stimulus levels using 40 Hz and 250 Hz vibrations under power-grasp and precision-grasp conditions. Grasp strength was measured with a pressure distribution sensor on a cylindrical device capable of presenting vibrations up to approximately 2.0 m/s$^{2}$. An analysis using a physical model supported the measured acceleration characteristics, indicating greater attenuation of low-frequency vibrations due to grasping. The psychophysical experiments indicated that perceived intensity decreases with grasp strength for 40 Hz vibrations under power grasp, whereas perceived intensity slightly increases with grasp strength for 250 Hz vibrations under precision grasp. In contrast, only small differences were observed for 40 Hz vibrations during precision grasp and 250 Hz vibrations during power grasp. These results highlight the need for compensatory measures for low-frequency vibrations, or the use of high-frequency vibrations, to achieve consistent vibrotactile feedback in handheld devices across different grasping conditions.
Arm weight support (AWS) from upper-limb exoskeletons aids training by reducing fatigue and increasing the movement workspace, especially in people with acquired brain injury (ABI). Simultaneously, task-specific training could be enabled via haptic rendering, simulating the interaction forces with tangible virtual objects. However, how AWS affects somatosensory perception, such as the perceived weight of haptically rendered virtual objects, is unclear. We therefore conducted a psychophysics experiment with 40 participants investigating the effect of partial AWS on virtual object weight perception during dynamic lifting. Participants performed the lifting task under two conditions: with and without AWS, where AWS compensated for 50% of their arm weight. An upper-limb exoskeleton equipped with a haptic hand module provided both AWS and haptic rendering of the virtual objects. The virtual objects and a representation of the participant's arm and hand were visualized in immersive virtual reality. Weight perception was assessed using just-noticeable differences (JNDs). To evaluate the experimental setup, participants' estimated elbow torques were analyzed. Additionally, subjective measures of task load, motivation, and user experience with the exoskeleton were collected to explore potential secondary factors influencing weight perception. We found that partial AWS effectively reduced elbow load during dynamic lifting compared to the no-AWS condition. The JND was not significantly different between conditions, suggesting that AWS does not hamper weight discrimination. However, a larger self-reported sense of control over one's movement, and reports that the robot followed one's movements more closely, were associated with better weight perception. Our findings suggest that incorporating haptic rendering into exoskeletons that provide partial AWS could be a viable way to deliver task-specific training.
Recent advancements in sensing, information processing, and actuation technologies have catalyzed significant research and commercial interest in human augmentation. Currently, human augmentation technologies are proficient at extracting information but lack diverse means for machine-to-human communication (M2HC). Hence, this paper explores using electrotactile stimulation of the human peripheral nervous system as an alternative channel for M2HC. Electrotactile stimulation can create distinct sensations, usually varying in perceived vibration frequency or intensity, and each distinct sensation can be used to communicate information to a trained user in the form of icons. To maximize the number of distinct sensations that can be created, this paper proposes a model for electrotactile waveform generation (MEWS) that overlays two frequencies on a 2 kHz biphasic carrier electrotactile waveform, creating new distinct sensations. To better understand MEWS, four double-blinded experiments were conducted with volunteers. Experiments I-III studied the capabilities and limitations of MEWS. Experiment IV created a list of reliably distinct waveforms using insights from Experiments I-III. The results showed that, using MEWS with only a single electrode, it is possible to create 13 reasonably distinct waveforms identified with an accuracy of 85% over a 500 ms stimulation window.
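The sketch below generates a waveform in the spirit of MEWS: a 2 kHz biphasic carrier whose envelope is shaped by two lower modulation frequencies. Because the abstract does not specify how the two frequencies are combined, the multiplicative envelope here is an illustrative assumption.

```python
import numpy as np

def mews_like_waveform(f_mod1, f_mod2, duration_s=0.5, fs=40_000, f_carrier=2_000):
    """Return (t, waveform): a 2 kHz biphasic square carrier with an envelope
    formed by overlaying two lower modulation frequencies (normalized amplitude)."""
    t = np.arange(0, duration_s, 1 / fs)
    carrier = np.sign(np.sin(2 * np.pi * f_carrier * t))       # biphasic carrier
    envelope = 0.5 * (1 + np.sin(2 * np.pi * f_mod1 * t)) \
             * 0.5 * (1 + np.sin(2 * np.pi * f_mod2 * t))      # two overlaid frequencies
    return t, envelope * carrier

# Example: a 500 ms stimulus (as in the experiments) with 20 Hz and 7 Hz overlays.
t, w = mews_like_waveform(20, 7)
```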
Over the past decade, there has been increasing attention to the field of musical haptics for the listener, which concerns the creation and evaluation of systems conceived for conveying or augmenting music signals through the sense of touch. The primary purposes of these systems are to enrich the musical experience of hearing listeners and to enhance music perception for listeners with hearing loss. This paper (Part I of a two-part review) surveys the state of the art in musical haptics systems for listeners, with a focus on technologies, system architectures, and design strategies. We introduce a taxonomy that classifies existing systems according to their application domains, accessibility focus, actuator technologies, haptic rendering approaches, form factors, and deployment contexts, and we apply it to a systematically collected body of literature. Building on this analysis, we describe the core components of musical haptics systems and examine trends in wearability and connectivity, as well as the needed shift from laboratory prototypes toward live and ecologically valid scenarios. We further discuss technical challenges related to offline and real-time processing, latency, synchronization, and the deployment of machine-learning techniques on embedded hardware. Part II of this review will complement the technical perspective presented here with a survey of the perceptual, emotional, and behavioral effects of musical haptics systems on both hearing and Deaf and Hard-of-Hearing users.
Conventional vision-based object recognition is subject to many variables that can degrade performance, such as poor illumination and occlusion. Tactile sensing-based object recognition can assist in situations where these issues occur, enabling a system to exploit features that standard visual systems cannot identify. Tactile sensing-based object recognition involves gathering and processing physical features arising from the interaction between a tactile sensing system, such as a robot, and a physical object. This work proposes a novel object recognition pipeline driven by a multi-sensory tactile fusion model based on the state-of-the-art time-series classifier MiniROCKET. It builds upon the authors' previously published research, which achieved state-of-the-art performance for single-modality tactile object recognition by implementing a collection of classification heads on both the ROCKET and MiniROCKET pipelines. This work demonstrates how combining multiple tactile sensing modalities can achieve excellent performance, exceeding that of current systems which use a combination of visual and tactile sensing. This research achieves state-of-the-art performance on the PHAC-2 dataset, exceeding previously reported accuracy by 3.3% while reducing computational costs by up to 90%.
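A minimal sketch of feature-level fusion with MiniROCKET, in the spirit of the pipeline described above, is shown below; it assumes the sktime implementation of MiniROCKET and is not the authors' code. Each modality is a 3-D array of shape (n_samples, n_channels, n_timepoints) sharing the same labels.

```python
import numpy as np
from sktime.transformations.panel.rocket import MiniRocketMultivariate
from sklearn.linear_model import RidgeClassifierCV

def fused_minirocket_predict(modalities_train, y_train, modalities_test):
    """modalities_*: list of 3-D arrays, one per tactile sensing modality.
    Fit a MiniROCKET transform per modality, concatenate the convolutional
    features, and classify with the standard ridge head."""
    train_feats, test_feats = [], []
    for X_tr, X_te in zip(modalities_train, modalities_test):
        rocket = MiniRocketMultivariate().fit(X_tr)
        train_feats.append(np.asarray(rocket.transform(X_tr)))
        test_feats.append(np.asarray(rocket.transform(X_te)))
    clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
    clf.fit(np.hstack(train_feats), y_train)
    return clf.predict(np.hstack(test_feats))
```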
Achieving both high torque for convincing kinesthetic feedback and a compact form factor for user comfort in magnetorheological (MR) actuators remains challenging due to the rapid degradation of torque with reduced dimensions. To address this limitation, a small-scale, high-torque MR brake with a novel dual multi-drum configuration is proposed for haptic applications. In this configuration, two identical multi-gap MR shear regions are located on both sides of an electromagnetic coil. Once an excitation is applied, the MR shear regions are simultaneously activated, effectively maximizing active shear areas within the limited volume. This configuration facilitates a significant torque output without compromising the brake's compactness. The optimized brake prototype has compact dimensions of Ø29.2×44 mm and a mass of 171.5 g, with a peak torque of 1165.4 mN$\cdot$m. The torque-to-volume and torque-to-mass ratios are 39.6 kN/m$^{2}$ and 6.8 mN$\cdot$m/g, respectively, which are higher than those of other MR brakes of comparable size. In a practical scenario, a handheld haptic device based on the brake prototype was constructed. The experimental results demonstrated controllable torque rendering, with a just noticeable difference of 54.18 $\pm$ 17.73 mN$\cdot$m and a Weber fraction of 11.91 $\pm$ 3.90%, thereby highlighting its potential for small-scale, high-torque haptic actuation.
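As a quick worked relation for the psychophysical result above: the Weber fraction is the JND divided by the reference torque, so the two reported mean values imply a reference torque of roughly (the reference torque itself is not stated in this abstract; this is only an approximate consistency check):

$$ k = \frac{\Delta T}{T_{\mathrm{ref}}} \quad\Rightarrow\quad T_{\mathrm{ref}} \approx \frac{\Delta T}{k} = \frac{54.18\ \mathrm{mN\cdot m}}{0.1191} \approx 455\ \mathrm{mN\cdot m}. $$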