CN115120476A - Headset device for assisting blind people to perceive based on hearing - Google Patents

Headset device for assisting blind people to perceive based on hearing

Info

Publication number
CN115120476A
CN115120476A (application CN202110324809.XA)
Authority
CN
China
Prior art keywords
sound
blind
module
information
hearing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110324809.XA
Other languages
Chinese (zh)
Inventor
侯思冰
姚舜
侯彧
葛立
赵春宇
黄震宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202110324809.XA priority Critical patent/CN115120476A/en
Publication of CN115120476A publication Critical patent/CN115120476A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H: PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00: Appliances for aiding patients or disabled persons to walk about
    • A61H3/06: Walking aids for blind persons
    • A61H3/061: Walking aids for blind persons with electronic detecting or guiding means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223: Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Pain & Pain Management (AREA)
  • Computational Linguistics (AREA)
  • Rehabilitation Therapy (AREA)
  • Human Computer Interaction (AREA)
  • Epidemiology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Stereophonic System (AREA)

Abstract

A hearing-based auxiliary perception head-mounted device for the blind, comprising: a head-mounted fixing structure, a depth camera, multi-channel earphones, a temperature sensor, a processor and a satellite navigation module, wherein: the depth camera is located above the head-mounted fixing structure, the earphones are located on the two sides of the head-mounted fixing structure, the temperature sensor and the satellite navigation module are located between the head-mounted fixing structure and the depth camera, and the processor is located on the rear side of the head-mounted fixing structure and is connected to the depth camera, the earphones, the satellite navigation module and the temperature sensor respectively. Using sound simulation, an augmented-reality method, the invention helps the blind avoid obstacles intuitively.

Description

Headset device for assisting blind people to perceive based on hearing
Technical Field
The invention relates to a technology in the field of navigation equipment, and in particular to a hearing-based auxiliary perception head-mounted device for the blind.
Background
Most existing obstacle-avoidance technology for the blind relies on spoken voice prompts. These require the brain to further process the words, are not intuitive enough, and easily confuse the user when too much information is conveyed verbally at the same time.
Disclosure of Invention
Aiming at the above defects in the prior art, the invention provides a hearing-based auxiliary perception head-mounted device for the blind, which uses sound simulation, an augmented-reality method, to help the blind avoid obstacles intuitively.
The invention is realized by the following technical scheme:
the invention relates to a hearing-based blind auxiliary perception head-mounted device, which comprises: wear fixed knot structure, depth camera, multichannel earphone, temperature sensor, treater and satellite navigation module, wherein: the depth camera is located above the head-wearing fixed structure, the earphones are located on two sides of the head-wearing fixed structure, the temperature sensor and the satellite navigation module are located between the head-wearing fixed structure and the depth camera, and the processor is located on the rear side of the head-wearing fixed structure and connected with the depth camera, the earphones, the satellite navigation module and the temperature sensor respectively.
The processor comprises an image processing module, a temperature acquisition module, a speech recognition module, a visual-auditory conversion module and a sound output module connected in sequence, wherein: the image processing module receives the depth information acquired by the depth camera and identifies from it the type, direction, distance and size of each object; the temperature acquisition module acquires the temperature of an object through the temperature sensor and passes it to the visual-auditory conversion module; the speech recognition module acquires and recognizes instructions spoken by the blind user; the visual-auditory conversion module converts the type, direction, distance and size of an object into sound information with corresponding timbre, frequency, phase difference and intensity and passes it to the sound output module; and the sound output module assigns the sound information to different channels according to the direction, distance and size of the obstacle and plays it through the corresponding channels of the earphones.
The image processing module uses, but is not limited to, a low-latency, high-precision neural network to identify the type, direction, distance and size of objects, as sketched below.
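As an illustrative sketch only, this step may be organized as follows in Python, using torchvision's SSDLite detector purely as a stand-in (the patent does not name a specific network; the helper name and the median-depth distance estimate are assumptions):

    import numpy as np
    import torch
    import torchvision

    # Stand-in low-latency detector; any low-delay, high-precision network may be used.
    model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT")
    model.eval()

    def detect_objects(rgb, depth_m, score_thresh=0.5):
        """Return (label, box, distance_m) triples for one aligned RGB-D frame."""
        img = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            out = model([img])[0]
        results = []
        for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
            if score < score_thresh:
                continue
            x0, y0, x1, y1 = box.int().tolist()
            patch = depth_m[y0:y1, x0:x1]          # depth pixels inside the box
            valid = patch[patch > 0]               # ignore holes in the depth map
            dist = float(np.median(valid)) if valid.size else float("nan")
            results.append((int(label), (x0, y0, x1, y1), dist))
        return results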
The speech recognition module recognizes an instruction to switch between the normal obstacle-avoidance mode and the navigation mode, and a destination instruction in the navigation mode.
The sound information comprises: simulated obstacle sounds, high- and low-temperature warnings, road identification and guidance, and identification of living and non-living bodies.
The data transfer inside the processor may be programmed in a high-level language such as Python or MATLAB, with the audio transmitted in a non-blocking manner, as sketched below.
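A minimal sketch of such non-blocking audio output, assuming the sounddevice library (the queue layout and block size are assumptions, not fixed by the patent):

    import queue
    import numpy as np
    import sounddevice as sd

    fs = 44100          # sampling rate
    n_channels = 8      # physical 7.1 output
    blocksize = 1024
    audio_q = queue.Queue()   # producers enqueue (blocksize, n_channels) float32 blocks

    def callback(outdata, frames, time, status):
        """Called by the audio driver; never blocks the image/temperature threads."""
        try:
            outdata[:] = audio_q.get_nowait()
        except queue.Empty:
            outdata[:] = np.zeros((frames, n_channels), dtype=np.float32)

    stream = sd.OutputStream(samplerate=fs, channels=n_channels,
                             blocksize=blocksize, callback=callback)
    stream.start()      # returns immediately; playback proceeds in the background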
The conversion comprises: matching obstacle type to timbre, matching the obstacle-to-user distance to sound intensity, matching obstacle size to the number of sound sources, and matching dynamic obstacles to sound-change rules, as illustrated after this paragraph.
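For illustration only, the four matchings may be sketched as follows (the class names, timbre labels and thresholds are assumptions, not values fixed by the patent):

    TIMBRE_BY_TYPE = {                      # obstacle type -> timbre
        "table": "dull_wood", "chair": "dull_wood",
        "railing": "bright_metal",
        "person": "footsteps", "computer": "keyboard_clicks",
    }

    def intensity_from_distance(d_m, d_max=5.0):
        """Linear distance-to-intensity matching: the closer the object, the louder."""
        return max(0.0, 1.0 - d_m / d_max)

    def n_sources_from_size(width_m, height_m):
        """Small objects get one central source; large ones get boundary + centre sources."""
        return 1 if max(width_m, height_m) < 0.5 else 5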
The assignment means: the sound output module generates the final output audio matrix Y, composed of an object matrix Y_o and an edge-reinforcement matrix Y_e, i.e. Y = Y_o + Y_e, wherein: the object matrix approximates the sound of a planar source pixel by pixel, so as to exploit the depth information more fully; the generated audio matrix carries, for each object, stereo information in the shape dimension of the object, and stereo information in the channel dimension of the multi-channel earphones, for a richer user experience; the edge-reinforcement matrix is obtained by reinforcing the edge sound information.

The object matrix is Y_o = [y_1, y_2, ..., y_N], where N is the number of channels and the audio vector y_i of each channel superposes, over the pixels of every detected object, delayed and attenuated copies of that object's tone signal. M is the number of objects detected in the frame, β(θ) is a direction-angle function, and the elements of the attenuation matrix combine β(θ) and the distance attenuation with a normalization factor a. The elements of the distance matrix are D_ijk = ||XYZ[j, k, :] − H[i, k, :]||_2, with i = 1, 2, ..., N, j = 1, 2, ..., M, k = 1, 2, ..., K, where ||·||_2 is the l_2 norm, so the matrix D is accurate to one pixel. The length of the audio sequence y is L = t_last · f_s, where t_last and f_s are the audio duration and the audio sampling rate respectively. The point-column delay matrix for the audio sequence, rounded down, is Delay = ⌊f_s · D_rel / u⌋, where the relative distance matrix is D_rel = D − min(D, 1), min(D, 1) being the minimum of the D matrix along each row, and u is the speed of sound in the current environment. A sketch of these matrices follows.
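A minimal NumPy sketch of the distance, relative-distance and delay matrices, assuming XYZ holds one 3-D coordinate per object pixel and H one 3-D position per earphone channel (the function name and broadcast layout are assumptions):

    import numpy as np

    def distance_and_delay(XYZ, H, fs=44100.0, u=343.0):
        """XYZ: (M, K, 3) pixel coordinates per object; H: (N, 3) channel positions.
        Returns D, D_rel, Delay, each of shape (N, M, K)."""
        D = np.linalg.norm(XYZ[None, :, :, :] - H[:, None, None, :], axis=-1)
        D_rel = D - D.min(axis=0, keepdims=True)      # one 0 per column: the reference channel
        Delay = np.floor(fs * D_rel / u).astype(int)  # inter-channel delay in samples
        return D, D_rel, Delay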
The audio matrix to be generated is Y ∈ R^(L×N), where N is the number of channels. The object position matrix returned by the depth camera and the colour camera is XYZ ∈ R^(M×K×3), where K is the maximum number of pixels occupied by an object in the image, i.e. the number of pixels contained in its contour; the XYZ matrix is therefore precise to the coordinates of every pixel contained in every object. The modelled channel position matrix is H ∈ R^(N×K×3), holding the position of each earphone channel repeated along the pixel dimension. From these, the distance matrix D ∈ R^(N×M×K) is generated (R denotes the real numbers), and the distance matrix is used to attenuate the sound pressure according to the empirical formula

A = Q / (4πr²) + 4 / R_c,  with room constant R_c = S · ᾱ / (1 − ᾱ),

giving the attenuation matrix, where Q is a directivity factor that accounts for the indoor position of the point source, r is the distance between the measuring point and the source, ᾱ is the average indoor sound absorption coefficient, and S is the total indoor surface area. Since in this problem the phase difference between the channels provides the azimuth information, only the distance difference matters when processing the phase; the distance matrix D is therefore converted into a relative distance matrix, and clearly each column of the processed D_rel contains a 0 element, corresponding to the reference channel used in localization. The point-column delay matrix for the audio sequence is then obtained from the relative distance matrix, and the Delay matrix, together with the corresponding source sequence y, is used to delay the audio sequence. A sketch of the attenuation evaluation follows.
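For illustration, the empirical attenuation may be evaluated elementwise on the distance matrix as follows (the values of Q, the absorption coefficient and the surface area are placeholders for the actual environment constants):

    import numpy as np

    def indoor_attenuation(r, Q=2.0, alpha_bar=0.3, S=100.0):
        """A = Q/(4*pi*r^2) + 4/R_c, with room constant R_c = S*alpha/(1 - alpha)."""
        R_c = S * alpha_bar / (1.0 - alpha_bar)
        return Q / (4.0 * np.pi * np.square(r)) + 4.0 / R_c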
Edge reinforcement means treating the edge of an object as a linear sound source and computing the combined sound field when its sound reaches the two ears. The algorithm may proceed as follows: map the pixel coordinates of the detected edge endpoints to camera coordinates through the camera intrinsic matrix, combine them with the depth information to obtain the actual coordinates, i.e. the edge line in the real world, and obtain the sound pressure function of the edge-line source by integrating the free-field sound pressure attenuation formula, yielding a distance matrix from which the edge-reinforcement matrix Y_e is generated. Edge detection may use the Hough transform, i.e. imperfect instances of candidates within a given class of shapes are found by a voting procedure in a parameter space, the candidates being obtained as local maxima in an accumulator space constructed by the Hough transform algorithm; a sketch follows.
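An illustrative sketch of this edge-detection step with OpenCV's probabilistic Hough transform (the Canny and Hough thresholds are assumptions):

    import cv2
    import numpy as np

    def detect_edge_segments(gray):
        """Return endpoint pairs [(xA, yA, xB, yB), ...] of straight edges in a frame."""
        edges = cv2.Canny(gray, 50, 150)                       # binary edge map
        segs = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=60, minLineLength=40, maxLineGap=8)
        return [tuple(s[0]) for s in segs] if segs is not None else []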
The Hough transform detects straight lines (line segments) in a black-and-white image, tolerates gaps in the feature boundary description, and is relatively unaffected by image noise. The steps are as follows:

Let the pixel coordinates of the endpoints of a detected edge be x_A, x_B, and let the camera intrinsic matrix be

K = [ f_x, 0, u_0 ; 0, f_y, v_0 ; 0, 0, 1 ]

where f_x = f/dx, f_y = f/dy, f is the focal length, (u_0, v_0) are the coordinates of the origin of the image coordinate system in the pixel coordinate system, and dx, dy are the physical dimensions of one pixel in the x and y directions of the image plane. The pixel coordinates are mapped to camera coordinates X_A, X_B, which are multiplied by the depth Z obtained from the depth camera to give the coordinates in the real world, i.e. the edge line in the real world.

For this edge line, the sound pressure function of the edge-line source is obtained by integrating the free-field sound pressure attenuation formula along the line, giving, up to a constant factor,

p²(d) ∝ (θ_B − θ_A) / d

where d is the distance between the observation point and the centre line of the edge-line source, and θ_A, θ_B are the angles subtended at the observation point by the endpoints A and B; from this the edge-reinforcement matrix Y_e is generated.
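A sketch of the back-projection and of the integrated line-source pressure above (the intrinsic parameters are placeholders, and the pressure is returned up to a constant factor; this is a reconstruction under the stated assumptions, not a definitive implementation):

    import numpy as np

    K_INTRINSIC = np.array([[600.0,   0.0, 320.0],   # f_x, 0,  u_0 (placeholders)
                            [  0.0, 600.0, 240.0],   # 0,  f_y, v_0
                            [  0.0,   0.0,   1.0]])

    def pixel_to_camera(u, v, Z, K=K_INTRINSIC):
        """Back-project pixel (u, v) with depth Z (metres) into camera coordinates."""
        return np.array([(u - K[0, 2]) * Z / K[0, 0],
                         (v - K[1, 2]) * Z / K[1, 1],
                         Z])

    def line_source_pressure_sq(P, A, B):
        """p^2 at observation point P, proportional to (theta_B - theta_A) / d."""
        AB = B - A
        t = np.dot(P - A, AB) / np.dot(AB, AB)
        foot = A + t * AB                              # foot of the perpendicular from P
        d = np.linalg.norm(P - foot)
        s_A = np.dot(A - foot, AB) / np.linalg.norm(AB)   # signed offsets along the line
        s_B = np.dot(B - foot, AB) / np.linalg.norm(AB)
        return (np.arctan2(s_B, d) - np.arctan2(s_A, d)) / d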
Technical effects
The invention as a whole overcomes the defects of the prior art, which can measure the direction of an obstacle but cannot distinguish its specific type, cannot measure the size and distance of an object, and can only measure static objects, being unable to judge the motion state of a dynamic object. Compared with the prior art, the invention measures the size and distance of the obstacle in front of the blind user and conveys them through the frequency and intensity of the corresponding sound, so that the user can intuitively feel the physical state of the obstacle. A measuring module for dynamic obstacles conveys the motion state of a dynamic obstacle through changes in the sound, helping the user avoid it.
Drawings
FIG. 1 is an oblique view of the present embodiment;
FIG. 2 is a front view of the present embodiment;
FIG. 3 is a rear view of the present embodiment;
In the figure: a head-mounted fixed structure 1, a depth camera 2, an earphone 3, a temperature sensor 4, a processor 5, a power supply 6, a GPS 7;
fig. 4 is a flowchart of the operation of the device in the normal obstacle avoidance mode;
FIG. 5 is a diagram of a headphone channel profile;
FIG. 6 is a graph of image processing results for an exemplary embodiment;
FIG. 7 is a diagram of audio visualization results according to an embodiment.
Detailed Description
As shown in fig. 1 and 2, the present embodiment comprises: a head-mounted fixing structure 1, a depth camera 2, earphones 3, a temperature sensor 4, a processor 5, a power supply 6 and a satellite navigation module 7, wherein: the depth camera 2 is located above the head-mounted fixing structure 1, the earphones 3 are located on the two sides of the head-mounted fixing structure 1, the temperature sensor 4 and the satellite navigation module 7 are located between the depth camera 2 and the earphones 3, and the processor 5 is connected to the power supply 6, is located on the rear side of the head-mounted fixing structure 1 and is connected to the depth camera 2, the earphones 3, the temperature sensor 4 and the satellite navigation module 7 respectively.
The depth camera 2 is an Intel® RealSense™ Depth Camera D435i.
The earphones 3 are physical 7.1-channel earphones, such as the Razer Tiamat 7.1 V2.
The satellite navigation module is a GPS.
The processor 5 comprises an image processing module, a temperature acquisition module, a speech recognition module, a visual-auditory conversion module and a sound output module connected in sequence, wherein: the image processing module receives the depth information and identifies the contours in the depth picture to obtain the object information; the object temperature information acquired by the temperature acquisition module is passed to the visual-auditory conversion module; the speech recognition module recognizes instructions spoken by the blind user; the visual-auditory conversion module passes to the sound output module the timbre and frequency of the sound corresponding to the object information, together with the phase difference and intensity of the sound for the different channels; and the sound output module delivers the sound information to the corresponding channels of the earphones 3.
The object information includes: category, orientation, distance, and size.
The sound information comprises: simulated obstacle sounds, high- and low-temperature warnings, road identification and guidance, and identification of living and non-living bodies.
The visual-auditory conversion module matches obstacle type to timbre, the obstacle-to-user distance to sound intensity, obstacle size to the number of sound sources, and dynamic obstacles to sound-change rules. That is, synthesized, simulated sounds represent the object type: wooden objects such as tables and chairs use a dull wooden timbre; metallic objects use a relatively bright metallic timbre; objects with a fixed association for people, such as a person, are represented by footsteps; a computer is represented by the sound of keystrokes on a keyboard. The distance between the object and the user maps linearly to sound intensity: the closer the object, the stronger the emitted sound. The number of sound sources assigned to an object depends on its size: small objects such as cups, mobile phones and computers use a single central source, while large objects such as tables, beds and doors use boundary-plus-centre multi-point sources. For dynamic obstacles the Doppler effect is applied, expressing the speed of movement through the change in sound frequency: as an object approaches the user, the frequency rises, and the faster the approach, the higher it rises; as the object recedes, the frequency falls, and the faster it recedes, the lower it falls. A minimal sketch of the Doppler mapping follows.
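A minimal sketch, assuming the textbook moving-source Doppler formula and an arbitrary base frequency:

    def doppler_frequency(f0, v_radial, u=343.0):
        """Perceived frequency for an obstacle moving at radial speed v_radial
        (positive = approaching the listener): f = f0 * u / (u - v_radial)."""
        return f0 * u / (u - v_radial)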
The sound output module outputs to different channels according to the direction, distance, size and/or temperature of the obstacle. The multi-channel earphones have 8 channels in total: one centre channel; six surround channels, with a front, side and rear surround on each of the left and right sides; and one bass channel. When the simulated object lies in different directions, the phase difference and intensity difference of the sound received by the different channels apply the derived correspondence to simulated sound sources in those directions. For example: when the temperature sensor measures an object hotter than 65 °C, a high-temperature warning is sent to the user; when an object colder than 0 °C is measured, a low-temperature warning is given; and a dynamic object at 25-45 °C is treated as a living body, and a special sound is conveyed to the user, as sketched below.
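The temperature rules may be sketched as follows (the thresholds come from the text above; the cue names are placeholders):

    def temperature_cue(temp_c, is_dynamic):
        """Map a measured surface temperature to the warning cue played to the user."""
        if temp_c > 65.0:
            return "high_temperature_warning"
        if temp_c < 0.0:
            return "low_temperature_warning"
        if is_dynamic and 25.0 <= temp_c <= 45.0:
            return "living_body_cue"     # dynamic object at body temperature
        return None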
The speech recognition module recognizes, in the standby state, an instruction to switch between the normal obstacle-avoidance mode and the navigation mode, and, in the navigation mode, the destination spoken by the blind user.
In the normal obstacle-avoidance mode, the relative position of the user and the depth camera 2 is fixed by the head-mounted structure 1; the visual information collected by the depth camera 2 and the temperature information collected by the temperature sensor 4 are processed by the processor 5, converted into sound information and transmitted to the physical 7.1-channel earphones. Exploiting the high fidelity with which physical 7.1-channel earphones reproduce three-dimensional spatial sound, the sound is conveyed to the user, who judges the type, specific direction and distance of an obstacle by distinguishing the timbre, intensity and frequency of the sound, thereby avoiding the obstacle.
In the navigation mode, the blind user speaks a destination instruction; the speech recognition module in the processor 5 recognizes the destination and passes it to the satellite navigation module 7, which plans a route on receiving the instruction and returns it to the processor; the sound output module of the processor then simulates, through the earphones, the road sound of the target route and conveys it to the user, thereby achieving navigation.
In practical experiments, in the normal obstacle-avoidance mode, a blind user wearing the device can learn from the prompt sounds the structure of an indoor environment, such as its walls, doors and windows; its layout, such as tables, chairs, televisions and computers; and its details, such as the positions of cups and mobile phones. Outdoors, the device can indicate the positions of lane lines and trees, and in particular, for dynamic obstacles such as cars, bicycles and pedestrians, it indicates their speed and direction, helping the user avoid obstacles while walking.
Example of a practical scenario: as shown in fig. 6, the type and size of each object are recognized from the live-action picture taken by the depth camera, and the distance between each obstacle and the user is obtained from the depth image. The obstacle information is processed by the algorithm above and the required sound is sent to the earphones; the visualization of that sound is shown in fig. 7, where the ordinate corresponds to the sound amplitude and the abscissa to the sound-sample index, showing the sound output for the two persons in the picture, the sound output for the television, chair and mouse, and the combined sound output for all five objects.
Compared with the prior art, the device measures the size and distance of the obstacle in front of the blind user and conveys them through the frequency and intensity of the corresponding sound, so that the user intuitively feels the state of the obstacle. A measuring module for dynamic obstacles conveys their motion state through changes in the sound, helping the user avoid them. In addition, a temperature measuring module issues low- or high-temperature warnings, and the ability to distinguish living from non-living bodies lets the user better understand the surrounding environment, adding a layer of protection. A navigation mode is also added: the user selects the normal obstacle-avoidance mode or the navigation mode, speaks the destination in the navigation mode, and the device simulates the road sound of the target route to guide the user, realizing the navigation function. The modules are connected in series using multi-threaded programming, in pure Python or mixed Python + MATLAB; audio generation uses non-blocking data-flow programming; and target recognition uses a low-latency, high-precision neural network, so that the user obtains a rich, low-latency, high-precision, information-dense experience.
The foregoing embodiments may be modified in many different ways by those skilled in the art without departing from the spirit and scope of the invention, which is defined by the appended claims and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (9)

1. A hearing-based auxiliary perception head-mounted device for the blind, comprising: a head-mounted fixing structure, a depth camera, multi-channel earphones, a temperature sensor, a processor and a satellite navigation module, wherein: the processor is located on the rear side of the head-mounted fixing structure and is connected to the depth camera, the earphones, the satellite navigation module and the temperature sensor respectively;
the processor comprises an image processing module, a temperature acquisition module, a speech recognition module, a visual-auditory conversion module and a sound output module connected in sequence, wherein: the image processing module receives the depth information acquired by the depth camera and identifies from it the type, direction, distance and size of each object; the temperature acquisition module acquires the temperature of an object through the temperature sensor and passes it to the visual-auditory conversion module; the speech recognition module acquires and recognizes instructions spoken by the blind user; the visual-auditory conversion module converts the type, direction, distance and size of an object into sound information with corresponding timbre, frequency, phase difference and intensity and passes it to the sound output module; and the sound output module assigns the sound information to different channels according to the direction, distance and size of the obstacle and plays it through the corresponding channels of the earphones;
the conversion comprises: matching obstacle type to timbre, matching the obstacle-to-user distance to sound intensity, matching obstacle size to the number of sound sources, and matching dynamic obstacles to sound-change rules.
2. The hearing-based auxiliary perception head-mounted device for the blind according to claim 1, wherein the image processing module identifies the type, direction, distance and size of objects using a neural network.
3. The hearing-based auxiliary perception head-mounted device for the blind according to claim 1, wherein the speech recognition module recognizes an instruction to switch between the normal obstacle-avoidance mode and the navigation mode, and a destination instruction in the navigation mode.
4. The hearing-based auxiliary perception head-mounted device for the blind according to claim 1, wherein the visual-auditory conversion module matches obstacle type to timbre, the obstacle-to-user distance to sound intensity, obstacle size to the number of sound sources, and dynamic obstacles to sound-change rules.
5. The hearing-based auxiliary perception head-mounted device for the blind according to claim 1, wherein the sound output module outputs to different channels according to the direction, distance, size and/or temperature of the obstacle; the multi-channel earphones have 8 channels in total: one centre channel; six surround channels, with a front, side and rear surround on each of the left and right sides; and one bass channel; and when the simulated object lies in different directions, the phase difference and intensity difference of the sound received by the different channels apply the derived correspondence to simulated sound sources in those directions.
6. The hearing-based auxiliary perception head-mounted device for the blind according to claim 1, wherein the assignment means that the sound output module generates the final output audio matrix Y, composed of an object matrix Y_o and an edge-reinforcement matrix Y_e, i.e. Y = Y_o + Y_e, wherein: the object matrix approximates the sound of a planar source pixel by pixel so as to exploit the depth information more fully; the generated audio matrix carries, for each object, stereo information in the shape dimension of the object, and stereo information in the channel dimension of the multi-channel earphones, for a richer user experience; and the edge-reinforcement matrix is obtained by reinforcing the edge sound information.
7. The hearing-based auxiliary perception head-mounted device for the blind according to claim 1 or 3, wherein the speech recognition module recognizes, in the standby state, an instruction to switch between the normal obstacle-avoidance mode and the navigation mode, and, in the navigation mode, the destination input by the blind user.
8. The hearing-based auxiliary perception head-mounted device for the blind according to claim 7, wherein in the normal obstacle-avoidance mode the relative position of the user and the depth camera is fixed by the head-mounted structure; the visual information collected by the depth camera and the temperature information collected by the temperature sensor are processed by the processor, converted into sound information and transmitted to the physical 7.1-channel earphones; and, exploiting the high fidelity with which physical 7.1-channel earphones reproduce three-dimensional spatial sound, the sound is conveyed to the blind user, who judges the type, specific direction and distance of an obstacle by distinguishing the timbre, intensity and frequency of the sound, thereby avoiding the obstacle;
in the navigation mode, the blind user speaks a destination instruction; the speech recognition module in the processor recognizes the destination and passes it to the satellite navigation module, which plans a route on receiving the instruction and returns it to the processor; and the sound output module of the processor simulates, through the earphones, the road sound of the target route and conveys it to the user, thereby achieving navigation.
9. The hearing-based auxiliary perception head-mounted device for the blind according to claim 1 or 5, wherein the conversion performed for the sound output module comprises: matching obstacle type to timbre, matching the obstacle-to-user distance to sound intensity, matching obstacle size to the number of sound sources, and matching dynamic obstacles to sound-change rules.
CN202110324809.XA 2021-03-26 2021-03-26 Headset device for assisting blind people to perceive based on hearing Pending CN115120476A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110324809.XA | 2021-03-26 | 2021-03-26 | Headset device for assisting blind people to perceive based on hearing (published as CN115120476A)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110324809.XA | 2021-03-26 | 2021-03-26 | Headset device for assisting blind people to perceive based on hearing (published as CN115120476A)

Publications (1)

Publication Number | Publication Date
CN115120476A | 2022-09-30

Family

ID=83375098

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110324809.XA | Headset device for assisting blind people to perceive based on hearing (pending, published as CN115120476A) | 2021-03-26 | 2021-03-26

Country Status (1)

Country Link
CN (1) CN115120476A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN116035875A * | 2023-01-29 | 2023-05-02 | 中航华东光电(上海)有限公司 | HRTF earphone system with intelligent obstacle avoidance reminding function


Similar Documents

Publication Publication Date Title
Bai et al. Smart guiding glasses for visually impaired people in indoor environment
CN108227914B (en) Transparent display device, control method using the same, and controller thereof
US10721521B1 (en) Determination of spatialized virtual acoustic scenes from legacy audiovisual media
US7725547B2 (en) Informing a user of gestures made by others out of the user's line of sight
Bujacz et al. Sonification: Review of auditory display solutions in electronic travel aids for the blind
Schauerte et al. Multimodal saliency-based attention for object-based scene analysis
CN113196390B (en) Auditory sense system and application method thereof
CN113692750A (en) Sound transfer function personalization using sound scene analysis and beamforming
CN108245385A (en) A kind of device for helping visually impaired people's trip
TW202014992A (en) System and method for simulating expression of virtual facial model
KR20220060534A (en) Personalized equalization of audio output using 3D reconstruction of the user's ear
Blessenohl et al. Improving indoor mobility of the visually impaired with depth-based spatial sound
CN114365510A (en) Selecting spatial positioning for audio personalization
CN115120476A (en) Headset device for assisting blind people to perceive based on hearing
KR102057393B1 (en) Interactive audio control system and method of interactively controlling audio
CN215229965U (en) Headset device for assisting blind people to perceive based on hearing
Shen et al. A system for visualizing sound source using augmented reality
CN111031468B (en) Visual auxiliary method and device based on individualized HRTF stereo
US10728657B2 (en) Acoustic transfer function personalization using simulation
Fusiello et al. A multimodal electronic travel aid device
US11671756B2 (en) Audio source localization
Iyama et al. Visualization of auditory awareness based on sound source positions estimated by depth sensor and microphone array
KR20160090781A (en) Apparatus for Converting Video Signal to Sound Method for Converting Video Signal into Voice Signal for the Visually Impaired
WO2021101460A1 (en) Navigational assistance system with auditory augmented reality for visually impaired persons
Jones et al. Use of Immersive Audio as an Assistive Technology for the Visually Impaired–A Systematic Review

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination