WO2022188022A1 - Système de perception basé sur l'audition et procédé d'utilisation associé - Google Patents


Info

Publication number
WO2022188022A1
WO2022188022A1 PCT/CN2021/079689 CN2021079689W
Authority
WO
WIPO (PCT)
Prior art keywords
auditory
information
user
instruction
perception system
Prior art date
Application number
PCT/CN2021/079689
Other languages
English (en)
Chinese (zh)
Inventor
曹庆恒
Original Assignee
曹庆恒
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 曹庆恒 filed Critical 曹庆恒
Priority to CN202180000425.0A priority Critical patent/CN113196390B/zh
Priority to PCT/CN2021/079689 priority patent/WO2022188022A1/fr
Publication of WO2022188022A1 publication Critical patent/WO2022188022A1/fr

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01D MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00 Measuring or testing not otherwise provided for
    • G01D21/02 Measuring two or more variables by means not covered by a single other subclass
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/225 Feedback of the input speech
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01 Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]

Definitions

  • the present invention relates to the technical field of information and communication, in particular to an auditory-based perception system and a method for using the same.
  • Hearing is the most important way for humans to perceive external information besides vision.
  • The human auditory system uses the information carried by sound to perceive the sound source, its content, and the surrounding space, location, and environment.
  • For blind people in particular, using hearing to perceive external information has become an important way of obtaining information.
  • The main purpose of the present invention is to provide an auditory-based perception system and a method of using the same, which can help people better use hearing to perceive external information, enhance the effect of perception, and improve the efficiency with which blind people, or sighted people in low-light environments, walk, find objects, and use computers and smart devices/smart systems.
  • the present invention provides an auditory-based perception system, the system includes: a user interaction module, an information acquisition module and an analysis and processing module,
  • The user interaction module is used for receiving an instruction and feeding feedback information back to the user as an auditory perception signal;
  • the information acquisition module is used to acquire information, and the information is used for analysis and processing by the analysis processing module in combination with the instruction;
  • the analysis and processing module is configured to perform analysis and calculation according to the instruction and the information, execute the instruction and/or obtain feedback information.
  • the conversion of feedback information into auditory perception signals is completed by the user interaction module or the analysis processing module.
  • the auditory perception signal represents information through at least one of the frequency, rhythm, melody, interval, orientation, distance, size, height, length, and timbre of the sound.
  • the auditory perception signal includes a speech signal.
  • The user interaction module includes an instruction acquisition module and an auditory perception signal output module.
  • The auditory perception signal output module includes at least one of an earphone, a bone conduction earphone, a speaker, a hearing aid, and a brain-computer interface.
  • The instruction acquisition module includes at least one of a speech recognition device, a sound recognition device, a gesture recognition device, a body motion recognition device, an expression recognition device, a body signal recognition device, a smart wearable device, a smart tablet, a mobile phone, a mouse, a keyboard, a smart handle, a smart cane, a smart finger ring, and a smart bracelet.
  • The information acquisition module includes at least one of an image sensor, a radar device, a radio frequency identification device, a positioning device, an audio acquisition device, an infrared device, an ultraviolet device, a laser scanner, a metal detector, a temperature sensing device, a light sensing device, a touch sensing device, an air pressure sensor, a water pressure sensor, an olfactory recognition device, a magnetic field detection device, a wind detection device, a humidity detection device, an electric power detection device, a speed detection device, an altitude detection device, a chemical analysis device, and a radiation detection device.
  • the system can also feed back information to the user through non-auditory signals, or feed back information to the user through a combination of auditory signals and non-auditory signals.
  • The auditory-based perception system further includes a data transmission module. The data transmission module sends the instruction received by the user interaction module, together with the information acquired by the information acquisition module or the instructions and/or information processed by the analysis processing module, to a network/system/server/smart device; the network/system/server/smart device performs analysis and calculation according to the instruction and the information, executes the instruction, and/or transmits the result back to the data transmission module.
  • the auditory-based perception system can also obtain information for analysis and processing in combination with instructions from the network/system/server/smart device through the data transmission module or the information acquisition module.
  • The auditory-based perception system as described above is used for at least one of: assisting walking, assisting movement, sports training, navigation, assisting driving, assisting parking, positioning, location guidance, finding targets, reflecting pictures, reflecting objects, detection, reconnaissance, exploration, design, maintenance, equipment use, device use, learning, teaching, shopping, office work, social interaction, games, entertainment, film and television, computing, health testing, disease diagnosis, surgical treatment, virtual concerts, and virtual reality technology.
  • The above-mentioned auditory-based perception system, as an auditory-based operating system, can be used alone or in combination with other systems to operate computers, artificial intelligence, smart devices, and virtual reality devices.
  • The present invention also provides a method for using an auditory-based perception system, in which, among other steps, the feedback information is converted into an auditory perception signal and fed back to the user.
  • An auditory-based perception system and its using method of the present invention include: a user interaction module, an information acquisition module and an analysis and processing module.
  • The method includes: receiving a user instruction; acquiring information for analyzing and processing the instruction; performing analysis and calculation according to the instruction and the information, executing the instruction and/or obtaining feedback information; and, if there is feedback information, converting the feedback information into auditory perception signals and feeding them back to the user.
  • The auditory-based perception system and its method of use of the present invention can help people better use their hearing to perceive external information, enhance the effect of perception, and can help blind people, as well as sighted people in conditions of insufficient or poor light.
  • FIG. 1 is a schematic diagram of an auditory-based perception system according to the first embodiment of the present invention.
  • FIG. 2 is a method flowchart of a method for using an auditory-based perception system according to a second embodiment of the present invention.
  • FIG. 1 is a schematic diagram of an auditory-based perception system according to the first embodiment of the present invention.
  • the auditory-based perception system of the present invention includes: a user interaction module 10 , an information acquisition module 20 and an analysis processing module 30 .
  • The user interaction module 10 is used to receive the instruction and feed feedback information back to the user as an auditory perception signal; the information acquisition module 20 is used to acquire information, which the analysis processing module 30 uses, in combination with the instruction, for analysis and processing; the analysis processing module 30 is used for analyzing and calculating according to the instruction and the information, executing the instruction and/or obtaining feedback information.
  • the working flow of the auditory-based perception system of the present invention is:
  • the user interaction module 10 receives user instructions.
  • the user interaction module 10 includes an instruction acquisition module and an auditory perception signal output module.
  • the instruction obtaining module is used to obtain the instruction issued by the user.
  • Users can issue instructions through voice, gestures, movements, expressions, or body signals such as body temperature, heartbeat, blood pressure, and breathing, or by operating tablet computers, mobile phones, mice, keyboards, smart handles, smart canes, and the like.
  • The instruction acquisition module may include at least one of a speech recognition device, a sound recognition device, a gesture recognition device, a body motion recognition device, an expression recognition device, a body signal recognition device, a smart wearable device, a smart tablet, a mobile phone, a mouse, a keyboard, a smart handle, a smart walking stick, a smart finger ring, and a smart wristband, and may also include other devices suitable for receiving user instructions. Commands can also be issued on a timed basis, periodically, or when certain trigger conditions are met, depending on the settings.
  • the auditory perception signal output module is used to feed back information to the user through the auditory perception signal.
  • The auditory perception signal output module may include at least one of earphones, bone conduction earphones, speakers, hearing aids, and brain-computer interfaces, other suitable devices, or a combination of related devices.
  • The auditory-based perception system of the present invention can also feed back information to the user through non-auditory signals, for example through a Braille tablet, an intelligent Braille handle, or an intelligent Braille mouse, or feed back information to the user through a combination of auditory and non-auditory signals.
  • An auditory perception signal represents information through at least one of the characteristics of sound frequency, rhythm, melody, interval, orientation, distance, size, height, length, and timbre, so that the user can use the signal to perceive information.
  • Spatial orientation information can be conveyed to the user through sound. Because the transmission characteristics of sound waves travelling from a sound source to a specific orientation can be expressed as a function data set, that data set can be used to process an audio signal so that the signal exhibits the transmission characteristics of sound waves travelling from the sound source to that orientation.
  • When the processed signal is played back, the sound exhibits those transmission characteristics, so the user perceives a virtual sound source at that spatial orientation.
  • The specific orientation may include the direction, location, height, and the like of the sound.
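As a rough illustrative sketch (not part of the patent text), placing a sound at a virtual azimuth can be approximated with an interaural time difference plus a crude level difference; the function name `place_source`, the head radius, and the gain values below are all assumptions for demonstration, not the patent's method:

```python
import numpy as np

def place_source(mono, fs, azimuth_deg, head_radius=0.0875, c=343.0):
    """Render a mono signal at a virtual azimuth using an interaural
    time difference (ITD) and a simple interaural level difference."""
    az = np.radians(azimuth_deg)
    # Woodworth-style spherical-head approximation of the ITD
    itd = head_radius / c * (abs(az) + np.sin(abs(az)))
    delay = int(round(itd * fs))                # far-ear delay in samples
    gain_near, gain_far = 1.0, 0.6              # crude ILD: far ear is quieter
    near = np.concatenate([mono, np.zeros(delay)])
    far = np.concatenate([np.zeros(delay), mono])
    if azimuth_deg >= 0:                        # source to the right
        left, right = gain_far * far, gain_near * near
    else:                                       # source to the left
        left, right = gain_near * near, gain_far * far
    return np.stack([left, right], axis=0)      # shape (2, n_samples)

fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)              # 1 s test tone
stereo = place_source(tone, fs, azimuth_deg=45)  # perceived to the right
```

A real implementation would use measured HRTF data rather than this two-parameter approximation, but the sketch shows how one audio signal becomes a two-ear signal carrying orientation information.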
  • The information acquisition module 20 may include at least one of an image sensor, a radar device, a radio frequency identification device, a positioning device, an audio acquisition device, an infrared device, an ultraviolet device, a laser scanner, a metal detector, a temperature sensing device, a light sensing device, a touch sensing device, an air pressure sensor, a water pressure sensor, an olfactory recognition device, a magnetic field detection device, a wind detection device, a humidity detection device, a power detection device, a speed detection device, an altitude detection device, a chemical analysis device, and a radiation detection device, and may also use other suitable devices to obtain relevant information.
  • the analysis and processing module 30 performs analysis and calculation according to the instruction and the information, executes the instruction and/or obtains feedback information.
  • For example, for a blind-walking instruction, a three-dimensional space model is established from the obtained information, such as the user's current position, the destination position, and obstacles.
  • the space model may also include information related to the time dimension.
  • The model may map a physical scene/object; it may also map a virtual scene/object, such as an operating system, an operation interface, a game, or a virtual system; it may also combine the physical and the virtual.
  • the feedback information is converted into an auditory perception signal, which is then fed back to the user by the user interaction module 10 .
  • Converting the feedback information into auditory perception signals is completed by the user interaction module 10 or the analysis processing module 30, and may also be completed by other apparatuses or devices. For example, the blind person's walking route, and the guidance and reminders given according to the actual situation, are converted into auditory perception signals and fed back to the user.
  • An auditory perception signal simulating a sound source at the target position can be emitted, so that the user perceives the location information through the signal and walks toward that location.
  • The auditory perception signal is adjusted as the distance between the user and the target position changes, so that the user can walk to the position. Each time the user moves, the system, according to the change between the user's current position and the target position, emits an auditory perception signal as if a sound source at the target position were transmitting to the user's current position, so that the user continuously perceives the target position during movement and finally reaches it. If there are obstacles on the route, other sound signals with different frequency, rhythm, melody, interval, orientation, distance, size, height, length, and timbre can convey information such as the obstacle's position, distance, height, and danger, so that the user perceives this information through the auditory signal and bypasses the obstacle.
  • The meanings of the sound signals, and of their combinations of different frequency, rhythm, melody, interval, orientation, distance, size, height, length, and timbre, used to represent destinations, obstacles, targets, objects, and content can be preset, as signals the user has been trained to distinguish and understand; they can also be existing sound signals that already carry information, such as speech signals of an existing language or other regular sound signals.
  • An auditory-based perception system can set and/or train the definitions of sound signals of different frequency, rhythm, melody, interval, orientation, distance, size, pitch, length, and timbre, such as definitions of objects, targets, orientations, distances, colors, temperatures, heights, warning/danger signals, operation signals, and operation results.
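One minimal way such a preset signal vocabulary could be realized is sketched below. The `CATEGORY_FREQ` table and the rate rule are invented for illustration only: pitch names the event category, and beep rate encodes urgency (here, obstacle distance):

```python
import numpy as np

# Hypothetical preset vocabulary: each event category gets a base pitch.
CATEGORY_FREQ = {"target": 880.0, "obstacle": 440.0, "warning": 220.0}

def encode_event(category, distance_m, fs=44100):
    """Return one second of audio: a beep train whose pitch names the
    category and whose beep rate rises as the distance shrinks."""
    freq = CATEGORY_FREQ[category]
    rate = max(1.0, 10.0 / max(distance_m, 0.1))  # beeps per second
    beep_len = int(0.05 * fs)                      # 50 ms beeps
    period = int(fs / rate)
    signal = np.zeros(fs)
    t = np.arange(beep_len) / fs
    beep = np.sin(2 * np.pi * freq * t)
    for start in range(0, fs - beep_len, period):
        signal[start:start + beep_len] = beep
    return signal

near = encode_event("obstacle", 0.5)  # fast beeping: obstacle is close
far = encode_event("obstacle", 8.0)   # slow beeping: obstacle is far
```

A deployed system would of course train users on the chosen mapping first, as the text describes.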
  • the auditory-based perception system of the present invention can also feed back information to the user by combining the auditory perception signal with other signals, for example, the auditory perception signal can be combined with the Braille signal to feed back information.
  • The auditory-based perception system of the present invention can be used for various purposes, such as walking, finding objects, games, computing, virtual concerts, and virtual reality technology.
  • People with amblyopia, myopia, hyperopia, presbyopia, or eye fatigue, or people in situations where careful viewing is inconvenient, such as driving, can easily use a computer based on the auditory perception system to paint, play, compose, write, work, and study.
  • This enhances the importance of hearing in these fields and the effect of people's use of auditory information, so that humans, especially the blind, can live more conveniently.
  • Step 1: The user issues an instruction to find an object.
  • the user may issue an instruction through voice, and the system acquires the user's voice instruction through the microphone worn on the user and recognizes the instruction. Users can also issue commands through gestures, actions, or other means.
  • the user's instruction can be obtained through a related device such as a camera.
  • Step 2: After the system obtains the user instruction, it obtains relevant information through the information acquisition module 20: first, it finds the sought item and determines its location; second, it determines the user's current location; then it obtains the surrounding environment information.
  • the image data in the space can be obtained from different angles through multiple image sensors for spatial modeling and position calculation.
  • The image sensor can be set at a suitable position in the room, or can be worn on the user's body; for example, the image sensor, the microphone for obtaining instructions, and the earphone for feeding back the auditory perception signal can be integrated into a head-worn portable device worn on the user's head.
  • Step 3: Perform analysis and calculation according to the instruction and the information to obtain feedback information.
  • A spatial model is established from the acquired information, mainly the multi-angle spatial image data together with the position information and relevant sizes of the objects in the space. It is also possible to obtain an established spatial model through the network or other means, or to modify an existing spatial model into a new one that conforms to the actual situation. After that, the spatial model and the location information of the user and the item are used to plan a suitable fetching path.
  • the acoustic model is established according to the spatial model, and the correlation function of the acoustic wave transmission is obtained, which is used to calculate the auditory perception signal.
  • A beam tracing algorithm can be used to establish the acoustic model, calculating the intersections of the relevant beams with the space to obtain the correlation function of sound wave transmission.
  • When the audio signal processed by this correlation function is converted into sound through the playback device, the sound exhibits the transmission characteristics of sound waves travelling from the sound source to that orientation, so that the user perceives the virtual sound source's spatial orientation.
  • the feedback information is converted into auditory perception signals and fed back to the user.
  • A virtual sound source is placed at the location of the item; after processing with the correlation function of sound wave transmission, the auditory perception signal is obtained and output to the user through the earphone. If the planned fetching path is not straight, the path can be divided into multiple straight segments; a sound source is then virtualized at the end point of the first straight segment, and the auditory perception signal is calculated and fed back to the user. After the user reaches the end point of each straight segment, the next segment begins.
  • A sound source is virtualized at the end point of the second straight segment, and the auditory perception signal is calculated and fed back to guide the user to that point, and so on until the user reaches the location of the sought item.
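The segment-by-segment guidance described above can be sketched as follows; the coordinates, the `reach_radius` threshold, and the helper names are hypothetical, standing in for whatever path planner and tracker a real system would use:

```python
import math

def segment_path(waypoints):
    """Split a polyline fetch path into straight segments; the system
    virtualizes a sound source at the end point of each segment in turn."""
    return list(zip(waypoints[:-1], waypoints[1:]))

def next_beacon(user_pos, segments, current_idx, reach_radius=0.5):
    """Advance to the next segment end once the user is within
    reach_radius of the current one; return (index, beacon) or (None, None)
    when the final end point, i.e. the item's location, is reached."""
    while current_idx < len(segments):
        end = segments[current_idx][1]
        if math.dist(user_pos, end) > reach_radius:
            return current_idx, end
        current_idx += 1
    return None, None

# Hypothetical room coordinates: walk 3 m along one wall, then 4 m across.
path = [(0.0, 0.0), (3.0, 0.0), (3.0, 4.0)]
segments = segment_path(path)
```

At each position update, the returned beacon would be rendered as the virtual sound source the user walks toward.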
  • A corresponding auditory perception signal can also be set and fed back to the user, so that the user is continuously corrected and reminded along the way.
  • the auditory-based perception system of the present invention can also be used for virtual concerts.
  • An acoustic model is established according to the scene of the virtual concert venue.
  • each instrument, part, etc. is virtualized into different sound sources, and these sound sources can be located in different positions.
  • the correlation functions of the sound wave transmission from different sound sources to the user's location are calculated separately.
  • the music generated by each virtual sound source is processed by the correlation function of sound wave transmission and then superimposed to obtain the final auditory perception signal of the concert and output to the user.
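The superposition step might look like the following sketch, where the per-position impulse responses are toy single-tap filters standing in for the real correlation functions produced by the acoustic model, and the instrument names and tones are invented:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
# Two hypothetical instruments as pure tones
sources = {"violin": np.sin(2 * np.pi * 660 * t),
           "cello": np.sin(2 * np.pi * 110 * t)}
# Toy (left-ear, right-ear) impulse responses per stage position; a real
# system would derive these from the venue's acoustic model.
hrirs = {"violin": (np.array([0.2]), np.array([1.0])),   # stage right
         "cello": (np.array([1.0]), np.array([0.2]))}    # stage left

def render_concert(sources, hrirs):
    """Filter each source by the impulse responses for its position,
    then superimpose everything into one two-channel signal."""
    n = max(len(s) + len(hrirs[k][0]) - 1 for k, s in sources.items())
    out = np.zeros((2, n))
    for name, sig in sources.items():
        h_left, h_right = hrirs[name]
        left, right = np.convolve(sig, h_left), np.convolve(sig, h_right)
        out[0, :len(left)] += left
        out[1, :len(right)] += right
    return out

mix = render_concert(sources, hrirs)
```

Adjusting an instrument's position or volume, as the text later describes, amounts to swapping its impulse-response pair or scaling its signal before this mix.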
  • The correlation function for calculating sound wave transmission may be a head related transfer function (HRTF) data set, an interaural time difference (ITD) data set, an interaural intensity difference (IID) data set, or any other appropriate data set that can characterize the transmission characteristics of sound waves emitted by a sound source toward a certain orientation.
  • ITD refers to the time difference between the sound signal reaching both ears due to the distance difference between the sound source and the left and right ears.
  • IID refers to the difference in intensity of the acoustic signal when it reaches both ears due to the difference in the distance between the sound source and the left and right ears.
  • Both ITD and IID are functions of sound source location and sound wave frequency.
  • HRTF is the acoustic transfer function from the sound source to both ears in the free field, which is used to describe the characteristic changes that occur when the sound wave emitted by the sound source in the free sound field is incident at a certain point in the ear canal at a certain angle.
  • HRTF is a function of the location of the sound source, the frequency of the sound wave, and the shape and properties of the body surface.
  • The unit impulse response from the sound source to the measurement point at the ear is called the Head Related Impulse Response (HRIR).
  • HRTF is the Fourier transform of HRIR.
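The HRIR/HRTF relationship stated above can be demonstrated numerically. The impulse response here is a toy example (a direct arrival plus one weaker reflection), not a measured HRIR:

```python
import numpy as np

fs = 44100
hrir = np.zeros(256)
hrir[10] = 1.0   # direct sound arriving after 10 samples
hrir[40] = 0.3   # one weaker early reflection

# The HRTF is the Fourier transform of the HRIR (real-input FFT here).
hrtf = np.fft.rfft(hrir)
freqs = np.fft.rfftfreq(len(hrir), d=1.0 / fs)

# Rendering a mono signal "through this ear" is convolution with the HRIR.
x = np.random.default_rng(0).standard_normal(1024)
ear_signal = np.convolve(x, hrir)
```

For a real binaural rendering one would convolve the source signal with the measured left-ear and right-ear HRIRs for the desired direction.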
  • The audio signals can respectively exhibit the transmission characteristics of sound waves travelling from the sound source to multiple specific orientations.
  • A virtual auditory environment can thus be constructed. On this basis, if the user's real physical orientation is projected as a specific orientation in the virtual auditory environment, a correspondence can be established between the user's different real physical orientations and the different specific orientations in the virtual auditory environment, so that users hear sound effects consistent with their own physical orientation.
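In the simplest case, projecting the user's real physical orientation into the virtual auditory environment reduces to re-expressing each source azimuth relative to the listener's current head yaw, so the virtual scene stays fixed in the room rather than turning with the head. This helper is a hypothetical sketch:

```python
def relative_azimuth(source_az_deg, head_yaw_deg):
    """Azimuth at which a fixed virtual source should be rendered,
    given the listener's head yaw; result wrapped to (-180, 180]."""
    return (source_az_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

# A source straight ahead (0 deg) moves to the listener's left (-90 deg)
# when the listener turns 90 deg to the right.
rel = relative_azimuth(0.0, 90.0)
```

The renderer would feed this relative azimuth into whatever HRTF/ITD/IID processing is in use each time head orientation changes.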
  • The system can also be set to allow the user to move position; when the user's position changes, the correlation functions of the sound wave transmission of the different sound sources are recalculated.
  • In this way, the user can enjoy the concert as if present at the venue. Users can also issue instructions to adjust the sounding position, volume, and other attributes of particular instruments and parts, enjoying a concert arranged to their own taste.
  • The auditory-based perception system as described above is used for at least one of: assisting walking, assisting movement, sports training, navigation, assisting driving, assisting parking, positioning, location guidance, finding targets, reflecting pictures, reflecting objects, detection, reconnaissance, exploration, design, maintenance, equipment use, device use, learning, teaching, shopping, office work, social interaction, games, entertainment, film and television, computing, health testing, disease diagnosis, surgical treatment, virtual concerts, and virtual reality technology. For example, during walking, the user can be reminded of the route, changes in road conditions, and the specific location of obstacles through the position, distance, and type of the signal.
  • In sports training, auditory signals can remind athletes whether the angle and distance of an action meet the training requirements, or guide the athlete's actions. In assisted driving, the driver/pilot can be reminded of the route and of the position and distance of relevant objects through auditory signals. Location guidance lets users accurately grasp a target position through auditory signals; for example, when inserting a key into a keyhole, the auditory signal can reflect the relative orientation and distance between the keyhole and the key, so they can be aligned quickly even without seeing them. Targets can also be found by radar or infrared equipment.
  • The auditory signal then feeds back the relative orientation and distance of the target/person to the user. When the system is used to reflect a picture, the picture can be obtained through the image sensor, after which image recognition software parses the picture content into elements such as points, lines, graphics, and colors.
  • The system converts these information elements and their position information into auditory signals, so that the user receives the relevant information of the picture through the auditory signals; alternatively, the system can feed back the picture information at a position specified by the user through auditory information, and, as the user-specified position moves across the picture, transmit the picture information to the user through auditory information.
  • The picture here can be a real picture or a virtual picture stored in the system. In design work, the system can increase the spatial information fed back to the designer, giving a more three-dimensional and intuitive feel for the scheme involved.
  • the above application can be achieved by the auditory-based perception system of the present invention alone or in combination with other systems and devices.
  • The above-mentioned auditory-based perception system, as an auditory-based operating system, can be used alone or in combination with other systems to operate computers, artificial intelligence, smart devices, virtual reality devices, or other suitable devices.
  • Existing computer systems usually use a video operation interface. For blind people or people with poor eyesight, or when ordinary people use it at a distance, the information fed back by the computer system cannot be well transmitted to users.
  • The existing visual-based interface is limited in the amount and form of information it carries, and often cannot fully, vividly, and accurately accept instructions and feed back information.
  • With the present invention, feedback information is converted into auditory perception signals and fed back to the user, and the user's instructions can be received in multiple dimensions, so that blind or poorly sighted people, or ordinary people at a distance, can easily obtain the information fed back by the computer system. The invention can also extend existing modes of computer interaction and enhance their effect, reduce the difficulty for relevant personnel in using computer systems and control equipment, and improve the effect of computer use.
  • A computer system based on the auditory perception system can realize the operations of an existing computer system concerning position, route, quantity, size, temperature, time, degree, shape, state, and objects, and can also include object recognition/discrimination/expansion, movement of objects' virtual locations, and object modification/deletion, generation, and alteration. The auditory-based perception system of the present invention can also be used to control devices to operate on target objects, for example to operate manipulators, robots, smart furniture, unmanned vehicles, drones, electronic paper books, and the like.
  • a computer system combined with an auditory-based perception system can increase the spatial dimension of information and other dimensions of information that can be carried by hearing on the basis of existing computer applications, greatly improving the application efficiency and use experience of computers.
  • The auditory-based perception system of the present invention may further include a data transmission module. The data transmission module sends the instructions received by the user interaction module and the information acquired by the information acquisition module, or the instructions and/or information processed by the analysis processing module, to a server/network/system/smart device; the server/network/system/smart device performs analysis and calculation according to the instruction and the information, executes the instruction, and/or transmits the result back to the data transmission module.
  • Specific networks/systems/smart devices include: the Internet, the Internet of Things, satellite networks, local area networks, smart office systems, smart home systems, smart phones, smart TVs, smart cars, smart roads, smart cities, drones, smart robots, smart kitchens, smart clothing, etc.
  • Transmitting data through the data transmission module to a server/network/system/smart device for analysis and calculation can increase the data processing capability of the auditory-based perception system of the present invention and expand its range of application. It can also reduce the computational load on the analysis processing module, lower its hardware requirements, and reduce the cost and weight of the system.
  • The auditory-based perception system of the present invention can also obtain information from the Internet, the Internet of Things, or other information systems, servers, and smart devices through the data transmission module or the information acquisition module, for analysis and calculation in combination with instructions.
  • Specific sources of information can include: the Internet, the Internet of Things, satellite networks, local area networks, smart office systems, smart home systems, smart phones, smart speakers, smart cars, smart roads, smart cities, drones, smart robots, smart kitchens, smart clothing, smart glasses, etc.
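The offloading pattern described above can be sketched in a few lines. The patent does not fix a protocol, so the payload format and function names below are assumptions: the instruction and locally gathered information are serialized and handed to a remote endpoint, with a local fallback so the system degrades gracefully when the link is down.

```python
import json

def build_payload(instruction, sensor_info):
    """Package a user instruction and locally acquired information for
    transmission to a server/network/smart device (format is illustrative)."""
    return json.dumps({"instruction": instruction, "info": sensor_info})

def dispatch(payload, send_remote=None, process_local=None):
    """Prefer remote analysis for its greater processing capability;
    fall back to on-device processing if the connection fails."""
    if send_remote is not None:
        try:
            return send_remote(payload)
        except ConnectionError:
            pass  # network unavailable: degrade to local processing
    return process_local(payload)
```

This split is what lets the analysis processing module stay small and cheap: heavy computation happens remotely whenever a connection is available.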
  • The model can be established either as a complete model built by a single system, or as partial models built by multiple systems and devices on the basis of unified signal/information standards and then integrated into a complete model by one or more of those systems or servers.
  • The information needed to build the model can be obtained through smart appliances, smart furniture, smart buildings (homes, wards, hospitals, schools, factories), smart roads, smart phones, smart speakers, smart cars, smart city systems, image sensors, positioning devices, and audio acquisition devices.
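The multi-device model building described above can be sketched as a merge of partial models. The patent does not define the record format, so the structure below is an assumption: each device reports objects in a shared coordinate frame, each record carries a timestamp, and on conflict the most recent observation wins.

```python
def merge_partial_models(*partials):
    """Integrate partial environment models from several devices into one.

    Each partial maps an object id to a record containing at least a
    'timestamp' field; later observations of the same object replace
    earlier ones, so the merged model reflects the freshest data.
    """
    merged = {}
    for partial in partials:
        for obj_id, record in partial.items():
            if obj_id not in merged or record["timestamp"] > merged[obj_id]["timestamp"]:
                merged[obj_id] = record
    return merged
```

A real system would also need to reconcile coordinate frames between devices; the unified signal/information standards mentioned above are what makes such reconciliation possible.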
  • FIG. 2 is a flowchart of the method of using an auditory-based perception system according to a second embodiment of the present invention.
  • The method of using the auditory-based perception system of the present invention includes:
  • S1: receive a user instruction;
  • S2: acquire information for analysis and processing in combination with the instruction;
  • S3: perform analysis and calculation according to the instruction and the information, execute the instruction and/or obtain feedback information;
  • S4: if there is feedback information, convert the feedback information into an auditory perception signal and feed it back to the user.
  • The method of using an auditory-based perception system of the present invention corresponds to the technical features of the auditory-based perception system of the present invention; reference can be made to the foregoing description of the system, which will not be repeated here.
  • In summary, the auditory-based perception system of the present invention comprises a user interaction module, an information acquisition module, and an analysis processing module.
  • The method includes: receiving a user instruction; acquiring information for analysis and processing in combination with the instruction; performing analysis and calculation according to the instruction and the information, executing the instruction and/or obtaining feedback information; and, if there is feedback information, converting the feedback information into auditory perception signals and feeding them back to the user.
  • The auditory-based perception system of the present invention and its method of use can help people make better use of hearing to perceive external information and enhance the effect of perception, assisting blind people, as well as sighted people in conditions of insufficient or poor light.
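The four steps above can be sketched as one perception cycle. The patent specifies no implementation, so the sketch below only fixes the control flow; the callables (instruction capture, sensing, analysis, auditory rendering, playback) stand in for whatever concrete components a system provides.

```python
def run_perception_cycle(receive_instruction, acquire_info,
                         analyze, to_auditory, play):
    """One pass through the described method: S1 receive an instruction,
    S2 acquire supporting information, S3 analyze/execute, and S4 render
    any feedback as an auditory perception signal for the user."""
    instruction = receive_instruction()        # S1: user instruction
    info = acquire_info(instruction)           # S2: gather relevant info
    feedback = analyze(instruction, info)      # S3: analyze and execute
    if feedback is not None:                   # S4: only if feedback exists
        play(to_auditory(feedback))
    return feedback
```

Note that S4 is conditional: an instruction that only actuates a device (e.g. moving a manipulator) may produce no feedback, in which case nothing is played.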

Abstract

Disclosed are an auditory-based perception system and a method of using it. The system comprises: a user interaction module (10), an information acquisition module (20), and an analysis and processing module (30). The method consists of: receiving a user instruction (S1); acquiring information for analysis and processing in combination with the instruction (S2); performing analysis and calculation according to the instruction and the information, and executing the instruction and/or obtaining feedback information (S3); and, if there is feedback information, converting the feedback information into an auditory perception signal and returning it to the user (S4). The auditory-based perception system and its method of use can help users make better use of hearing to perceive information from the outside world, improve the effect of perception, and increase the efficiency of walking, searching for an object, and using a computer or smart device/smart system, etc., when the user is in conditions such as low light, poor light, excessively strong light, amblyopia, myopia, hyperopia, presbyopia, or eye fatigue, or when it is inconvenient for the user to look closely, for example while driving.
PCT/CN2021/079689 2021-03-09 2021-03-09 Système de perception basé sur l'audition et procédé d'utilisation associé WO2022188022A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180000425.0A CN113196390B (zh) 2021-03-09 2021-03-09 一种基于听觉的感知系统及其使用方法
PCT/CN2021/079689 WO2022188022A1 (fr) 2021-03-09 2021-03-09 Système de perception basé sur l'audition et procédé d'utilisation associé

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/079689 WO2022188022A1 (fr) 2021-03-09 2021-03-09 Système de perception basé sur l'audition et procédé d'utilisation associé

Publications (1)

Publication Number Publication Date
WO2022188022A1 true WO2022188022A1 (fr) 2022-09-15

Family

ID=76976987

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/079689 WO2022188022A1 (fr) 2021-03-09 2021-03-09 Système de perception basé sur l'audition et procédé d'utilisation associé

Country Status (2)

Country Link
CN (1) CN113196390B (fr)
WO (1) WO2022188022A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023019376A1 (fr) * 2021-08-15 2023-02-23 曹庆恒 Système de détection tactile et son procédé d'utilisation
CN113975585A (zh) * 2021-09-10 2022-01-28 袁穗薇 Diversified training method for children
CN113934296A (zh) * 2021-10-11 2022-01-14 北京理工大学 Interactive assistance system based on visual perception for blind people using household appliances

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203825313U (zh) * 2013-12-16 2014-09-10 智博锐视(北京)科技有限公司 盲人导航眼镜
CN104983511A (zh) * 2015-05-18 2015-10-21 上海交通大学 针对全盲视觉障碍者的语音帮助智能眼镜系统
CN106214436A (zh) * 2016-07-22 2016-12-14 上海师范大学 一种基于手机端的智能导盲系统及其导盲方法
US20170303052A1 (en) * 2016-04-18 2017-10-19 Olive Devices LLC Wearable auditory feedback device
EP3432606A1 (fr) * 2018-03-09 2019-01-23 Oticon A/s Système d'aide auditive
CN109831631A (zh) * 2019-01-04 2019-05-31 华南理工大学 一种基于视觉注意特性的视-听觉转换导盲方法
CN110559127A (zh) * 2019-08-27 2019-12-13 上海交通大学 基于听觉与触觉引导的智能助盲系统及方法
CN111643324A (zh) * 2020-07-13 2020-09-11 江苏中科智能制造研究院有限公司 一种智能盲人眼镜

Also Published As

Publication number Publication date
CN113196390A (zh) 2021-07-30
CN113196390B (zh) 2024-04-05

Similar Documents

Publication Publication Date Title
WO2022188022A1 (fr) Système de perception basé sur l'audition et procédé d'utilisation associé
AU2023200677B2 (en) System and method for augmented and virtual reality
Hu et al. An overview of assistive devices for blind and visually impaired people
Csapó et al. A survey of assistive technologies and applications for blind users on mobile platforms: a review and foundation for research
CN104011788A (zh) 用于增强和虚拟现实的系统和方法
Schwarze et al. A camera-based mobility aid for visually impaired people
Hub et al. Interactive tracking of movable objects for the blind on the basis of environment models and perception-oriented object recognition methods
Giudice et al. Spatial learning and navigation using a virtual verbal display
WO2023019376A1 (fr) Système de détection tactile et son procédé d'utilisation
May et al. Spotlights and soundscapes: On the design of mixed reality auditory environments for persons with visual impairment
Du et al. Human–robot collaborative control in a virtual-reality-based telepresence system
Wang et al. A survey of 17 indoor travel assistance systems for blind and visually impaired people
Mazuryk et al. History, applications, technology and future
Zhang et al. A survey of immersive visualization: Focus on perception and interaction
D. Gomez et al. See ColOr: an extended sensory substitution device for the visually impaired
Mihelj et al. Introduction to virtual reality
Thalmann et al. Virtual reality software and technology
Röber et al. Interacting With Sound: An Interaction Paradigm for Virtual Auditory Worlds.
Olivetti Belardinelli et al. Sonification of spatial information: audio-tactile exploration strategies by normal and blind subjects
Sardana et al. Introducing locus: a nime for immersive exocentric aural environments
Jones et al. Use of Immersive Audio as an Assistive Technology for the Visually Impaired–A Systematic Review
Magnenat-Thalmann et al. Virtual reality software and technology
Luna Introduction to Virtual Reality
Hub et al. Real and virtual explorations of the environment and interactive tracking of movable objects for the blind on the basis of tactile-acoustical maps and 3D environment models
Liu et al. Augmented Reality Powers a Cognitive Prosthesis for the Blind

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21929504

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21929504

Country of ref document: EP

Kind code of ref document: A1