CN113163293A - Environment sound simulation system and method based on wireless intelligent earphone - Google Patents
- Publication number
- CN113163293A (application CN202110501438.8A)
- Authority
- CN
- China
- Prior art keywords
- sound wave
- unit
- sound
- wireless
- processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor ; Earphones; Monophonic headphones
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S5/00—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
- G01S5/18—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2201/00—Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
- H04R2201/10—Details of earpieces, attachments therefor, earphones or monophonic headphones covered by H04R1/10 but not provided for in any of its subgroups
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2203/00—Details of circuits for transducers, loudspeakers or microphones covered by H04R3/00 but not provided for in any of its subgroups
Abstract
The invention discloses an environment sound simulation system based on a wireless intelligent earphone, comprising: a loudspeaker unit, arranged on the wireless earphone, for receiving an electric signal, converting it into audio, and playing it; a sound wave unit for emitting a sound wave positioning signal; a sound wave detection unit for detecting the direct-path positioning signal emitted by the sound wave unit; a processor, in communication with the loudspeaker unit, the sound wave unit, and the sound wave detection unit, which processes the position information acquired by the sound wave unit and the sound wave detection unit and controls the loudspeaker unit; and an environment simulation unit, in communication with the processor, which simulates the sound source position and provides sound source position information to the processor. Either the sound wave unit or the sound wave detection unit is arranged on the wireless earphone. With the sound wave detection unit and the sound wave unit mounted respectively on the wireless earphone and in the scene/on the device, together with the real-time sound source position simulated by the environment simulation unit, the invention provides simulated environmental sound that changes with the user's movement.
Description
Technical Field
The invention relates to an environmental sound simulation system and method based on a wireless intelligent earphone, and belongs to the technical field of sound waves.
Background
Virtual reality is a practical technology developed in the 20th century that draws on computing, electronic information, and simulation. Its basic principle is that a computer simulates a virtual environment so as to give the user a sense of immersion. In essence, data from real life are converted, through computer-generated electronic signals and various output devices, into phenomena the user can perceive; these phenomena may reproduce real objects or represent things invisible to the naked eye, expressed through three-dimensional models.
The ultimate goal of virtual reality is true human-computer interaction across all of the user's senses (hearing, vision, touch, taste, smell, and so on). Simulations are often combined with a real venue and driven dynamically by the user's visible movements. In such setups, all simulated environmental sound reaches the user through the earphones: wherever the simulated sound source is located, the sound heard by a user wearing virtual equipment comes from the earphones. The advantage is that the sound source is not obstructed by the scene environment and is delivered directly and losslessly. Measured against the ultimate goal of virtual reality, however, earphone playback has a shortcoming: the sound is preset, so although the user can interact with the visual scene, it is difficult to interact with the scene's sound in real time.
Disclosure of Invention
The invention aims to provide an ambient sound simulation system and method based on a wireless intelligent earphone, which solve the prior-art problems that a user wearing listening equipment can hardly interact with audio information in real time and that ambient sound simulation can hardly be made dynamic; they also solve the problem that existing wireless earphones have only a single audio playback function and poor intelligence.
To achieve this purpose, the invention adopts the following technical scheme: an ambient sound simulation system based on a wireless smart headset, comprising:
a loudspeaker unit, arranged on the wireless earphone, for receiving an electric signal, converting it into audio, and playing it;
a sound wave unit for emitting a sound wave positioning signal;
a sound wave detection unit for detecting the direct-path positioning signal emitted by the sound wave unit;
a processor, communicating with the loudspeaker unit, the sound wave unit, and the sound wave detection unit, which processes the position information acquired by the sound wave unit and the sound wave detection unit and controls the loudspeaker unit;
an environment simulation unit, communicating with the processor, which simulates the sound source position and provides sound source position information to the processor;
wherein either the sound wave unit or the sound wave detection unit is arranged on the wireless earphone.
The technical scheme is further improved as follows:
1. In the above scheme, when the sound wave unit is arranged on the wireless earphone, the sound wave unit is the speaker unit of the wireless earphone.
2. In the above scheme, when the sound wave unit is arranged on the wireless earphone, each wireless earphone unit has at least one sound wave unit.
3. In the above scheme, when the sound wave detection unit is arranged on the wireless earphone, the sound wave detection unit is the microphone module built into the wireless earphone.
4. In the above scheme, when the sound wave detection unit is arranged on the wireless earphone, each wireless earphone unit has at least one sound wave detection unit.
5. In the above scheme, the position information includes the relative distance and relative angle between the sound wave unit and the sound wave detection unit.
6. In the above scheme, the sound wave positioning signal is an ultrasonic positioning signal.
To achieve the above purpose, the invention also provides the following technical scheme: an environment sound simulation method based on a wireless intelligent earphone, comprising the following steps:
after a user puts on the wireless earphone, the sound wave unit transmits a sound wave positioning signal; the sound wave detection unit detects the direct-path positioning signal and transmits the data to the processor; the processor calculates the relative position of the sound wave unit and the sound wave detection unit and determines the initial position of the wireless earphone;
according to the relative position of a virtual sounding object and the user in virtual reality, the environment simulation unit simulates the sound source position in the scene, matched to the user's initial position in the actual scene;
the processor determines the audio the user would hear in the actual scene from the relative position of the sound source and the initial position, and controls the loudspeaker unit to emit the simulated audio.
1. In the above scheme, the sound wave detection units are installed on the wireless earphones, and the sound wave units are distributed in an array at the top of the scene.
2. In the above scheme, when the wireless earphone is used alone, the sound source position simulated by the environment simulation unit is freely configurable.
Owing to the application of the above technical scheme, the invention has the following advantages over the prior art:
1. The environment sound simulation system and method based on the wireless intelligent earphone transmit and receive sound wave positioning signals in real time through the sound wave detection unit and the sound wave unit, mounted respectively on the wireless earphone and in the scene/on the device, to obtain the real-time position and dynamic changes of the wireless earphone in the scene. Combined with the real-time sound source position simulated by the environment simulation unit, the processor controls the loudspeaker to adjust the volume, frequency, range, and other properties of the played audio in real time, providing simulated environmental sound that changes with the user's movement, enhancing human-computer interaction, and improving the user experience.
2. As the number of sound wave units and/or sound wave detection units increases, the positioning information progresses from relative distance and relative angle to absolute positioning, fully capturing the user's dynamic changes and trajectory in three-dimensional space. Even when the user shakes the head left and right, the sound heard by the two ears changes differently, further improving the precision of the simulation system.
3. When only the earphone is worn, the sound source position is assigned by the environment simulation unit to a playback device (a smartphone, speaker, television, or other intelligent terminal) or to another virtual position, so the user can pair with any device to achieve an auditory environment simulation effect, improving applicability.
Drawings
Fig. 1 is a schematic diagram of a working module of an ambient sound simulation system based on a wireless smart headset in embodiment 1 of the present invention.
Description of reference numerals: 1. a wireless headset; 2. a speaker unit; 3. an acoustic wave detection unit; 4. an acoustic wave unit; 5. a processor; 6. and an environment simulation unit.
Detailed Description
In the following embodiments, the clocks of the sound wave detection module and the sound wave module are synchronized; if they are not, they are preferably synchronized before interacting. If synchronization is difficult, an accurate time of flight can still be obtained by exchanging signals twice and cancelling out the clock offset, from which relatively accurate position information is solved; details are given in the earlier application (202011329162.1) and are not repeated here.
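The two-exchange scheme mentioned above can be sketched as follows: when a signal is sent in each direction, the unknown clock offset between the two devices appears with opposite signs in the two one-way intervals and cancels on averaging. The timestamp names and the speed-of-sound constant below are illustrative assumptions, not values from the patent:

```python
# Sketch of two-way ranging with unsynchronized clocks (assumed scheme).
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (assumed constant)

def two_way_distance(t_send_a, t_recv_b, t_send_b, t_recv_a):
    """Distance between devices A and B from a two-way signal exchange.

    t_send_a / t_recv_b: timestamps of the A->B signal (A's / B's clock)
    t_send_b / t_recv_a: timestamps of the B->A signal (B's / A's clock)
    If B's clock runs ahead of A's by an offset d, the A->B interval
    measures (tof + d) and the B->A interval measures (tof - d), so
    averaging the two yields the true time of flight.
    """
    tof = ((t_recv_b - t_send_a) + (t_recv_a - t_send_b)) / 2.0
    return tof * SPEED_OF_SOUND
```

With a true flight time of 10 ms and a 5 ms clock offset, the offset drops out and the computed range is simply 0.01 s x 343 m/s.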
Example 1: an environment sound simulation system based on a wireless intelligent earphone, referring to fig. 1, comprises a loudspeaker unit 2 arranged on the wireless earphone 1. The loudspeaker unit 2 receives the electric signal sent by the equipment, converts it into audio, plays it, and delivers the sound to the user. The loudspeaker unit 2 is not limited to the loudspeaker itself; it also includes other modules that support adjusting the volume, tone, range, and frequency, the actual adjustment capability depending on the requirements or on the earphone's existing components.
The system further comprises a sound wave unit 4 and sound wave detection units 3. The sound wave unit 4 must be able to emit ultrasonic positioning signals and is a unit module with a loudspeaker function; the sound wave detection unit 3 must be able to receive the direct-path ultrasonic positioning signal and is a unit module with a microphone function. The sound wave units 4 are mounted on the ceiling of the scene in a lattice pattern; the position of each sound wave unit 4 in the scene and their relative distances are known, and there are at least three of them. Here the sound wave detection unit 3 is the microphone module built into the wireless earphone 1, with one detection unit on each earpiece. In use, the sound wave units 4 emit ultrasonic positioning signals in real time, which are received by the detection units 3 on both sides. Since each sound wave unit 4 records the transmission time and each detection unit 3 records the reception time, the distance from a sound wave unit 4 to a detection unit 3 can be computed from the time of flight. Taking any three sound wave units 4 as sphere centers, with their distances to a detection unit 3 as radii, and eliminating the physically impossible intersection points, the specific position of the detection unit 3 in three-dimensional space is found.
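The sphere-intersection step can be sketched as standard linearized trilateration; the patent prescribes no particular solver, so the least-squares formulation below is an illustrative assumption. Note that three anchors leave a two-fold ambiguity in 3D; with the anchors mounted on a ceiling plane, the mirror solution above the ceiling is the "impossible intersection" to discard, and four or more non-coplanar anchors give a unique fix directly:

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate a receiver position from anchor positions and ranges.

    Each range gives a sphere |x - p_i|^2 = d_i^2; subtracting the first
    sphere equation from the others cancels the quadratic |x|^2 term,
    leaving a linear system solved here in least squares.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    p0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (d0**2 - d[1:]**2) + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

For example, with four non-coplanar anchors and exact ranges, the solver recovers the true position of the earpiece microphone.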
The system further comprises an environment simulation unit 6 and a processor 5. The environment simulation unit 6 simulates the sound source position and provides that position information to the processor 5. After receiving the data transmitted by the sound wave unit 4 and the sound wave detection unit 3, the processor 5 (which communicates with both) computes the user's position in three-dimensional space in real time. From the user's position and the simulated sound source position, the processor 5 determines, according to physical laws, what sound the user would hear at that position, and drives the connected loudspeaker unit 2 to emit a corresponding simulated audio.
An environment sound simulation method based on a wireless intelligent earphone comprises the following steps:
S1: after the user puts on the wireless earphone 1, the sound wave units 4 arranged on the ceiling transmit ultrasonic positioning signals. When the sound wave detection unit 3 on the wireless earphone 1 detects a direct-path signal, it transmits the collected data to the processor 5. The processor 5 calculates the relative distance between each sound wave unit 4 and the detection unit 3 from the timing information, computes the detection unit's exact position from these distances, and records it as the initial position, which is approximately the position of the user's ear.
S2: according to the relative position of the virtual sounding object and the user in virtual reality, the environment simulation unit 6 simulates the sound source at the position with the same relative position in the actual scene and transmits the simulated sound source position to the processor 5.
S3: the processor 5 determines the audio characteristics the user would hear in the actual scene from the relative position of the simulated sound source and the initial position, and controls the loudspeaker unit 2 to emit the simulated audio.
The properties adjusted depend on the actual situation and the earphone's components, and include volume, frequency, and so on.
At this point, together with visual equipment such as VR glasses, the user's dynamic changes and trajectory can be captured while walking through or interacting with the virtual visual environment: as the user approaches a virtual sounding object, the volume in the wireless earphone gradually increases, and as the user moves away from it, the volume gradually decreases, giving a more realistic experience and interaction.
Example 2: an environment sound simulation system based on a wireless intelligent earphone comprises a loudspeaker unit 2 arranged on the wireless earphone 1. The loudspeaker unit 2 receives the electric signal sent by the equipment, converts it into audio, plays it, and delivers the sound to the user. As in Example 1, the loudspeaker unit 2 is not limited to the loudspeaker itself but also includes modules that support adjusting the volume, tone, range, and frequency.
The system also comprises a sound wave unit 4 and sound wave detection units 3. The sound wave unit 4 must be able to emit ultrasonic positioning signals and is a unit module with a loudspeaker function; the sound wave detection unit 3 must be able to receive the direct-path ultrasonic positioning signal and is a unit module with a microphone function. The sound wave unit 4 is arranged on a smartphone. Here the sound wave detection unit 3 is the microphone module built into the wireless earphone 1, with one detection unit on each earpiece. In use, the sound wave unit 4 on the smartphone emits an ultrasonic positioning signal in real time, which is received by the detection units 3 on both sides. Since the sound wave unit 4 records the transmission time and each detection unit 3 records the reception time, the distance from the sound wave unit 4 to each detection unit 3 can be computed from the time of flight. From the two flight distances and the fixed distance between the two earpieces, the included angle and its changes can be solved.
The system further comprises an environment simulation unit 6 and a processor 5, arranged as in Example 1: the environment simulation unit 6 simulates the sound source position and provides it to the processor 5, which computes the user's position in three-dimensional space in real time from the data transmitted by the sound wave unit 4 and the sound wave detection unit 3, determines according to physical laws what sound the user would hear at that position, and drives the connected loudspeaker unit 2 to emit a corresponding simulated audio.
An environment sound simulation method based on a wireless intelligent earphone comprises the following steps:
S1: after the user puts on the wireless earphone 1, the single sound wave unit 4 arranged on the smartphone emits ultrasonic positioning signals. When the sound wave detection units 3 on the wireless earphones 1 detect the direct-path signal, they transmit the collected data to the processor 5. The processor 5 calculates the relative distance from the sound wave unit 4 to each detection unit 3 from the timing information, computes the included angle from the two relative distances, and records the two distances and the angle at that moment.
S2: the environment simulation unit 6 places the sound source at the smartphone, or simulates it at another position according to the user's requirement (input data), records the relative position of that position and the smartphone, and reports it to the processor 5.
S3: as the user wearing the wireless earphone 1 moves through the environment (movement that, given the use scene, can essentially be treated as planar), the relative distances between the two sound wave detection units 3 and the sound wave unit 4, and their included angle, change. The processor 5 judges whether each of the two detection units 3 is approaching or moving away from the simulated sound source, and controls the volume of the loudspeaker unit 2 accordingly.
When the distance increases, the volume of the loudspeaker unit 2 is reduced; when the distance decreases, the volume of the loudspeaker unit 2 is increased.
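A minimal sketch of this volume rule, assuming a free-field inverse-distance (1/r) amplitude law with clamping; the patent specifies only that volume falls as distance grows and rises as it shrinks, so the particular law, reference distance, and gain bounds here are illustrative assumptions:

```python
def simulated_gain(distance, ref_distance=1.0, min_gain=0.0, max_gain=1.0):
    """Playback gain for a simulated source at the given distance (m).

    Uses the free-field 1/r amplitude law relative to ref_distance,
    clamped to [min_gain, max_gain] so the gain never diverges as the
    listener walks into the virtual source.
    """
    if distance <= 0.0:
        return max_gain
    gain = ref_distance / distance
    return max(min_gain, min(max_gain, gain))
```

Evaluated per earpiece with its own source distance, this also reproduces the binaural effect described above: the nearer ear receives the louder signal.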
At this point the system not only simulates the effect of not wearing earphones, giving the user more control and enjoyment, but can also be used to find smart devices: the earphone finds the phone, or the phone finds the earphone, with the alert tone growing louder as they approach and quieter as they move apart.
With this scheme, the sound wave detection unit and the sound wave unit, mounted respectively on the wireless earphone and in the scene/on the device, transmit and receive sound wave positioning signals in real time to obtain the real-time position and dynamic changes of the wireless earphone in the scene. Combined with the real-time sound source position simulated by the environment simulation unit, the processor controls the loudspeaker to adjust the volume, frequency, range, and other properties of the played audio in real time, providing the user with simulated environmental sound that changes with their movement, enhancing human-computer interaction, and improving the user experience.
In addition, as the number of sound wave units and/or sound wave detection units increases, the positioning information progresses from relative distance and relative angle to absolute positioning, fully capturing the user's dynamic changes and trajectory in three-dimensional space; even when the user shakes the head left and right, the sound heard by the two ears differs, further improving the precision of the simulation system.
In addition, when only the earphone is worn, the sound source position is assigned by the environment simulation unit to a playback device (a smartphone, speaker, television, or other intelligent terminal) or to another virtual position, so the user can pair with any device to achieve an auditory environment simulation effect, improving applicability.
The above embodiments are merely illustrative of the technical ideas and features of the present invention, and the purpose thereof is to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All equivalent changes and modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.
Claims (10)
1. An ambient sound emulation system based on wireless intelligent headphones, comprising:
the loudspeaker unit (2) is configured on the wireless earphone (1) and used for receiving electric signals, converting the electric signals into audio and playing the audio;
the sound wave unit (4) is used for sending out a sound wave positioning signal;
the sound wave detection unit (3) is used for detecting direct sound wave positioning signals sent by the sound wave unit (4);
the processor (5) is communicated with the loudspeaker unit (2), the sound wave unit (4) and the sound wave detection unit (3), processes the position information collected by the sound wave unit (4) and the sound wave detection unit (3), and is used for controlling the loudspeaker unit (2);
the environment simulation unit (6) is communicated with the processor (5) and is used for simulating the sound source position and providing sound source position information for the processor (5);
the sound wave unit (4) or the sound wave detection unit (3) is configured on the wireless earphone (1).
2. A wireless smart headset-based ambient sound emulation system as defined in claim 1, wherein the sound wave unit (4) is a speaker unit (2) of the wireless headset (1) when the sound wave unit (4) is disposed on the wireless headset (1).
3. A wireless smart headset-based ambient sound emulation system as defined in claim 1 wherein each wireless headset (1) unit has at least one sound wave unit (4) when the sound wave units (4) are configured on the wireless headset (1).
4. A wireless smart headset-based ambient sound emulation system according to claim 1, wherein when the sound wave detecting unit (3) is disposed on the wireless headset (1), the sound wave detecting unit (3) is a microphone module of the wireless headset (1).
5. A wireless smart headset-based ambient sound emulation system as defined in claim 1 wherein each wireless headset (1) unit has at least one sound wave sensing unit (3) when the sound wave sensing unit (3) is configured on the wireless headset (1).
6. A wireless smart headset-based ambient sound emulation system as defined in claim 1 wherein the location information includes the relative distance and relative angle of the sound wave unit (4) from the sound wave detection unit (3).
7. The wireless smart headset-based ambient sound emulation system of claim 1, wherein the sonic locating signal is an ultrasonic locating signal.
8. An ambient sound simulation method based on the wireless smart headset system of any of claims 1-7, comprising the steps of:
after a user wears the wireless earphone (1), the sound wave unit (4) transmits a sound wave positioning signal, the sound wave detection unit (3) transmits data to the processor (5) after detecting the direct sound wave positioning signal, and the processor (5) calculates the relative position of the sound wave unit (4) and the sound wave detection unit (3) and determines the initial position of the wireless earphone (1);
the environment simulation unit (6) simulates the position of a sound source in a scene by matching the initial position of a user in an actual scene according to the relative position of a virtual sounding object and the user in virtual reality;
the processor (5) judges the audio which can be heard by the user in the actual scene according to the relative position of the sound source position and the initial position, and the processor (5) controls the loudspeaker unit (2) to emit the simulated audio.
9. The wireless smart headset-based ambient sound simulation method according to claim 8, wherein the sound wave detection units (3) are mounted on the wireless headsets (1), and the sound wave units (4) are distributed in an array across the ceiling of the scene.
10. The wireless smart headset-based ambient sound simulation method according to claim 8, wherein, when the wireless headset (1) is used alone, the position of the sound source simulated by the environment simulation unit (6) is a freely settable item.
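Claims 7-9 together describe an acoustic positioning and rendering pipeline: the time of flight of an ultrasonic positioning signal gives a range, ranges to several ceiling-mounted emitters at known positions locate the headset, and the processor then renders audio appropriate to that position. A minimal sketch of that pipeline follows; the function names, the 2-D horizontal-plane layout, and the crude gain/pan rendering model are illustrative assumptions, not the patent's implementation:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def range_from_tof(time_of_flight_s):
    """Range from the time of flight of an ultrasonic positioning signal (claim 7)."""
    return SPEED_OF_SOUND * time_of_flight_s

def trilaterate_2d(anchors, ranges):
    """Position from three emitters with known coordinates and measured ranges
    (2-D sketch of the ceiling array of claim 9).

    Subtracting the first anchor's range equation from the other two yields a
    2x2 linear system in the unknown position, solved here by Cramer's rule.
    """
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = ranges
    # 2 * (a_i - a_0) . p  =  |a_i|^2 - |a_0|^2 - d_i^2 + d_0^2
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = x1**2 + y1**2 - x0**2 - y0**2 - d1**2 + d0**2
    b2 = x2**2 + y2**2 - x0**2 - y0**2 - d2**2 + d0**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("emitters must not be collinear")
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

def render_stereo(source_xy, listener_xy):
    """Coarse stand-in for the simulated audio of claim 8: inverse-distance
    gain plus a linear left/right pan derived from the source azimuth."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    dist = math.hypot(dx, dy)
    gain = 1.0 / max(dist, 0.1)          # attenuate with distance, clamped near 0
    pan = math.atan2(dy, dx) / math.pi   # -1 (left) .. +1 (right), very crude
    return gain * (1.0 - pan) / 2.0, gain * (1.0 + pan) / 2.0
```

A production system would use many more emitters, solve in 3-D by least squares, and replace the gain/pan model with HRTF-based binaural rendering, but the data flow is the same as in the claimed method.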
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110501438.8A CN113163293A (en) | 2021-05-08 | 2021-05-08 | Environment sound simulation system and method based on wireless intelligent earphone |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113163293A true CN113163293A (en) | 2021-07-23 |
Family
ID=76873998
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110501438.8A Pending CN113163293A (en) | 2021-05-08 | 2021-05-08 | Environment sound simulation system and method based on wireless intelligent earphone |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113163293A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023087629A1 (en) * | 2021-11-19 | 2023-05-25 | 北京小米移动软件有限公司 | Device control method and apparatus, device, and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105101027A (en) * | 2014-05-08 | 2015-11-25 | 大北公司 | Real-time Control Of An Acoustic Environment |
CN109240496A (en) * | 2018-08-24 | 2019-01-18 | 中国传媒大学 | A kind of acousto-optic interactive system based on virtual reality |
CN112104928A (en) * | 2020-05-13 | 2020-12-18 | 苏州触达信息技术有限公司 | Intelligent sound box and method and system for controlling intelligent sound box |
CN112098942A (en) * | 2020-02-24 | 2020-12-18 | 苏州触达信息技术有限公司 | Intelligent device positioning method and intelligent device |
CN112612445A (en) * | 2020-12-28 | 2021-04-06 | 维沃移动通信有限公司 | Audio playing method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11042221B2 (en) | Methods, devices, and systems for displaying a user interface on a user and detecting touch gestures | |
CN107103801B (en) | Remote three-dimensional scene interactive teaching system and control method | |
US8160265B2 (en) | Method and apparatus for enhancing the generation of three-dimensional sound in headphone devices | |
US11715451B2 (en) | Acoustic devices | |
KR100436362B1 (en) | METHOD AND APPARATUS FOR CREATING A SPATIAL AUDIO ENVIRONMENT IN A VOICE CONFERENCE DEVICE | |
EP3253078B1 (en) | Wearable electronic device and virtual reality system | |
CN106303836B (en) | A kind of method and device adjusting played in stereo | |
US10257637B2 (en) | Shoulder-mounted robotic speakers | |
CN106664488A (en) | Driving parametric speakers as a function of tracked user location | |
US11467670B2 (en) | Methods, devices, and systems for displaying a user interface on a user and detecting touch gestures | |
CN106464995A (en) | Stand-alone multifunctional headphones for sports activities | |
CN106464996A (en) | Multifunctional headphone system for sports activities | |
CN111917489B (en) | Audio signal processing method and device and electronic equipment | |
CN105101027A (en) | Real-time Control Of An Acoustic Environment | |
CN102640517A (en) | Self steering directional loud speakers and a method of operation thereof | |
CN105997448B (en) | Frequency domain projection ultrasonic echo positioning navigation instrument | |
CN205584434U (en) | Smart headset | |
WO2017128481A1 (en) | Method of controlling bone conduction headphone, device and bone conduction headphone apparatus | |
CN114727212B (en) | Audio processing method and electronic equipment | |
WO2018048567A1 (en) | Assisted near-distance communication using binaural cues | |
US11991499B2 (en) | Hearing aid system comprising a database of acoustic transfer functions | |
CN113163293A (en) | Environment sound simulation system and method based on wireless intelligent earphone | |
US20240078991A1 (en) | Acoustic devices and methods for determining transfer functions thereof | |
CN106303787A (en) | The earphone system of reduction sound bearing true to nature | |
CN109088980A (en) | Sounding control method, device, electronic device and computer-readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20210723 |
|
RJ01 | Rejection of invention patent application after publication |