CN113473305A - Audio processing device - Google Patents

Audio processing device

Info

Publication number
CN113473305A
CN113473305A (application CN202010244320.7A)
Authority
CN
China
Prior art keywords
audio
housing
shell
processing apparatus
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010244320.7A
Other languages
Chinese (zh)
Inventor
王英剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010244320.7A
Publication of CN113473305A
Legal status: Pending (current)

Classifications

    • H ELECTRICITY › H04 ELECTRIC COMMUNICATION TECHNIQUE › H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 1/323 — Details of transducers, loudspeakers or microphones; arrangements for obtaining desired frequency or directional characteristics; for obtaining desired directional characteristic only, for loudspeakers
    • H04R 1/326 — Details of transducers, loudspeakers or microphones; arrangements for obtaining desired frequency or directional characteristics; for obtaining desired directional characteristic only, for microphones
    • G PHYSICS › G10 MUSICAL INSTRUMENTS; ACOUSTICS › G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/20 — Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
    • G10L 15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 — Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Otolaryngology (AREA)
  • Signal Processing (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)
  • Headphones And Earphones (AREA)

Abstract

The application discloses an audio processing device, comprising: a first housing, in which a speaker unit and a first audio capture module adjacent to the speaker unit are arranged; a second housing connected to the first housing; and a second audio capture module disposed on an inner surface of the second housing, adjacent to an end of the second housing. By arranging an audio capture module near the speaker unit inside the first housing that holds the speaker unit, and a second audio capture module on the inner surface of the second housing connected to the first housing, environmental noise and the user's voice audio can be captured from different positions, so the influence and interference of noise on the voice audio can be eliminated on the basis of the mixed audio with different audio components acquired by the first and second audio capture modules, without a complex speech recognition algorithm.

Description

Audio processing device
Technical Field
The present application relates to the field of audio processing technologies, and in particular, to an audio processing apparatus.
Background
In recent voice control technology, a user's voice instruction is collected through the microphone of an earphone, and the audio signal of the received instruction is processed by a processor in the earphone or in a terminal connected to the earphone. However, the microphone of the earphone or the terminal sits at some distance from the mouth that issues the voice instruction. In particular, current wireless earphones are increasingly small, so the earphone body sits near the ear, and the audio projected forward from the mouth has to take an indirect path before it is received by the microphone of the earphone or the terminal; as a result, a large amount of environmental sound, especially environmental noise, is inevitably mixed into the audio data the microphone receives.
Audio data containing so much environmental noise makes the subsequent audio recognition process much harder. In the prior art, the noise is usually identified in the received mixed audio data by a software algorithm, but because environmental noise is complex, the accuracy of such recognition schemes is low.
Disclosure of Invention
The embodiments of the present application provide an audio processing device that improves the accuracy with which voice data is recognized in captured audio under the influence of noise.
To achieve the above object, an embodiment of the present application provides an audio processing apparatus, including:
a first housing, in which a speaker unit and a first audio capture module adjacent to the speaker unit are arranged;
a second housing connected with the first housing;
a second audio capture module disposed on an inner surface of the second housing and adjacent an end of the second housing.
In the audio processing device provided by the embodiments of the present application, an audio capture module is arranged near the speaker unit inside the first housing that holds the speaker unit, and a second audio capture module is arranged on the inner surface of the second housing connected to the first housing. Environmental noise and the user's voice audio can therefore be captured from different positions, and the influence and interference of noise on the voice audio can be eliminated on the basis of the mixed audio with different audio components acquired by the first and second audio capture modules, without a complex speech recognition algorithm.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the present application clearer, so that it can be implemented according to the content of this description, and to make the above and other objects, features, and advantages of the present application easier to understand, the detailed description of the present application is given below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic view of an application scenario of an audio processing apparatus according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an audio processing apparatus provided herein;
fig. 3 is an exploded schematic view of an audio processing apparatus provided in the present application.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As the pace of life increases, people's daily time becomes fragmented, and people are increasingly used to doing several things at once to make full use of that fragmented time. For example, office workers may prefer to listen to music or educational audio through earphones, or watch video on a mobile phone screen, while commuting. For users with long commutes in particular, considerable time is spent on the road, so they may often need to use the earphones of a mobile terminal to make calls along the way. However, when listening to audio or making a call with earphones outdoors or on public transport such as a bus or subway, there is usually continuous ambient noise, such as the conversation of surrounding passengers, the running noise of the bus or subway, or wind noise while moving. Such noise seriously interferes with the user's listening experience. During a call in particular, the audio emitted from the user's mouth must travel through the air from the mouth to the earphone's microphone, so the signal received by the microphone is a mixture of the user's voice and noise audio; the quality of the user's voice collected by the earphone deteriorates severely, and the other party on the call may be unable to make out what the speaker is saying.
With the development of voice control technology in recent years in particular, besides using the earphone microphone during a call, a user can issue voice instructions: the instruction is collected by the earphone's microphone and sent to the mobile terminal connected to the earphone, which then performs the operation the instruction represents. In this case, the mobile terminal executing the voice instruction needs the audio captured by the earphone's microphone to be of high enough quality, i.e. sufficiently clear. However, as described above, the earphone's microphone sits at a distance from the user's mouth. Recent wireless earphones in particular are favored by many users for their small size and portability, but such earphones sit even farther from the mouth, as the earphone body is usually near the ear, so the microphone easily picks up more environmental noise, which seriously affects calls and voice commands made through such earphones. Moreover, issuing a voice instruction through the earphone often requires speaking a specific wake-up phrase first, so under the influence of environmental noise it is even harder to guarantee the accuracy with which the mobile terminal or the earphone recognizes that specific audio.
Fig. 1 is a schematic view of an application scenario of an audio processing device according to an embodiment of the present application. As shown in fig. 1, a user may use the earphone in various outdoor situations such as riding in a vehicle or running. When the user wants to give a voice instruction to the earphone or to the terminal it is connected to, the user can simply speak the desired instruction, such as "Hello, Tmall Genie", while wearing the earphone. The voice instruction leaves the user's mouth as sound-wave audio and propagates through the air. Because the user is outdoors, various environmental sounds may be present. As shown in fig. 1, there may be wind, automobile engine noise, the rail noise of a subway, and so on around the user; the user's sound waves are therefore superimposed and mixed with these external sounds, i.e. the sound waves of the ambient sound, on their way to the earphone's microphone, and a mixed audio signal is received by the microphone. The audio received by the microphone can then be decoded and recognized by the earphone or by a terminal connected to it, so as to obtain the user's audio data, i.e. the voice instruction, contained in it.
However, with environmental noise mixed in, it is very difficult to recognize the voice audio in the mixed audio data by a software algorithm alone. In the embodiment of the present application shown in fig. 2 and 3, the audio processing device includes a plurality of audio capture modules, which may be arranged near the speaker unit and at an end position of the housing, respectively. In a scenario such as fig. 1, the speaker portion of the audio processing device, used as an earphone, is typically inserted into the user's ear canal or pressed closely against the ear, so the audio capture module near the speaker can directly pick up signals conducted through the bones and muscles near the ear, while the module at the end of the housing picks up voice audio conducted through the air together with ambient noise. Audio with different components can thus be collected at different positions and processed jointly. For example, the module next to the user's ear sits in the ear canal or against the ear, so the audio it captures contains little environmental noise, but the user's voice it picks up is lower in intensity and frequency. By contrast, the microphone at the end of the earphone housing, relatively far from the user's mouth, captures voice audio of higher intensity, but the ambient noise mixed into it is also stronger. Capture data with such different audio components can be processed in a processor to obtain the user's voice audio.
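As a rough illustration of how two capture channels with different noise content can be combined — a minimal sketch under stated assumptions, not the processing claimed in this application — the air-conduction channel can be attenuated in the time-frequency bins where the near-speaker (bone-conduction) channel shows little energy. The channel names, sample rate, and mask parameters below are illustrative assumptions.

import numpy as np
from scipy.signal import stft, istft

def dual_mic_enhance(inner, outer, fs=16000, nperseg=512):
    # inner: near-speaker / bone-conduction channel (low noise, low bandwidth)
    # outer: end-of-housing / air-conduction channel (full voice + ambient noise)
    _, _, Zi = stft(inner, fs=fs, nperseg=nperseg)
    _, _, Zo = stft(outer, fs=fs, nperseg=nperseg)
    eps = 1e-10
    # Bins where the inner channel carries energy are treated as voice-dominated;
    # bins it barely sees are treated as ambient noise and attenuated.
    ratio = np.abs(Zi) / (np.abs(Zo) + eps)
    mask = np.clip(ratio / (ratio.max() + eps), 0.05, 1.0)
    _, enhanced = istft(Zo * mask, fs=fs, nperseg=nperseg)
    return enhanced

# Example with synthetic stand-ins for the two capture modules:
fs = 16000
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 220 * t)            # stand-in for the user's speech
noise = 0.5 * np.random.randn(t.size)          # stand-in for ambient noise
inner = 0.3 * voice                            # attenuated, nearly noise-free
outer = voice + noise                          # strong voice, strong noise
print(dual_mic_enhance(inner, outer, fs).shape)

The combination could equally run in a processor inside the earphone or in the connected terminal; the point is only that the inner channel supplies a noise-poor reference that the outer, noise-rich channel lacks.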
For example, consider a user who delivers goods while using the audio processing device of the present application: a takeout delivery rider drives a two-wheeled vehicle such as a bicycle or motorcycle for long periods, and both hands must stay on the handlebars to control the direction of travel. In this situation, if the rider wants to handle an order or call a recipient while on the road, the rider either has to stop the vehicle to operate the mobile terminal, which seriously reduces delivery efficiency, or operate the terminal with one hand while riding, which endangers driving safety. The audio processing device of the embodiments of the present application can instead help the rider operate the mobile terminal by voice, preserving both delivery efficiency and riding safety. For example, the rider may issue a voice wake-up instruction, such as a specified phrase like "Hello, Genie", to wake up the mobile terminal. The instruction is conducted through the bones and muscles near the ear to the audio capture module near the speaker unit, and is also transmitted through the air, together with environmental noise, to the second audio capture module on the surface of the housing. The module near the speaker unit can thus recognize that the rider has spoken the wake-up instruction, and the environmental noise contained in the mixed audio captured by the second audio capture module, such as wind noise, can be removed with the aid of the audio captured near the speaker unit. A wake-up instruction of sufficient audio intensity is then sent to the mobile terminal, or passed directly to a processing module built into the audio processing device, to perform the wake-up operation. A recognized voice command given by the rider can be passed on to the mobile terminal or the built-in processing module in a similar way to execute the specific instruction.
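To make the gating described above concrete — again only a hedged sketch, since the application describes the behavior rather than an algorithm — the near-speaker channel can act as a speech gate: only while it shows speech-like energy is the outer-channel audio handed to a keyword detector or forwarded to the terminal. The detect_keyword hook and the energy threshold are hypothetical placeholders, not part of any real API.

import numpy as np

def frame_energy(x, frame=512):
    # Mean energy per non-overlapping frame.
    n = len(x) // frame
    return np.array([np.mean(x[i * frame:(i + 1) * frame] ** 2) for i in range(n)])

def gated_wake(inner, outer, detect_keyword, energy_thresh=1e-4):
    # Forward the outer (air-conduction) audio to the keyword detector only
    # while the inner (bone-conduction) channel shows the wearer speaking.
    if not (frame_energy(inner) > energy_thresh).any():
        return False          # wearer silent: ambient noise alone cannot trigger wake-up
    return bool(detect_keyword(outer))   # hypothetical detector on the noise-reduced channel

A false wake-up caused purely by surrounding speech or wind is thus blocked at the gate, because those sounds barely reach the bone-conduction path.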
Thus, by arranging an audio capture module near the speaker unit inside the first housing that holds the speaker unit, and a second audio capture module on the inner surface of the second housing connected to the first housing, environmental noise and the user's voice audio can be collected from different positions, and the influence and interference of noise on the voice audio can be eliminated on the basis of the mixed audio with different audio components acquired by the first and second audio capture modules, without a complicated speech recognition algorithm.
The above is an illustration of the technical principles and an exemplary application framework of the embodiments of the present application; specific technical solutions of the embodiments are described in further detail below through several embodiments.
Fig. 2 is a schematic diagram of the audio processing device provided in the present application, and fig. 3 is an exploded schematic view of that device.
as shown in fig. 2 and 3, the audio processing device of the present application may be implemented as various headphones, in-ear headphones, or other various audio processing devices having speakers and microphones. A neck-hung earphone is shown in fig. 2 and 3 as an example, but it is easily understood that the solution of the present application can also be applied to other devices having a microphone and a speaker to improve recognition accuracy for user voice data.
As shown in fig. 2 and 3, the audio processing apparatus 1 of the present application may include: a first housing 11, a second housing 12, a first audio capture module 15, a second audio capture module 13, and a speaker unit 14.
The speaker unit 14 and the first audio capture module 15 may be disposed within the first housing 11. In the embodiment of the present application, the audio processing device 1 may be a neck-hung earphone, and the first housing 11 may be the housing of the earphone portion inserted into the user's ear. The speaker unit 14 may be a balanced-armature (moving-iron) unit or another sound-generating unit. The first audio capture module 15 may be placed within the first housing 11 as a built-in microphone of the audio processing device 1, adjacent to the speaker unit 14. When the user uses the audio processing device 1 of the embodiment of the present application, the earphone portion sits in the ear canal, so the first audio capture module 15 enters the user's ear canal together with the first housing 11; when the user issues a voice command, it can therefore collect audio data conducted through the bones and/or muscles near the user's ear.
In other embodiments of the present application, the audio processing device 1 may take other earphone forms, such as an over-ear headset or an ear-hook earphone. In that case, because the first audio capture module 15 is close to the speaker unit 14, it is still close to the user's ear when the device is worn, so when the user issues a voice command it can still capture audio data conducted through the bones and/or muscles near the user's ear.
Further, the audio processing device 1 may include two earphone portions. For example, as shown in fig. 2 and 3, the audio processing device 1 may include two first housings 11, one on each side, each containing a speaker unit 14 and a first audio capture module 15. Of course, in the embodiment of the present application, the first audio capture module 15 may also be provided in the first housing 11 on one side only; or the speaker unit 14 may be provided only in the first housing 11 on one side, with only a first audio capture module 15 in the first housing 11 on the other side.
In the case of the neck-hung earphone shown in fig. 2 and 3, the second housing 12 is connected to the first housing 11 by a first connecting member 18. For example, the first connecting member 18 may be a cord-like structure whose two ends are attached to connecting holes on the surfaces of the second housing 12 and the first housing 11; a data line may be run inside the cord, so that the cord serves both as the connecting member and as the data transmission member. In the case where the audio processing device 1 is an in-ear earphone, the second housing 12 may be joined directly to the first housing 11.
Further, as shown in fig. 2 and 3, the audio processing device 1 may include two second housings 12, which may be connected by a connecting portion 17. The connecting portion 17 may be hollow and tubular, so that it can accommodate the power and data lines that supply and connect the second audio capture module 13 in each second housing 12 as well as the speaker unit 14 and first audio capture module 15 in the first housing 11 connected to that second housing. A power supply component of the audio processing device 1, such as a battery, may also be accommodated in the connecting portion 17.
The second housing 12 may contain a second audio capture module 13, which may be disposed on an inner surface of the second housing 12, at an end of the second housing 12, as shown in fig. 2 and 3. Thus, when the user wears the audio processing device 1, the first housing 11 sits in the ear canal or against the ear, while the second housing 12 stays outside the ear, so that the first audio capture module 15 captures audio conducted through the user's bones and muscles and the second audio capture module 13 captures audio conducted through the air.
Furthermore, the second audio capture module 13 may include a plurality of audio capture units facing different directions. For example, as shown in fig. 2 and 3, the second audio capture module 13 may include a first audio capture unit 131 and a second audio capture unit 132 arranged one behind the other along the length direction of the second housing 12. The first audio capture unit 131 may be placed in front of the second audio capture unit 132, i.e. closer to the end of the second housing 12, while the second audio capture unit 132 sits closer to the junction of the second housing 12 and the connecting portion 17. In this arrangement the first audio capture unit 131 may have its audio capture direction toward the end of the second housing 12, i.e. toward the front, so that it can be used to capture the user's voice audio together with ambient noise, while the second audio capture unit 132 may have its capture direction facing somewhere other than the front; in other words, the second audio capture unit 132 may be arranged to capture ambient noise. For example, the second audio capture unit 132 may face the top of the second housing 12 so as to collect the ambient noise around the user's head. Because the first audio capture unit 131 is located in front of the second audio capture unit 132 and also faces forward, it captures relatively more of the voice audio from the user's mouth, while the second audio capture unit 132 captures relatively more ambient noise.
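A common way to exploit such a voice-facing/noise-facing pair is spectral subtraction: estimate the noise magnitude spectrum from the noise-facing unit and subtract it from the voice-facing unit's spectrum. This is offered only as a hedged sketch, since the application does not specify any particular processing; the channel roles, over-subtraction factor, and spectral floor below are assumptions.

import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(voice_ch, noise_ch, fs=16000, nperseg=512, over_sub=1.5):
    # voice_ch: front-facing unit (voice + noise); noise_ch: top-facing unit (mostly noise).
    _, _, Zv = stft(voice_ch, fs=fs, nperseg=nperseg)
    _, _, Zn = stft(noise_ch, fs=fs, nperseg=nperseg)
    # Average noise magnitude per frequency bin, estimated from the noise-facing unit.
    noise_mag = np.mean(np.abs(Zn), axis=1, keepdims=True)
    # Over-subtract the noise estimate, keep a small spectral floor,
    # and reuse the voice channel's phase for reconstruction.
    clean_mag = np.maximum(np.abs(Zv) - over_sub * noise_mag, 0.05 * np.abs(Zv))
    _, clean = istft(clean_mag * np.exp(1j * np.angle(Zv)), fs=fs, nperseg=nperseg)
    return clean

Whether such processing runs in the earphone's own processing module or in the connected terminal is a design choice the application leaves open; either way it needs only the two capture channels described above.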
Further, in the present embodiment, as shown in fig. 2 and 3, when the audio processing device 1 is a neck-hung earphone, the second housing 12 may be fixed above the first housing 11, so that the second audio capture module 13 sits above the first audio capture module 15. With this structure, the second audio capture module 13 is closer to the user's mouth and can better capture the user's voice audio conducted through the air.
In addition, in the embodiment of the present application, openings may be provided in the first housing 11 and the second housing 12 at the positions corresponding to the first audio capture module 15 and the second audio capture module 13 to let sound in. The openings may be grid-shaped, so that the first audio capture module 15 and the second audio capture module 13 are protected without obstructing the incoming sound waves.
In the embodiment of the present application, the second housing 12 may further include two side plates 122 and 123, a top plate 121, and a partition plate 124 between the two side plates 122 and 123. The partition 124 reinforces the second housing 12, and the second audio capture module 13 may be mounted on it: for example, the first audio capture unit 131 and the second audio capture unit 132 may be disposed on the partition 124, which then serves as a supporting seat for both units.
For example, the second audio capture module 13 may be disposed on the side of the partition 124 facing the top plate 121. Where the second audio capture module 13 includes the first audio capture unit 131 and the second audio capture unit 132, the first audio capture unit 131 may be disposed on the side of the partition 124 facing the top plate 121, and the second audio capture unit 132 on the side of the partition 124 facing the junction of the two side plates 122 and 123, i.e. facing the outside of the second housing 12.
In addition, according to the embodiment of the present application, the audio processing device may further include a voice wake-up function switch component, which can enable or disable the voice wake-up function performed by the user through the audio processing device. When the voice wake-up function is enabled, the user can perform voice wake-up through the first audio capture module 15 and the second audio capture module 13: the two modules collect the voice instruction spoken by the user, and the environmental noise and the user's voice audio collected at their two different positions are used to recognize the instruction accurately, after which the result is sent to the mobile terminal connected to the audio processing device to wake it up, or used to wake up other function modules of the audio processing device itself. When the voice wake-up function is turned off, the user's specific voice instruction may simply not be collected, or, even when it is collected and recognized, no wake-up operation is performed and nothing is sent to the mobile terminal; this avoids false triggering in scenarios where voice wake-up is not desired.
In addition, according to the embodiment of the present application, the voice wake-up function switch component may further include a scene recognition unit, so that the scene the user is in can be determined from the environmental audio collected by the second audio capture module, and the voice wake-up function can be turned on or off automatically according to the recognition result.
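As a hedged illustration of such scene-based switching — the application does not specify the classifier, so the features, thresholds, and scene labels below are purely assumptions — ambient frames from the second audio capture module could be classified by simple level and spectral statistics and used, together with the manual switch, to gate the wake-up function.

import numpy as np

def classify_scene(ambient_frame, fs=16000):
    # Crude scene guess from one frame of ambient audio captured by the second module.
    rms = np.sqrt(np.mean(ambient_frame ** 2) + 1e-12)
    spectrum = np.abs(np.fft.rfft(ambient_frame))
    freqs = np.fft.rfftfreq(ambient_frame.size, 1.0 / fs)
    centroid = np.sum(freqs * spectrum) / (spectrum.sum() + 1e-12)
    if rms < 0.01:
        return "quiet"        # e.g. home or office
    if centroid < 500:
        return "vehicle"      # low-frequency rumble: bus, subway, engine
    return "street"           # broadband outdoor noise, wind

def wake_function_enabled(scene, user_switch_on=True):
    # Combine the manual voice wake-up switch with the recognized scene.
    if not user_switch_on:
        return False
    # Assumption: suppress wake-up in very noisy street scenes to avoid false triggers.
    return scene in ("quiet", "vehicle")

Which scenes enable the function is a policy choice rather than something the application fixes; the same gate could equally be inverted, for example enabling wake-up only outdoors.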
In the audio processing device provided by the embodiments of the present application, an audio capture module is arranged near the speaker unit inside the first housing that holds the speaker unit, and a second audio capture module is arranged on the inner surface of the second housing connected to the first housing. Environmental noise and the user's voice audio can therefore be captured from different positions, and the influence and interference of noise on the voice audio can be eliminated on the basis of the mixed audio with different audio components acquired by the first and second audio capture modules, without a complex speech recognition algorithm.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, without thereby departing from the scope of the technical solutions of the embodiments of the present application.

Claims (12)

1. An audio processing apparatus comprising:
a first housing, in which a speaker unit and a first audio capture module adjacent to the speaker unit are arranged;
a second housing connected with the first housing;
a second audio capture module disposed on an inner surface of the second housing and adjacent an end of the second housing.
2. The audio processing apparatus according to claim 1, further comprising:
a connecting portion, the two ends of which are respectively connected to the second housing, at least one of the two ends being connected to the first housing.
3. The audio processing apparatus according to claim 1, wherein the second audio capturing module comprises a first audio capturing unit and a second audio capturing unit, and
the first audio capturing unit is disposed in front of the second audio capturing unit in a length direction of the second housing and has an audio capturing direction toward the front.
4. The audio processing apparatus according to claim 3, wherein the second audio capturing unit has an audio capturing direction toward a top of the second housing.
5. The audio processing apparatus according to claim 3, wherein the second audio capturing module is disposed above the first audio capturing module.
6. The audio processing apparatus according to claim 3, wherein the first audio capturing module is disposed between the first audio capturing unit and the second audio capturing unit.
7. The audio processing apparatus according to claim 1, wherein the first audio capturing module is provided on a side of the audio processing apparatus close to a user, and the speaker unit is provided on a side far from the user.
8. The audio processing apparatus according to claim 1, wherein openings are provided on surfaces of the first and second housings at positions corresponding to the first and second audio capture modules.
9. The audio processing device according to claim 1, wherein the second housing comprises two side plates and a top plate and a partition plate between the two side plates.
10. The audio processing apparatus according to claim 9, wherein the second audio capture module is disposed on a side of the partition facing the top panel.
11. The audio processing apparatus according to claim 9, wherein the second audio capturing module includes a first audio capturing unit and a second audio capturing unit, and the first audio capturing unit is provided on a side of the partition plate facing the top plate, and the second audio capturing unit is provided on a side of the partition plate facing a joint portion of the two side plates.
12. The audio processing apparatus according to claim 1, further comprising: a first connecting member, two ends of which are respectively fixed in connecting holes on the first housing and the second housing.
CN202010244320.7A 2020-03-31 2020-03-31 Audio processing device Pending CN113473305A (en)

Priority Applications (1)

Application Number: CN202010244320.7A · Priority date: 2020-03-31 · Filing date: 2020-03-31 · Title: Audio processing device · Publication: CN113473305A (en)

Applications Claiming Priority (1)

Application Number: CN202010244320.7A · Priority date: 2020-03-31 · Filing date: 2020-03-31 · Title: Audio processing device · Publication: CN113473305A (en)

Publications (1)

Publication Number: CN113473305A · Publication Date: 2021-10-01

Family

ID=77865474

Family Applications (1)

Application Number: CN202010244320.7A · Title: Audio processing device · Status: Pending · Publication: CN113473305A (en)

Country Status (1)

Country Link
CN (1) CN113473305A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination