CN115175045A - Wearable audio device and audio processing method - Google Patents

Wearable audio device and audio processing method

Info

Publication number
CN115175045A
CN115175045A (application CN202210770274.3A)
Authority
CN
China
Prior art keywords
audio
user
distance
ear
speaker
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210770274.3A
Other languages
Chinese (zh)
Inventor
吴叶富 (Wu Yefu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anker Innovations Co Ltd
Original Assignee
Anker Innovations Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anker Innovations Co Ltd
Priority to CN202210770274.3A
Publication of CN115175045A
Legal status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 - Details of transducers, loudspeakers or microphones
    • H04R1/10 - Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1091 - Details not provided for in groups H04R1/1008 - H04R1/1083
    • H04R2430/00 - Signal processing covered by H04R, not provided for in its groups

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)

Abstract

A wearable audio device and an audio processing method. The wearable audio device includes a processor, a distance sensor, an infrared sensor, and a speaker. The processor is configured to determine, based on the output data of the distance sensor, how tightly the user is wearing the wearable audio device; the processor is further configured to determine the distance between the speaker and the user's ear based on that tightness state and the output data of the infrared sensor; and the processor is further configured to perform audio calculation and audio output based on the distance. The wearable audio device can thus adapt its audio processing to different scenarios, so that the user's sound-quality requirements are met in each of them; moreover, because the audio is processed adaptively by fusing data from multiple sensors, scene detection is highly accurate and the audio output is highly adaptive.

Description

Wearable audio device and audio processing method
Technical Field
The present application relates to the field of audio processing technologies, and in particular, to a wearable audio device and an audio processing method.
Background
Wireless earphones are now a common wearable audio device in daily life. Wireless audio smart glasses are another: compared with wireless earphones, they let the user stay aware of the surroundings while listening to audio and are gentler on the ears.
Existing wearable audio devices such as wireless earphones and wireless audio smart glasses output the same fixed audio in every scenario. They cannot adapt their output to conditions such as different head shapes, different ear canals, different wearing positions, or user motion, and therefore cannot guarantee that the sound-quality requirements of each scenario are met.
Disclosure of Invention
The present application provides a wearable audio device and an audio processing method, aiming to solve the problem that existing wearable audio devices such as wireless earphones and wireless audio smart glasses cannot meet the sound-quality requirements of different scenarios.
According to an aspect of the present application, there is provided a wearable audio device including a processor, a distance sensor, an infrared sensor, and a speaker, wherein: the processor is configured to determine, based on the output data of the distance sensor, how tightly the user is wearing the wearable audio device; the processor is further configured to determine the distance between the speaker and the user's ear based on the tightness state and the output data of the infrared sensor; and the processor is further configured to perform audio calculation and audio output based on the distance, such that the speaker outputs the best sound quality at that distance.
In one embodiment of the present application, the wearable audio device further comprises an acceleration sensor, and the processor is further configured to: and determining whether the user is in a motion state based on the output data of the acceleration sensor, and performing audio calculation and audio output based on the distance when the user is determined to be in a non-motion state.
In one embodiment of the present application, the processor is further configured to: when the user is determined to be in the motion state, a preset audio curve is obtained, the preset audio curve is adjusted based on the distance between the loudspeaker and the ear of the user, and audio calculation and audio output are carried out based on the adjusted audio curve.
In one embodiment of the present application, the processor adjusting the preset audio curve based on a distance between the speaker and the ear of the user comprises: adjusting the amplitude of a preset frequency band or a preset frequency point in the preset audio curve based on the distance between the loudspeaker and the ear of the user.
In one embodiment of the present application, the processor adjusts the preset audio curve based on a distance between the speaker and the ear of the user, including: determining a frequency band or a frequency point to be adjusted based on a distance between the speaker and the user's ear, and adjusting an amplitude of the frequency band or the frequency point in the preset audio curve based on the distance between the speaker and the user's ear.
According to another aspect of the present application, there is provided an audio processing method, the method including: determining a tightness state of the wearable audio device worn by a user based on output data of a distance sensor in the wearable audio device; determining a distance between a speaker of the wearable audio device and an ear of the user based on the tightness state and output data of an infrared sensor in the wearable audio device; and performing audio calculation and audio output based on the distance.
In one embodiment of the present application, the method further comprises: and determining whether the user is in a motion state based on output data of an acceleration sensor in the wearable audio device, and performing audio calculation and audio output based on the distance when the user is determined to be in a non-motion state.
In one embodiment of the present application, the method further comprises: when the user is determined to be in the motion state, a preset audio curve is obtained, the preset audio curve is adjusted based on the distance between the loudspeaker and the ear of the user, and audio calculation and audio output are carried out based on the adjusted audio curve.
In one embodiment of the present application, the adjusting the preset audio curve based on the distance between the speaker and the ear of the user includes: adjusting the amplitude of a preset frequency band or a preset frequency point in the preset audio curve based on the distance between the loudspeaker and the ear of the user.
In an embodiment of the application, the adjusting the preset audio curve based on the distance between the speaker and the ear of the user includes: determining a frequency band or a frequency point to be adjusted based on a distance between the speaker and the ear of the user, and adjusting an amplitude of the frequency band or the frequency point in the preset audio curve based on the distance between the speaker and the ear of the user.
The wearable audio device and audio processing method of the present application can adapt audio processing to different scenarios, so that the user's sound-quality requirements are met in each of them; moreover, because the audio is processed adaptively by fusing data from multiple sensors, scene detection is highly accurate and the audio output is highly adaptive.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally indicate like parts or steps.
Fig. 1 shows a schematic block diagram of a wearable audio device according to an embodiment of the present application.
Fig. 2 shows a schematic block diagram of a wearable audio device according to another embodiment of the present application.
Fig. 3 shows an exemplary workflow diagram of a wearable audio device according to an embodiment of the present application.
Fig. 4 shows a schematic diagram of communication interaction between a processor and a sensor in the wearable audio device according to the embodiment of the application.
Fig. 5 shows a schematic flow diagram of an audio processing method according to an embodiment of the application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, exemplary embodiments according to the present application will be described in detail below with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the application described in the application without inventive step, shall fall within the scope of protection of the application.
Fig. 1 shows an exemplary structural diagram of a wearable audio device 100 according to an embodiment of the present application. As shown in Fig. 1, the wearable audio device 100 includes a processor 120, a distance sensor 130, an infrared sensor 140, and a speaker 150. The processor 120 is configured to determine, based on the output data of the distance sensor 130, how tightly the user is wearing the wearable audio device 100; the processor 120 is further configured to determine the distance between the speaker 150 and the user's ear based on the tightness state and the output data of the infrared sensor 140; and the processor 120 is further configured to perform audio calculation and audio output based on that distance, so that the speaker 150 outputs the best sound quality at that distance. Here, the best sound quality is understood as the best listening experience the user can obtain at that distance. In addition, the wearable audio device 100 may also include a power supply module 110 that powers the wearable audio device 100. The power supply module 110 is not essential; the wearable audio device 100 may instead be powered by an external power supply.
In the embodiment of the present application, the wearable audio device 100 includes two sensors: an infrared sensor 140 (IR-Sensor) for non-contact measurement and a distance sensor 130 (P-Sensor) for contact measurement. The processor 120 first determines, from the output data of the distance sensor 130, how tightly the user is wearing the wearable audio device 100; it then combines this with the output data of the infrared sensor 140 to determine the distance between the speaker 150 and the user's ear; finally, it performs audio calculation and audio output according to that distance, so that the speaker 150 outputs the best sound quality to the user at that distance. The device thereby outputs different audio for different scenarios and ensures that the sound-quality requirements of each scenario are met.
For example, when the wearable audio device 100 is a wireless earphone, the earphone worn in the user's ear may shift as the user walks or makes some other movement. Suppose that after such a shift the speaker 150 sits farther from the ear canal than before (i.e., the fit is looser than before). The distance sensor 130 of the wearable audio device 100 then detects the change in how the earphone contacts the user's skin, and combined with the distance measurement from the infrared sensor 140, the processor 120 can accurately calculate the current distance from the speaker 150 to the ear and perform audio calculation and audio output in real time according to that distance (for example, because the speaker is now farther from the ear canal, adjusting the volume, equalizer parameters, filter parameters, and so on relative to the previous moment), so that the user obtains the best sound-quality experience at every moment in different scenarios. In other scenarios, such as users whose differently shaped ear canals lead to different wearing states, the processor 120 can likewise perform targeted audio calculation and audio output from the detection data of the multiple sensors to meet the user's sound-quality requirements.
Similarly, when the wearable audio device 100 is a pair of wireless audio smart glasses, the temple resting on the user's ear may shift as the user walks or makes some other movement, leaving the speaker 150 mounted on the temple farther from the ear canal than before (which can be understood as a looser fit or a changed wearing position). The distance sensor 130 of the wearable audio device 100 then detects the change in how the temple contacts the user's skin, and combined with the distance measurement from the infrared sensor 140, the processor 120 can accurately calculate the current distance from the speaker 150 to the ear and perform audio calculation and audio output in real time according to that distance (for example, adjusting the volume, equalizer parameters, filter parameters, and so on relative to the previous moment because the temple has moved away from the ear canal), so that the user obtains the best sound-quality experience at every moment in various scenarios. In other scenarios, such as users with different head shapes or glasses worn in different states, the processor 120 can likewise perform targeted audio calculation and audio output from the detection data of the multiple sensors to meet the user's sound-quality requirements.
Therefore, the wearable audio device 100 of the embodiment of the present application uses the infrared sensor 140 and the distance sensor 130 to obtain the distance from the speaker 150 to the user's ear and performs targeted audio calculation and audio output for each wearing state, so that the user's sound-quality requirements are met in various scenarios. Moreover, because the audio is processed adaptively by fusing data from multiple sensors, scene detection is highly accurate and the audio output is highly adaptive.
In other embodiments of the present application, the wearable audio device 100 may omit the distance sensor 130 and include only the infrared sensor 140. In such an embodiment, the wearable audio device 100 determines the distance between the speaker 150 and the user's ear from the output data of the infrared sensor 140 alone and performs audio calculation and audio output based on that distance. Like the previous embodiment, this variant can still adapt its audio processing to different scenarios and meet the user's sound-quality requirements in each of them; however, because the adaptation is based on the detection data of only one sensor, the scene-detection accuracy is slightly lower.
The interaction of the processor 120 with the infrared sensor 140 and the distance sensor 130 is described below. The processor 120 is, for example, a Bluetooth chip (IC), a Wi-Fi chip, or another microcontroller.
In one embodiment of the present application, a correction initial value for the infrared sensor 140 may be stored in the processor 120. After the wearable audio device 100 powers on, the processor 120 writes this correction initial value into the infrared sensor 140. When the detection data of the infrared sensor 140 reaches a threshold determined from the correction initial value (i.e., the movement of the earbud, or of the smart-glasses temple, reaches the threshold), the infrared sensor 140 outputs an interrupt signal, and the processor 120 reads data from the infrared sensor 140 in response, in order to calculate the distance between the speaker 150 and the user's ear. In this embodiment the processor 120 does not have to poll the infrared sensor 140 continuously; it reads data from the infrared sensor 140 to calculate the speaker-to-ear distance only after receiving the sensor's interrupt signal.
In another embodiment of the present application, the correction initial value of the infrared sensor 140 may likewise be stored in the processor 120. When the wearable audio device 100 powers on, the processor 120 writes the correction initial value into the infrared sensor 140; thereafter, whenever a sensor other than the infrared sensor 140 detects a change in its data, the processor 120 reads data from the infrared sensor 140 to calculate the distance between the speaker 150 and the user's ear. In this embodiment the wearable audio device 100 includes other sensors besides the infrared sensor 140, such as the distance sensor 130 and the acceleration sensor described later. Accordingly, when the detection data of one of those sensors changes (for example, the processor 120 finds from the detection data of the distance sensor 130 that the device's contact with the user's skin has changed, or, as described later, finds from the detection data of the acceleration sensor that the user is moving), the processor 120 reads the infrared sensor 140 in real time and combines its data to calculate the speaker-to-ear distance for audio calculation and audio output.
In both embodiments, the infrared sensor 140 may have three channels and hence three calibration initial values. These values can be obtained by calibrating the infrared sensor 140 during factory production and stored in the FLASH memory of the processor 120 (e.g., a Bluetooth chip). When the wearable audio device 100 powers on, the three calibration initial values are read from FLASH and written to the infrared sensor 140. When the infrared sensor 140 raises an interrupt signal, the processor 120 reads data from the infrared sensor 140 via the I2C bus.
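To make this power-up and interrupt flow concrete, the following is a minimal C sketch, offered as illustration only and not as part of the original disclosure. The register addresses, FLASH offset, and the flash_read/i2c_write/i2c_read helpers are hypothetical, since no specific part is named; only the sequence (restore three calibration values from FLASH, write them to the sensor, touch the I2C bus only after an interrupt) comes from the description above.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical HAL primitives: the description only states that the
 * Bluetooth chip talks to the IR-Sensor over I2C and keeps the three
 * per-channel calibration values in its FLASH. */
extern void flash_read(uint32_t addr, void *buf, uint32_t len);
extern void i2c_write(uint8_t dev, uint8_t reg, const uint8_t *d, uint8_t n);
extern void i2c_read(uint8_t dev, uint8_t reg, uint8_t *d, uint8_t n);

#define IR_I2C_ADDR  0x5A    /* placeholder device address   */
#define IR_REG_CAL   0x10    /* placeholder calibration regs */
#define IR_REG_DATA  0x20    /* placeholder measurement regs */
#define FLASH_IR_CAL 0x0800  /* placeholder FLASH offset     */

static volatile bool ir_irq_pending;  /* set by the sensor's interrupt line */

/* Power-up: restore the three factory calibration values from FLASH
 * and write them into the IR-Sensor, as described above. */
void ir_sensor_init(void)
{
    uint8_t cal[3];
    flash_read(FLASH_IR_CAL, cal, sizeof cal);
    i2c_write(IR_I2C_ADDR, IR_REG_CAL, cal, sizeof cal);
}

/* Main-loop poll: read over I2C only after the sensor has raised its
 * interrupt, i.e. after earbud/temple movement crossed the threshold. */
bool ir_sensor_read(uint8_t out[3])
{
    if (!ir_irq_pending)
        return false;          /* nothing new: no I2C traffic needed */
    ir_irq_pending = false;
    i2c_read(IR_I2C_ADDR, IR_REG_DATA, out, 3);
    return true;
}
```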
In the embodiment of the present application, the correction initial value of the distance sensor 130 may be stored in the processor 120. When the wearable audio device 100 powers on, the processor 120 writes the correction initial value into the distance sensor 130; the processor 120 then determines how tightly the user is wearing the wearable audio device 100 from the magnitude of the difference between the detection data of the distance sensor 130 and the correction initial value. A specific example follows.
In one example, the P-Sensor is calibrated during factory production, and the resulting initial values of its four channels, Init_CH0, Init_CH1, Init_CH2, and Init_CH3, are stored in the FLASH of the Bluetooth chip. After the wearable audio device 100 powers on, the calibration values of the four channels are read back from FLASH as the correction initial values (initial values for short). When the user wears the wearable audio device 100, each of the four P-Sensor channels senses a change, and the tighter the contact with a channel's touch area, the larger that channel's change. The Bluetooth chip reads the current value of each channel from the P-Sensor's registers over I2C, denoted CH0_Value, CH1_Value, CH2_Value, and CH3_Value, and computes the difference between each channel's current value and its initial value, denoted Delta_CH0, Delta_CH1, Delta_CH2, and Delta_CH3. In one example, ten consecutive readings are taken at 50 ms intervals, yielding ten such differences per channel. After the ten reads complete, the ten Delta_CH differences of each channel are averaged; in one example, the maximum and minimum values are discarded and the remaining eight differences are averaged to obtain the channel's actual average difference Delta_CH_Value. Each channel's Delta_CH_Value is then compared against set threshold values to obtain that channel's tightness. In one example there are three thresholds: 200, 600, and 1000. If a channel's Delta_CH_Value is greater than 200 and at most 600, its tightness state is determined to be loose; if greater than 600 and at most 1000, medium; and if greater than 1000, tight. Once the tightness of all four channels is determined, the distance between the speaker 150 and the ear is calculated from the physical contact order of the channels together with the IR-Sensor's detection data, and used for audio calculation and audio output, so that the best sound quality can be output at every moment.
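The channel-averaging and threshold logic of this example can be sketched in C as follows (illustration only). The ten-sample window at 50 ms intervals, the drop-max/drop-min averaging, and the 200/600/1000 thresholds come from the example above; the function and type names, and the FIT_NOT_WORN case for differences of 200 or less, are assumptions.

```c
#include <stdint.h>

typedef enum { FIT_NOT_WORN, FIT_LOOSE, FIT_MEDIUM, FIT_TIGHT } fit_t;

/* Classify one P-Sensor channel from ten readings taken at 50 ms
 * intervals: subtract the channel's correction initial value from each
 * reading, drop the largest and smallest differences, average the
 * remaining eight (Delta_CH_Value), then compare against the example
 * thresholds 200 / 600 / 1000. Each of the four channels is
 * classified independently. */
fit_t classify_channel(const int32_t reading[10], int32_t init_value)
{
    int32_t sum = 0, max = INT32_MIN, min = INT32_MAX;

    for (int i = 0; i < 10; ++i) {
        int32_t delta = reading[i] - init_value;   /* Delta_CHx */
        sum += delta;
        if (delta > max) max = delta;
        if (delta < min) min = delta;
    }
    int32_t delta_ch_value = (sum - max - min) / 8;  /* average of the 8 kept */

    if (delta_ch_value > 1000) return FIT_TIGHT;
    if (delta_ch_value > 600)  return FIT_MEDIUM;
    if (delta_ch_value > 200)  return FIT_LOOSE;
    return FIT_NOT_WORN;  /* below the lowest threshold (assumed meaning) */
}
```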
In an embodiment of the present application, the processor 120 may further be configured to obtain user input and to decide, based on that input, whether to perform the distance calculation and audio calculation described above. This embodiment gives the user an option to enable or disable the adaptive audio processing function of the wearable audio device 100. When the function is enabled, the processor 120 performs the aforementioned distance and audio calculations from the data of the infrared sensor 140 and/or the distance sensor 130. Conversely, when it is not enabled, the infrared sensor 140 and/or the distance sensor 130 may stay inactive, or the processor 120 may simply output fixed audio without reading them. In one example, the wearable audio device 100 includes user input keys through which the processor 120 receives user input. In another example, the wearable audio device 100 has companion application software, and the processor 120 obtains user input by communicating with it.
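A minimal C sketch of this option follows (illustration only). The adaptive_enabled flag, on_user_toggle, and the two routines it dispatches to are hypothetical names; the description only specifies that input from a key or the companion APP decides whether the adaptive path or a fixed audio output runs.

```c
#include <stdbool.h>

/* Hypothetical enable flag, set from an input-key press or from a
 * command sent by the companion APP. */
static bool adaptive_enabled = false;

extern void adaptive_audio_step(void);  /* distance + audio calculation path */
extern void output_fixed_audio(void);   /* fixed audio; sensors may stay idle */

void on_user_toggle(bool enable) { adaptive_enabled = enable; }

void audio_task(void)
{
    if (adaptive_enabled)
        adaptive_audio_step();   /* perform distance and audio calculation */
    else
        output_fixed_audio();    /* no sensor reads needed */
}
```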
Based on the above description, the wearable audio device 100 of the embodiment of the present application uses the infrared sensor and the distance sensor to obtain the distance from the speaker to the user's ear and performs targeted audio calculation and audio output for each wearing state, so that the user's sound-quality requirements are met in various scenarios; moreover, because the audio is processed adaptively by fusing multi-sensor data, scene detection is highly accurate and the audio output is highly adaptive.
A wearable audio device 200 according to another embodiment of the present application is described below with reference to fig. 2. As shown in fig. 2, the wearable audio device 200 includes a power supply module 210, a processor 220, a distance sensor 230, an infrared sensor 240, a speaker 250, and an acceleration sensor 260. The power supply module 210 is configured to supply power to the wearable audio device; the processor 220 is configured to determine a tightness state of the wearable audio device worn by the user based on the output data of the distance sensor 230; the processor 220 is also configured to determine a distance between the speaker 250 and the user's ear based on the tightness and the output data of the infrared sensor 240; the processor 220 is further configured to perform audio calculation and audio output based on the distance, so that the speaker 250 outputs the best sound quality at the distance; the processor 220 is also used to determine whether the user is in a motion state based on the output data of the acceleration sensor 260, and perform audio calculation and audio output according to the motion state, so that the speaker 250 outputs the best sound quality in the motion state of the user. Here, the power supply module 210 is not essential, and the wearable audio device 200 may also be powered by an external power supply.
The wearable audio device 200 of this embodiment is substantially similar to the wearable audio device 100 described above, except that it further includes an acceleration sensor 260 (G-Sensor), which can be used to measure and identify whether the user is moving. When the processor 220 determines from the output data of the acceleration sensor 260 that the user is not moving, it can perform audio calculation and audio output based on the distance between the speaker 250 and the user's ear, as described above, so that the speaker 250 outputs the best sound quality at that distance. When the processor 220 determines from the output data of the acceleration sensor 260 that the user is moving, it can obtain a preset audio curve (an audio curve dedicated to the user's motion state, whose abscissa may be frequency and whose ordinate may be amplitude), adjust that curve based on the distance between the speaker 250 and the user's ear, and perform audio calculation and audio output based on the adjusted curve, so that the speaker 250 outputs the best sound quality for the current ear distance while the user is moving. The wearable audio device 200 of this embodiment can therefore also meet the sound-quality requirement while the user is in motion. For brevity, the parts of the wearable audio device 200 that are similar to the wearable audio device 100 are not described again here; only the differences are described.
In one embodiment, the processor 220 adjusting the preset audio curve based on the distance between the speaker 250 and the user's ear may include: adjusting the amplitude of a preset frequency band or preset frequency points in the preset audio curve based on that distance. In this embodiment, while the user is moving, the processor 220 adjusts a fixed frequency band or fixed frequency points of the preset audio curve corresponding to the motion state, and the distance between the speaker 250 and the user's ear determines how their amplitude is adjusted. For example, in the user's motion state, the processor 220 may adjust the amplitude of the 20 Hz to 20 kHz band of the preset audio curve: when the distance between the speaker 250 and the ear grows, the amplitude of that band can be increased; when the distance shrinks, it can be reduced. As another example, the processor 220 may adjust the amplitude at fixed frequency points such as 100 Hz, 500 Hz, 1000 Hz, 1500 Hz, and 2000 Hz in the preset audio curve: when the distance between the speaker 250 and the ear grows, the amplitude at these points can be increased; when it shrinks, it can be reduced. The bands and frequency points above are examples; in a real scenario they may depend on the hearing range of the human ear and on the type and content of the audio the user is listening to. Overall, this embodiment lets the user obtain a good sound-quality effect while moving.
In another embodiment, the processor 220 adjusting the preset audio curve based on the distance between the speaker 250 and the user's ear may include: determining the frequency band or frequency points to adjust based on that distance, and then adjusting their amplitude in the preset audio curve, also based on that distance. In this embodiment, while the user is moving, the processor 220 uses the speaker-to-ear distance not only to decide how much to adjust the amplitude but also to decide which band or frequency points of the preset audio curve to adjust. For example, when the distance between the speaker 250 and the ear falls in a first distance range, the amplitude of a first frequency band of the preset curve is adjusted, with the adjustment magnitude positively correlated with the distance; when the distance falls in a second distance range, the amplitude of a second frequency band is adjusted, where the distances in the second range may be greater than those in the first range, the second band may be wider than the first, and the adjustment magnitude is again positively correlated with the distance. This is an example of adjusting a band; adjusting frequency points is similar and is not repeated here. Overall, this embodiment gives the user a more precise adaptive sound-quality adjustment while moving.
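The following C sketch illustrates this second variant (illustration only). The 5 mm range split, the two bands, the 3 mm reference distance, and the 0.5 dB-per-mm slope are all invented for illustration; the description fixes only the qualitative behavior: the distance range selects the band (the farther range selecting the wider band), and the amplitude adjustment is positively correlated with the distance.

```c
#include <stddef.h>

/* One point of the preset audio curve: abscissa frequency, ordinate amplitude. */
typedef struct { float freq_hz; float gain_db; } curve_pt;

#define REF_DIST_MM 3.0f   /* hypothetical nominal speaker-to-ear distance */

/* Pick the band to adjust from the speaker-to-ear distance, then change
 * its amplitude in proportion to that distance: larger distance gives a
 * boost, smaller distance gives a cut. */
void adjust_curve(curve_pt *curve, size_t n, float dist_mm)
{
    float lo_hz, hi_hz;
    if (dist_mm < 5.0f) { lo_hz = 100.0f; hi_hz = 2000.0f;  } /* first range: narrower band */
    else                { lo_hz = 20.0f;  hi_hz = 20000.0f; } /* second range: wider band   */

    float delta_db = 0.5f * (dist_mm - REF_DIST_MM); /* positively correlated */

    for (size_t i = 0; i < n; ++i)
        if (curve[i].freq_hz >= lo_hz && curve[i].freq_hz <= hi_hz)
            curve[i].gain_db += delta_db;
}
```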
In yet another embodiment, once the processor 220 has determined that the user is moving, it may stop determining the distance between the speaker 250 and the user's ear and perform audio calculation and audio output directly from the preset audio curve. The rationale is that during motion the distance may change from moment to moment, which would make the audio calculation more complex; so once the user is determined to be moving, there may be no need to combine the output data of the distance sensor 230 and the infrared sensor 240, and it suffices to perform audio calculation and audio output using only the preset audio curve of the wearable audio device 200 for the motion state. The flowchart in Fig. 3 shows this scenario: there the wearable audio device is a wireless earphone, and the real-time wearing position shown in Fig. 3 corresponds to the speaker-to-ear distance determined as described above.
The interaction of the processor 220 with the acceleration sensor 260 is described below.
In the embodiment of the present application, the correction initial value of the acceleration sensor 260 may be stored in the processor 220. When the wearable audio device powers on, the processor 220 writes the correction initial value into the acceleration sensor 260; the processor 220 then determines whether the user is moving from the magnitude of the difference between the detection data of the acceleration sensor 260 and the correction initial value. A specific example follows.
In one example, during factory production the wearable audio device is first worn and then powered on, and the processor 220 (such as a Bluetooth chip) and the G-Sensor are each initialized. After initialization, the Bluetooth chip reads the current XYZ-axis data from the G-Sensor over I2C, for example reading 20 consecutive groups of data and applying a sliding filter; the filtered G-Sensor values in the worn, stationary state serve as the calibration values for the X, Y, and Z axes. These G-Sensor calibration values are stored in the Bluetooth chip as correction initial values. Each time the Bluetooth chip starts up, it reads the three values back from FLASH as the G-Sensor's correction initial values (initial values for short).
When the G-Sensor's values in the X, Y, and Z directions change by more than a certain amount, the G-Sensor outputs an interrupt signal. On receiving it, the Bluetooth chip reads the current values of the three axes over I2C and computes the difference between each current value and its initial value to obtain the acceleration change. In one example, the acceleration change is collected in real time over a period, for example about 20 acquisitions at 50 ms intervals. The collected data can be stored in a software cache of the Bluetooth chip; the difference calculation compares the current value with the initial value, and the comparison result records both the direction of change and the absolute value of the difference. The Bluetooth chip then analyzes the collected data: if the data changes in the increasing direction, the maximum of the change is found and recorded as a peak; if it changes in the decreasing direction, the maximum of the change is found and recorded as a trough; and the time base between the peak and trough data is calculated. Having determined the peak value, the trough value, and the time between them, the processor 220 can compare them against preset peak, trough, and time thresholds to decide whether the user is moving, for use in audio calculation and audio output.
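A minimal C sketch of this peak/trough analysis follows (illustration only). The roughly 20 samples at 50 ms intervals come from the example above; the threshold values and the single-axis simplification are assumptions, since the concrete thresholds are left to the implementer.

```c
#include <stdint.h>
#include <stdbool.h>

#define N_SAMPLES     20    /* ~20 acquisitions at 50 ms, per the example */
#define SAMPLE_MS     50
#define PEAK_MIN      300   /* placeholder thresholds: the description */
#define TROUGH_MIN    300   /* only says "preset threshold of the peak  */
#define PERIOD_MAX_MS 600   /* and the trough and the time"             */

/* Decide whether the user is moving from one axis of G-Sensor deltas
 * (current value minus the calibration value restored at power-up):
 * find the largest rise (peak) and largest fall (trough), take the time
 * between them, and compare all three against the preset thresholds. */
bool is_moving(const int32_t delta[N_SAMPLES])
{
    int peak_i = 0, trough_i = 0;
    for (int i = 1; i < N_SAMPLES; ++i) {
        if (delta[i] > delta[peak_i])   peak_i = i;    /* record the peak   */
        if (delta[i] < delta[trough_i]) trough_i = i;  /* record the trough */
    }
    int32_t peak      =  delta[peak_i];
    int32_t trough    = -delta[trough_i];
    int32_t period_ms = (peak_i > trough_i ? peak_i - trough_i
                                           : trough_i - peak_i) * SAMPLE_MS;

    return peak >= PEAK_MIN && trough >= TROUGH_MIN && period_ms <= PERIOD_MAX_MS;
}
```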
In an embodiment of the present application, the processor 220 may further be configured to obtain user input and, based on it, determine the tuning-parameter configuration corresponding to the motion mode (such as the preset audio curve described above). In this embodiment the user can preset the motion mode's tuning-parameter configuration, for example through a user input key on the wearable audio device or through its companion application (APP); the user can customize the settings or choose among existing options. On this basis, when the processor 220 determines that the user is moving, it can perform audio calculation from the tuning-parameter configuration the user has set, with no need for autonomous adaptive calculation. Besides the sports mode, the user may also select adaptive audio modes with different sound effects, such as acoustic, low-frequency enhancement, 3D audio, and surround.
Fig. 4 shows the connections, interactions, inputs, and outputs among the components of the wearable audio device according to the embodiment of the present application. VDD is the power supply and GND is ground; CH0 to CH3 are the four channel inputs of the P-Sensor; the Bluetooth chip SoC is one example of the processor 220; OUT1 to OUT4 are the audio outputs for the example case of four speakers 250; and the APP is the wearable audio device's companion application described above. Read together with the foregoing description, Fig. 4 clarifies the structure and operation of the wearable audio device, so they are not detailed again here. The example shown in Fig. 4 suits the application scenario in which the wearable audio device is a pair of wireless audio smart glasses; in that scenario, two P-Sensors can be arranged on each of the left and right temples to determine more precisely how tightly the glasses are worn. This is of course only an example; in other scenarios, one P-Sensor may be provided on each of the left and right temples. Likewise, for the application scenario in which the wearable audio device is a wireless earphone, one or more P-Sensors can be placed on each of the left and right earbuds.
Based on the above description, the wearable audio device of the embodiment of the present application uses the infrared sensor and the distance sensor to obtain the distance from the speaker to the user's ear, uses the acceleration sensor to obtain the user's motion state, and performs targeted audio calculation and audio output for each wearing state, thereby providing adaptive audio processing for different scenarios so that the user's sound-quality requirements are met in all of them; moreover, because the audio is processed adaptively by fusing multi-sensor data, scene detection is highly accurate and the audio output is highly adaptive.
An audio processing method 500 provided according to another aspect of the present application is described below in conjunction with Fig. 5. As shown in Fig. 5, the audio processing method 500 may include the following steps:
In step S510, the tightness state of the wearable audio device worn by the user is determined based on the output data of the distance sensor in the wearable audio device.
In step S520, a distance between the speaker of the wearable audio device and the ear of the user is determined based on the tightness state and the output data of the infrared sensor in the wearable audio device.
In step S530, audio calculation and audio output are performed based on the distance between the speaker and the ear of the user.
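Putting steps S510 to S530 together, a hypothetical top-level routine might look like the following C sketch (illustration only). The distance_from fusion helper is assumed: the method states that the tightness state and the infrared data are combined into a distance but does not spell out the fusion rule.

```c
#include <stdint.h>
#include <stdbool.h>

typedef enum { FIT_NOT_WORN, FIT_LOOSE, FIT_MEDIUM, FIT_TIGHT } fit_t; /* as in the P-Sensor sketch */

/* Hypothetical helpers standing in for the steps above. */
extern fit_t read_tightness(void);                           /* S510: from the P-Sensor */
extern bool  ir_sensor_read(uint8_t out[3]);                 /* fresh IR data, if any   */
extern float distance_from(fit_t fit, const uint8_t ir[3]);  /* S520: assumed fusion    */
extern void  audio_compute_and_output(float dist_mm);        /* S530                    */

void adaptive_audio_step(void)
{
    static float dist_mm = 3.0f;  /* placeholder nominal wearing distance */
    uint8_t ir[3];

    fit_t fit = read_tightness();          /* S510 */
    if (ir_sensor_read(ir))                /* only when the IR data is fresh */
        dist_mm = distance_from(fit, ir);  /* S520 */
    audio_compute_and_output(dist_mm);     /* S530 */
}
```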
The audio processing method 500 according to the embodiment of the present application can be performed by the processor (120 or 220) of the wearable audio device (100 or 200) according to the embodiment of the present application, which has been described in detail above, and here, for brevity, detailed description is omitted, and only some main operations are described.
In an embodiment of the present application, the method 500 may further include: whether the user is in a motion state is determined based on output data of an acceleration sensor in the wearable audio device, and audio calculation and audio output are performed based on a distance between a speaker and an ear when it is determined that the user is in a non-motion state.
In an embodiment of the present application, the method 500 may further include: and when the user is determined to be in the motion state, acquiring a preset audio curve, adjusting the preset audio curve based on the distance between the loudspeaker and the ear of the user, and performing audio calculation and audio output based on the adjusted audio curve.
In an embodiment of the present application, adjusting the preset audio curve based on a distance between the speaker and the ear of the user may include: and adjusting the amplitude of a preset frequency band or a preset frequency point in a preset audio curve based on the distance between the loudspeaker and the ear of the user.
In an embodiment of the application, adjusting the audio curve based on the distance between the speaker and the ear of the user may include: the frequency band or frequency point to be adjusted is determined based on the distance between the speaker and the ear of the user, and the amplitude of the frequency band or frequency point in the preset audio curve is adjusted based on the distance between the speaker and the ear of the user.
In an embodiment of the present application, the method 500 may further include: user input is obtained and it is determined whether to perform an audio calculation based on the user input.
Furthermore, according to still another aspect of the present application, there is provided a storage medium having a computer program stored thereon, where the computer program is executed by a processor to execute the audio processing method according to the embodiment of the present application.
Based on the above description, the wearable audio device and audio processing method of the embodiments of the present application can adapt audio processing to different scenarios so that the user's sound-quality requirements are met in each of them; moreover, because the audio is processed adaptively by fusing multi-sensor data, scene detection is highly accurate and the audio output is highly adaptive.
Although the example embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above-described example embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present application. All such changes and modifications are intended to be included within the scope of the present application as claimed in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the present application, various features of the present application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the application and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Moreover, those of skill in the art will understand that although some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules according to embodiments of the present application. The present application may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present application may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website, or provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means can be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.
The above description covers only specific embodiments of the present application; the protection scope of the present application is not limited to them. Any change or substitution that a person skilled in the art can readily conceive within the technical scope disclosed by the present application shall fall within the protection scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A wearable audio device, comprising a processor, a distance sensor, an infrared sensor, and a speaker, wherein:
the processor is used for determining the tightness state of the wearable audio device worn by a user based on the output data of the distance sensor;
the processor is further configured to determine a distance between the speaker and the user's ear based on the tightness and output data of the infrared sensor;
the processor is further configured to perform audio calculations and audio outputs based on the distance.
2. The wearable audio device of claim 1, further comprising an acceleration sensor, the processor further configured to:
and determining whether the user is in a motion state based on the output data of the acceleration sensor, and performing audio calculation and audio output based on the distance when the user is determined to be in a non-motion state.
3. The wearable audio device of claim 2, wherein the processor is further configured to:
when the user is determined to be in the motion state, a preset audio curve is obtained, the preset audio curve is adjusted based on the distance between the loudspeaker and the ear of the user, and audio calculation and audio output are carried out based on the adjusted audio curve.
4. The wearable audio device of claim 3, wherein the processor adjusts the preset audio profile based on a distance between the speaker and an ear of the user, comprising:
adjusting the amplitude of a preset frequency band or a preset frequency point in the preset audio curve based on the distance between the loudspeaker and the ear of the user.
5. The wearable audio device of claim 3 wherein the processor adjusts the preset audio profile based on a distance between the speaker and the user's ear comprising:
determining a frequency band or a frequency point to be adjusted based on a distance between the speaker and the ear of the user, and adjusting an amplitude of the frequency band or the frequency point in the preset audio curve based on the distance between the speaker and the ear of the user.
6. A method of audio processing, the method comprising:
determining the tightness state of the wearable audio device worn by a user based on output data of a distance sensor in the wearable audio device;
determining a distance between a speaker of the wearable audio device and an ear of the user based on the tightness state and output data of an infrared sensor in the wearable audio device;
and performing audio calculation and audio output based on the distance.
7. The method of claim 6, further comprising:
determining whether the user is in a motion state based on output data of an acceleration sensor in the wearable audio device, and performing audio calculation and audio output based on the distance when it is determined that the user is in a non-motion state.
8. The method of claim 7, further comprising:
when the user is determined to be in the motion state, a preset audio curve is obtained, the preset audio curve is adjusted based on the distance between the loudspeaker and the ear of the user, and audio calculation and audio output are carried out based on the adjusted audio curve.
9. The method of claim 8, wherein the adjusting the preset audio curve based on the distance between the speaker and the user's ear comprises:
adjusting the amplitude of a preset frequency band or a preset frequency point in the preset audio curve based on the distance between the loudspeaker and the ear of the user.
10. The method of claim 8, wherein the adjusting the preset audio curve based on the distance between the speaker and the user's ear comprises:
determining a frequency band or a frequency point to be adjusted based on a distance between the speaker and the ear of the user, and adjusting an amplitude of the frequency band or the frequency point in the preset audio curve based on the distance between the speaker and the ear of the user.
CN202210770274.3A 2022-06-30 2022-06-30 Wearable audio device and audio processing method Pending CN115175045A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210770274.3A | 2022-06-30 | 2022-06-30 | Wearable audio device and audio processing method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202210770274.3A | 2022-06-30 | 2022-06-30 | Wearable audio device and audio processing method

Publications (1)

Publication Number | Publication Date
CN115175045A | 2022-10-11

Family

ID=83489385

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202210770274.3A | Wearable audio device and audio processing method | 2022-06-30 | 2022-06-30

Country Status (1)

Country | Link
CN (1) | CN115175045A (en)


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination