CN113613142A - Audio processing method and wireless earphone - Google Patents

Audio processing method and wireless earphone

Info

Publication number
CN113613142A
Authority
CN
China
Prior art keywords
audio
audio file
wireless headset
played
wireless
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110945332.7A
Other languages
Chinese (zh)
Inventor
王勇 (Wang Yong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhongke Shengkong Technology Co ltd
Original Assignee
Beijing Zhongke Shengkong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongke Shengkong Technology Co ltd filed Critical Beijing Zhongke Shengkong Technology Co ltd
Priority to CN202110945332.7A
Publication of CN113613142A
Legal status: Withdrawn (current)

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 - Circuits for transducers, loudspeakers or microphones
    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 - Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 - Digital recording or reproducing
    • G11B20/10009 - Improvement or modification of read or write signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 - Details of transducers, loudspeakers or microphones
    • H04R1/10 - Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1091 - Details not provided for in groups H04R1/1008 - H04R1/1083

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Telephone Function (AREA)

Abstract

The application provides an audio processing method and a wireless earphone, and relates to the technical field of signal processing. The audio processing method is applied to a wireless headset that includes an audio module in which audio files are stored, and comprises the following steps: determining a control instruction for an audio file based on the wireless headset and/or an electronic device connected to the wireless headset; and controlling the wireless headset based on the control instruction. Because the audio processing method runs on the wireless headset itself, the sound quality degradation caused by processes such as wireless transmission is avoided while the headset retains its other functions, allowing the user to experience high-quality music.

Description

Audio processing method and wireless earphone
Technical Field
The present application relates to the field of signal processing technologies, and in particular, to an audio processing method and a wireless headset.
Background
In recent years, wireless headsets, particularly True Wireless Stereo (TWS) headsets, have received increasing attention by virtue of their ability to free users from the constraints of headset cords.
In general, before a wireless headset can play audio, the device end (for example, a mobile phone) first encodes and compresses the audio file and then transmits it to the wireless headset over Bluetooth, and the wireless headset decodes the compressed audio file. Because of the encoding and compression involved, existing wireless headsets typically deliver poorer sound quality than wired headsets.
Disclosure of Invention
The present application is proposed to solve the above technical problem. Embodiments of the application provide an audio processing method and a wireless headset.
In a first aspect, an embodiment of the present application provides an audio processing method, which is applied to a wireless headset including an audio module, where an audio file is stored in the audio module. The audio processing method comprises the following steps: determining a control instruction for the audio file based on the wireless headset and/or an electronic device connected with the wireless headset, wherein the control instruction is used for controlling the wireless headset; and controlling the wireless headset based on the control instruction.
With reference to the first aspect, in certain implementations of the first aspect, the audio module includes a headphone local equalizer. Before the wireless headset is controlled based on the control instruction, the method further comprises: determining an audio file to be played based on the control instruction and the audio files stored in the audio module. The controlling of the wireless headset based on the control instruction includes: controlling, based on the control instruction, the headphone local equalizer to determine an equalization parameter corresponding to the audio file to be played, so that the headphone local equalizer performs a spectrum equalization operation on the audio file to be played based on the equalization parameter, and controlling the wireless headset to play the audio file to be played after the spectrum equalization operation.
With reference to the first aspect, in some implementations of the first aspect, determining the equalization parameter corresponding to the audio file to be played includes: determining headphone target frequency response information corresponding to the audio file to be played; determining speaker path transfer function information of the wireless headset; and determining the equalization parameter based on the headphone target frequency response information and the speaker path transfer function information.
With reference to the first aspect, in some implementations of the first aspect, the determining of the headphone target frequency response information corresponding to the audio file to be played includes: inputting the audio file to be played into a target frequency response model to determine the headphone target frequency response information.
With reference to the first aspect, in some implementations of the first aspect, determining the equalization parameter corresponding to the audio file to be played includes: acquiring a user-defined equalization parameter of the user; and determining the equalization parameter corresponding to the audio file to be played based on the user-defined equalization parameter.
With reference to the first aspect, in certain implementations of the first aspect, the control instruction includes an audio switching instruction, and the determining of the control instruction for the audio file based on the wireless headset and/or an electronic device connected to the wireless headset includes at least one of: determining the audio switching instruction based on the user's motion manipulation information for the wireless headset; and determining the audio switching instruction based on the user's instruction information for the electronic device.
In a second aspect, an embodiment of the present application provides an audio processing method, which is applied to an electronic device connected to a wireless headset, where the wireless headset includes an audio module, and the audio module stores an audio file. The audio processing method comprises the following steps: generating a control instruction for the audio file based on the user operation information, wherein the control instruction is used for controlling the wireless earphone; and sending the control instruction to the wireless earphone so as to control the wireless earphone by using the control instruction.
In a third aspect, an embodiment of the present application provides a wireless headset, including: an audio module in which audio files are stored; a determining module configured to determine a control instruction for an audio file based on the wireless headset and/or an electronic device connected to the wireless headset, where the control instruction is used to control the wireless headset; and a control module configured to control the wireless headset based on the control instruction.
With reference to the third aspect, in some implementations of the third aspect, the audio module includes a headphone local equalizer configured to determine an equalization parameter corresponding to an audio file to be played and to perform a spectrum equalization operation on the audio file to be played based on the equalization parameter, where the audio file to be played is determined based on the audio files stored in the audio module.
With reference to the third aspect, in certain implementations of the third aspect, the wireless headset further includes an extension interface connected to the audio module, the extension interface configured to write a new audio file to the audio module.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, which stores a computer program for executing the audio processing method mentioned in the first aspect and/or the second aspect.
In a fifth aspect, an embodiment of the present application provides a computer device, including: a processor; a memory for storing the processor-executable instructions; the processor is configured to perform the audio processing method according to the first aspect and/or the second aspect.
In the embodiments of the application, the audio file is burned directly into the wireless headset, and the headset is then controlled with a control instruction determined based on the wireless headset and/or an electronic device connected to the wireless headset. This avoids the audio distortion introduced by wireless transmission (for example, Bluetooth transmission) and improves the sound quality delivered by the wireless headset. In other words, while the wireless headset retains its other functions, the embodiments of the application avoid the sound quality degradation caused by processes such as wireless transmission, so the user can experience high-quality music.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a schematic flowchart illustrating an audio processing method according to an exemplary embodiment of the present application.
Fig. 2 is a schematic flowchart illustrating an audio processing method according to another exemplary embodiment of the present application.
Fig. 3 is a schematic flowchart illustrating a process of determining an equalization parameter corresponding to an audio file to be played according to an exemplary embodiment of the present application.
Fig. 4 is a schematic flowchart illustrating a process of determining an equalization parameter corresponding to an audio file to be played according to another exemplary embodiment of the present application.
Fig. 5 is a schematic flowchart illustrating a process of determining an equalization parameter corresponding to an audio file to be played according to another exemplary embodiment of the present application.
Fig. 6 is a flowchart illustrating a process of determining control instructions for an audio file based on a wireless headset and/or an electronic device connected to the wireless headset according to an exemplary embodiment of the present application.
Fig. 7 is a flowchart illustrating an audio processing method according to another exemplary embodiment of the present application.
Fig. 8 is a schematic structural diagram of a wireless headset according to an exemplary embodiment of the present application.
Fig. 9 is a schematic structural diagram of a wireless headset according to another exemplary embodiment of the present application.
Fig. 10 is a schematic structural diagram of a wireless headset according to another exemplary embodiment of the present application.
Fig. 11 is a schematic structural diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic flowchart illustrating an audio processing method according to an exemplary embodiment of the present application. Specifically, the audio processing method provided by the embodiment of the present application is applied to a wireless headset including an audio module, where an audio file is stored in the audio module.
As shown in fig. 1, an audio processing method provided in an embodiment of the present application includes the following steps.
Step S10, determining a control instruction for the audio file based on the wireless headset and/or the electronic device connected with the wireless headset, wherein the control instruction is used for controlling the wireless headset.
Illustratively, the electronic device is a mobile phone or a tablet computer connected with the wireless headset, and a user issues a control instruction for an audio file by manipulating the mobile phone or the tablet computer, where the control instruction may include at least one of an audio switching instruction, a pause instruction, a play instruction, and a volume adjustment instruction.
And step S30, controlling the wireless headset based on the control instruction.
Illustratively, the wireless headset is controlled, based on the control instruction, to perform the corresponding operation, for example at least one of audio switching, pausing, playing, and volume adjustment.
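As a minimal illustration of how such control instructions might be represented and dispatched on the headset side, consider the Python sketch below; the enum members, the player object, and its method names are assumptions introduced for this example and are not defined in the application.

```python
from enum import Enum, auto

class ControlInstruction(Enum):
    """Hypothetical control instructions mirroring those named in the text."""
    AUDIO_SWITCH_NEXT = auto()
    AUDIO_SWITCH_PREV = auto()
    PAUSE = auto()
    PLAY = auto()
    VOLUME_UP = auto()
    VOLUME_DOWN = auto()

def handle_instruction(player, instruction: ControlInstruction) -> None:
    """Dispatch a control instruction to a local player object (step S30)."""
    if instruction is ControlInstruction.AUDIO_SWITCH_NEXT:
        player.next_track()
    elif instruction is ControlInstruction.AUDIO_SWITCH_PREV:
        player.previous_track()
    elif instruction is ControlInstruction.PAUSE:
        player.pause()
    elif instruction is ControlInstruction.PLAY:
        player.play()
    elif instruction is ControlInstruction.VOLUME_UP:
        player.set_volume(min(player.volume + 0.1, 1.0))
    elif instruction is ControlInstruction.VOLUME_DOWN:
        player.set_volume(max(player.volume - 0.1, 0.0))
```

In this sketch, step S10 would produce the `ControlInstruction` value and step S30 would call `handle_instruction` on the headset's local player.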
In the practical application process, a control instruction for the audio file is firstly determined based on the wireless headset and/or the electronic device connected with the wireless headset, wherein the control instruction is used for controlling the wireless headset, and then the wireless headset is controlled based on the control instruction.
In the embodiments of the application, the audio file is burned directly into the wireless headset, and the headset is then controlled with a control instruction determined based on the wireless headset and/or an electronic device connected to the wireless headset. This avoids the audio distortion introduced by wireless transmission (for example, Bluetooth transmission) and improves the sound quality delivered by the wireless headset. In other words, while the wireless headset retains its other functions, the embodiments of the application avoid the sound quality degradation caused by processes such as wireless transmission, so the user can experience high-quality music.
Fig. 2 is a schematic flowchart illustrating an audio processing method according to another exemplary embodiment of the present application. The embodiment shown in fig. 2 is extended based on the embodiment shown in fig. 1, and the differences between the embodiment shown in fig. 2 and the embodiment shown in fig. 1 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 2, in the audio processing method provided in the embodiment of the present application, before the step of controlling the wireless headset based on the control instruction, the following steps are further included.
In step S20, an audio file to be played is determined based on the control instruction and the audio file stored in the audio module.
Illustratively, a user issues a control instruction for an audio file, within the range of audio files stored in the audio module, by operating a mobile phone or tablet computer; the control instruction includes a selection instruction and a confirmation instruction for the audio file, and the wireless headset then determines, based on the control instruction, the audio file to be played that the user selected.
In this embodiment of the application, the step of controlling the wireless headset based on the control instruction includes the following step.
Step S31, controlling, based on the control instruction, the headphone local equalizer to determine the equalization parameter corresponding to the audio file to be played, so that the headphone local equalizer performs the spectrum equalization operation on the audio file to be played based on the equalization parameter, and controlling the wireless headset to play the audio file to be played after the spectrum equalization operation.
Illustratively, the audio module includes a headphone local equalizer, which may be implemented with an FIR filter or an IIR filter. Based on the control instruction, the headphone local equalizer determines information related to the audio file to be played, such as its music genre, music style, musician, album, individual track, or even particular passages within a track, determines the equalization parameters of the audio file to be played based on that information, and then performs the spectrum equalization operation on the audio file to be played, so that the wireless headset plays the audio file to be played after the spectrum equalization operation.
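As an illustration of the FIR option mentioned above, the following Python sketch designs a simple FIR-based local equalizer from per-band gains and applies it to a block of samples; the sampling rate, band frequencies, gain values, and function names are assumptions for illustration only and are not taken from this application.

```python
import numpy as np
from scipy.signal import firwin2, lfilter

def design_local_equalizer(band_freqs_hz, band_gains_db, fs=48000, numtaps=513):
    """Design FIR taps whose magnitude response approximates the desired
    per-band equalization gains (the 'equalization parameters')."""
    nyquist = fs / 2.0
    # firwin2 expects normalized frequencies covering 0 .. 1 (1 = Nyquist).
    freqs = np.concatenate(([0.0], np.asarray(band_freqs_hz) / nyquist, [1.0]))
    gains_db = np.concatenate(([band_gains_db[0]], band_gains_db, [band_gains_db[-1]]))
    gains = 10.0 ** (np.asarray(gains_db) / 20.0)  # dB -> linear amplitude
    return firwin2(numtaps, freqs, gains)

def apply_spectrum_equalization(taps, samples):
    """Apply the spectrum equalization operation to a block of PCM samples."""
    return lfilter(taps, [1.0], samples)

# Purely illustrative values: mild bass boost, slight treble cut.
taps = design_local_equalizer([100.0, 1000.0, 8000.0], [3.0, 0.0, -2.0])
pcm_block = np.random.randn(48000)            # placeholder for decoded audio
equalized_block = apply_spectrum_equalization(taps, pcm_block)
```

An IIR implementation, for example a cascade of biquad peaking filters, would typically need far fewer coefficients at the cost of a nonlinear phase response.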
In the practical application process, a control instruction for an audio file is determined based on the wireless earphone and/or the electronic device connected with the wireless earphone, then the audio file to be played is determined based on the control instruction and the audio file stored in the audio module, and then the local equalizer of the earphone is controlled based on the control instruction to determine an equalization parameter corresponding to the audio file to be played, so that the local equalizer of the earphone performs a spectrum equalization operation on the audio file to be played based on the equalization parameter, and the wireless earphone is controlled to play the audio file to be played after the spectrum equalization operation.
The audio processing method provided in this embodiment determines the equalization parameters corresponding to the audio file to be played, so that the wireless headset can perform the spectrum equalization operation on the audio file based on those parameters, further improving the sound quality experience of the wireless headset. Specifically, because the method runs on the wireless headset, it not only avoids the sound quality degradation caused by processes such as wireless transmission, but also further optimizes the sound quality of the audio file to be played through equalization, so that the audio file is played in an optimized (for example, optimal) sound quality state, better satisfying the user's demand for higher-quality music.
Fig. 3 is a schematic flowchart illustrating a process of determining an equalization parameter corresponding to an audio file to be played according to an exemplary embodiment of the present application. The embodiment shown in fig. 3 is extended based on the embodiment shown in fig. 2, and the differences between the embodiment shown in fig. 3 and the embodiment shown in fig. 2 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 3, in the audio processing method provided in the embodiment of the present application, the step of determining the equalization parameter corresponding to the audio file to be played includes the following steps.
Step S311, determining target frequency response information of the earphone corresponding to the audio file to be played.
Illustratively, the earphone target frequency response information is the frequency response of the system (including circuitry and space) through which the target sound source (e.g., binary data stored in the audio module) is converted from an electrical signal to an acoustic signal received by the eardrum of the human ear. For example, the headphone target frequency response information is determined based on a Harman curve or a Pittsmet curve.
The headphone target frequency response information may be fixed, in which case the equalization parameters can still be updated through new headset firmware issued from the cloud so as to track the current Harman curve or Pittsmet curve. The headphone target frequency response information may also be variable, for example depending on information such as the music genre, music style, musician, album, individual track, or even particular passages within a track of the audio file to be played. In other words, different audio files to be played may have different headphone target frequency response information. For example, when the audio played by the wireless headset is switched from audio file A to audio file B, the headphone target frequency response information is switched accordingly, from that corresponding to audio file A to that corresponding to audio file B.
When the headphone target frequency response information is variable, the headphone target frequency response information corresponding to the audio file to be played may be determined in any of the following ways, explained here using a switch from audio file A to audio file B as an example.
a. The wireless headset locally responds to the switching instruction and obtains the headphone target frequency response information corresponding to audio file B, and then locally determines and adjusts the equalization parameters according to that information.
b. The wireless headset locally determines and adjusts the equalization parameters according to the headphone target frequency response information corresponding to audio file B.
c. The user switches audio files through the wireless headset application; the electronic device obtains the headphone target frequency response information corresponding to audio file B, determines the equalization parameters accordingly, and sends the equalization parameters to the wireless headset for local adjustment.
In step S312, the speaker path transfer function information of the wireless headset is determined.
Illustratively, the speaker path transfer function information is determined based on the frequency response of the electro-acoustic conversion system from the speaker input to the eardrum of the human ear.
In step S313, an equalization parameter is determined based on the headphone target frequency response information and the speaker path transfer function information.
Illustratively, let the headphone target frequency response information be Tg(f), the speaker path transfer function information be G(f), and the equalization parameter be EQe(f). The equalization parameter is then the ratio of the headphone target frequency response information to the speaker path transfer function information, as given by formula (1):

EQe(f) = Tg(f) / G(f)    (1)
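A minimal numerical sketch of formula (1) is given below, assuming the target response and the measured speaker path response are available as magnitude values on a common frequency grid; all values and names are illustrative only.

```python
import numpy as np

def equalization_parameter(target_response, speaker_path_response, eps=1e-8):
    """Formula (1): EQe(f) = Tg(f) / G(f) on a common frequency grid.

    target_response       -- Tg(f), headphone target frequency response (linear magnitude)
    speaker_path_response -- G(f), speaker path transfer function magnitude
    eps                   -- guards against division by near-zero magnitudes
    """
    tg = np.asarray(target_response, dtype=float)
    g = np.asarray(speaker_path_response, dtype=float)
    return tg / np.maximum(g, eps)

# Illustrative magnitudes on a shared frequency grid (values are made up).
freqs = np.array([100.0, 1000.0, 10000.0])
tg_f = np.array([1.2, 1.0, 0.8])     # target: slight bass emphasis
g_f = np.array([0.9, 1.0, 1.1])      # measured speaker path response
eq_e = equalization_parameter(tg_f, g_f)
```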
In practice, the headphone target frequency response information corresponding to the audio file to be played is determined, the speaker path transfer function information of the wireless headset is determined, and the equalization parameter is then determined based on the headphone target frequency response information and the speaker path transfer function information.
The audio processing method provided in this embodiment thus determines the equalization parameters corresponding to the audio file to be played. By performing the spectrum equalization operation on the audio file to be played, the sound quality of the audio file can be tuned toward its optimum, so that the audio file is played in the best achievable sound quality state.
Fig. 4 is a schematic flowchart illustrating a process of determining an equalization parameter corresponding to an audio file to be played according to another exemplary embodiment of the present application. The embodiment shown in fig. 4 is extended based on the embodiment shown in fig. 3, and the differences between the embodiment shown in fig. 4 and the embodiment shown in fig. 3 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 4, in the audio processing method provided in the embodiment of the present application, the step of determining the target frequency response information of the earphone corresponding to the audio file to be played includes the following steps.
Step S3111, inputting the audio file to be played into the target frequency response model to determine the target frequency response information of the earphone.
Illustratively, the target frequency response model mentioned in step S3111 is a deep learning network model obtained through training. Specifically, an initial network model is established; sample audio files to be played and the headphone target frequency response information corresponding to each sample are determined to generate training data; the initial network model is then trained on the generated training data, with its model parameters adjusted, until the target frequency response model is obtained.
In some embodiments, the input data of the target frequency response model is an audio file to be played, and the output data is the target frequency response information of the earphone corresponding to the audio file to be played.
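The application does not specify the network architecture; as one possible sketch, a small fully connected regressor could map a fixed-length spectral feature vector extracted from the audio file to the target response sampled at a set of frequency bands. All shapes, hyperparameters, and the use of random placeholder data below are assumptions for illustration.

```python
import torch
from torch import nn

# Assumed shapes: each training sample is a fixed-length feature vector
# extracted from an audio file (e.g. an averaged log-mel spectrum), and the
# label is the headphone target frequency response sampled at N_BANDS points.
N_FEATURES, N_BANDS = 128, 32

model = nn.Sequential(
    nn.Linear(N_FEATURES, 64),
    nn.ReLU(),
    nn.Linear(64, N_BANDS),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder data standing in for (audio sample, target response) pairs.
features = torch.randn(256, N_FEATURES)
target_responses = torch.randn(256, N_BANDS)

for epoch in range(10):
    optimizer.zero_grad()
    predicted = model(features)            # forward pass
    loss = loss_fn(predicted, target_responses)
    loss.backward()                        # adjust model parameters
    optimizer.step()
```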
In practice, the audio file to be played is first input into the target frequency response model to determine the headphone target frequency response information, the speaker path transfer function information of the wireless headset is then determined, and the equalization parameter is determined based on the headphone target frequency response information and the speaker path transfer function information.
The audio processing method provided in this embodiment thus determines the equalization parameters corresponding to the audio file to be played. In particular, because the headphone target frequency response information is obtained by inputting the audio file to be played into the target frequency response model, and the characteristics of that model can be customized through its training data, the method can satisfy personalized requirements for the headphone target frequency response information and can, to a certain extent, improve its accuracy.
To further satisfy the user's spontaneous preferences, the embodiment shown in Fig. 5 described below provides a scheme for user-defined equalization parameters.
Specifically, fig. 5 is a schematic flowchart illustrating a process of determining an equalization parameter corresponding to an audio file to be played according to another exemplary embodiment of the present application. The embodiment shown in fig. 5 is extended based on the embodiment shown in fig. 2, and the differences between the embodiment shown in fig. 5 and the embodiment shown in fig. 2 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 5, in the audio processing method provided in the embodiment of the present application, the step of determining the equalization parameter corresponding to the audio file to be played includes the following steps.
Step S314, obtaining the user-defined equalization parameters of the user.
In one embodiment of the present application, the user-defined equalization parameters are generated by the user subjectively adjusting a virtual equalizer. Illustratively, the user adjusts the virtual equalizer in the wireless headset application according to personal preference, thereby generating the corresponding user-defined equalization parameters.
Step S315, determining the equalization parameter corresponding to the audio file to be played based on the user-defined equalization parameter.
In an embodiment of the present application, after obtaining the user-defined equalization parameter of the user, the electronic device sends the user-defined equalization parameter to the local wireless headset, and the wireless headset determines the user-defined adjusted equalization parameter based on the user-defined equalization parameter and the equalization parameter before user-defined adjustment (i.e., the equalization parameter corresponding to the audio file to be played mentioned in step S315).
Illustratively, let the user-defined equalization parameter be EQa(f), the headphone target frequency response information be Tg(f), the equalization parameter before user-defined adjustment be EQe(f), and the user-defined adjusted equalization parameter be EQ'e(f). Then:

EQa(f)·EQe(f)·G(f) = EQ'e(f)·G(f) = EQa(f)·Tg(f)    (2)

As can be seen from formula (2), when the user has not adjusted the virtual equalizer, the equalization parameter is

EQe(f) = Tg(f) / G(f)

and the headphone target frequency response information is Tg(f). After the user adjusts the virtual equalizer, the adjusted equalization parameter becomes EQ'e(f) = EQa(f)·EQe(f), and the adjusted headphone target frequency response information becomes Tg'(f) = EQa(f)·Tg(f), where Tg'(f) can be regarded as the headphone target frequency response information Tg(f) after user-defined adjustment.
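A minimal sketch of the user-defined adjustment in formula (2) is shown below, assuming the equalization parameters are represented as magnitude values on a common frequency grid; the numbers and names are illustrative only.

```python
import numpy as np

def apply_custom_equalization(eq_e, eq_a):
    """Formula (2): EQ'e(f) = EQa(f) * EQe(f).

    eq_e -- base equalization parameter, EQe(f) = Tg(f) / G(f)
    eq_a -- user-defined equalization parameter from the virtual equalizer
    Returns the user-adjusted equalization parameter EQ'e(f); the implied
    adjusted target response is Tg'(f) = EQa(f) * Tg(f).
    """
    return np.asarray(eq_a, dtype=float) * np.asarray(eq_e, dtype=float)

# Illustrative values on a shared frequency grid.
eq_e = np.array([1.33, 1.0, 0.73])          # base EQ from formula (1)
eq_a = np.array([1.12, 1.0, 0.94])          # user's custom adjustment
eq_e_adjusted = apply_custom_equalization(eq_e, eq_a)
```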
The audio processing method provided in this embodiment thus determines the equalization parameters corresponding to the audio file to be played by acquiring the user-defined equalization parameters and determining the final equalization parameters from them. In particular, because users can customize the equalization parameters, this embodiment accommodates the user's spontaneous preferences and further improves the listening experience.
Fig. 6 is a flowchart illustrating a process of determining control instructions for an audio file based on a wireless headset and/or an electronic device connected to the wireless headset according to an exemplary embodiment of the present application. The embodiment shown in fig. 6 is extended based on the embodiment shown in fig. 1, and the differences between the embodiment shown in fig. 6 and the embodiment shown in fig. 1 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 6, in the audio processing method provided in the embodiment of the present application, the control instruction includes an audio switching instruction, and the step of determining the control instruction for the audio file based on the wireless headset and/or the electronic device connected to the wireless headset includes at least one of the following steps.
In step S11, an audio switching instruction is determined based on the motion manipulation information of the user for the wireless headset.
Illustratively, the motion manipulation information includes gesture manipulation information, i.e., the user can manipulate the wireless headset through a gesture to switch audio files. For example, within the range perceivable by the wireless headset, the user switches to the previous audio file by making an upward gesture and to the next audio file by making a downward gesture. The wireless headset then locally responds to the switching instruction, obtains the headphone target frequency response information corresponding to the audio file being switched to, and determines and adjusts the equalization parameters according to that information.
Illustratively, the motion manipulation information includes key manipulation information, i.e., the user may trigger an audio switching instruction by pressing a key on the wireless headset; for example, pressing an "up" key on the wireless headset switches to the previous audio file, and pressing a "down" key switches to the next audio file.
In step S12, an audio switching instruction is determined based on instruction information of the user for the electronic device.
Illustratively, the user may switch audio by controlling the wireless headset through the electronic device in a motion-operated or voice-operated manner. For example, the user may switch to the previous audio file by saying "up" to the electronic device and to the next audio file by saying "down". As another example, the user switches to the previous audio file by pressing the "up" key in the interface of the electronic device, and to the next audio file by pressing the "down" key.
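As an illustrative sketch of steps S11 and S12, a simple lookup could map manipulation events (gestures, keys, or voice commands) to switching instructions; the event and instruction names below are hypothetical and not taken from the application.

```python
from typing import Optional

# Hypothetical event names; the application does not define a concrete event format.
SWITCH_MAP = {
    "gesture_up": "switch_to_previous",
    "gesture_down": "switch_to_next",
    "key_up": "switch_to_previous",
    "key_down": "switch_to_next",
    "voice_up": "switch_to_previous",
    "voice_down": "switch_to_next",
}

def determine_audio_switch_instruction(event: str) -> Optional[str]:
    """Map a user manipulation event (gesture, key, or voice) to an audio
    switching instruction, as in steps S11 and S12; returns None for events
    that do not correspond to a switching operation."""
    return SWITCH_MAP.get(event)

# Example: an upward gesture perceived by the headset.
instruction = determine_audio_switch_instruction("gesture_up")  # "switch_to_previous"
```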
In the audio processing method provided in this embodiment, the control instruction for the audio file is determined based on the wireless headset and/or the electronic device connected to it, by deriving the audio switching instruction from the user's motion manipulation information for the wireless headset and/or from the user's instruction information for the electronic device. This enriches the ways in which the wireless headset can be controlled and further improves the user experience.
Fig. 7 is a flowchart illustrating an audio processing method according to another exemplary embodiment of the present application. Specifically, the audio processing method provided by the embodiment of the present application is applied to an electronic device connected to a wireless headset, where the wireless headset includes an audio module, and the audio module stores an audio file.
As shown in fig. 7, an audio processing method provided in an embodiment of the present application includes the following steps.
In step S40, a control instruction for the audio file is generated based on the user operation information. The control instructions are used for controlling the wireless headset.
Illustratively, a user issues a control instruction for an audio file by manipulating an electronic device (such as a mobile phone or a tablet computer) connected to the wireless headset, where the control instruction may include a switching instruction, a pause instruction, a play instruction, a volume adjustment instruction, and the like of the audio file.
And step S50, sending the control command to the wireless earphone so as to control the wireless earphone by using the control command.
Illustratively, the electronic device sends the control instruction to the wireless headset, and controls switching, pausing, playing, volume adjusting and the like of the audio file of the wireless headset based on the control instruction.
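As a sketch of step S50, the electronic device might serialize the control instruction and hand it to whatever link object manages the connection to the headset; the JSON wire format, field names, and `transport` interface below are assumptions, since the application does not specify a transport protocol.

```python
import json

def build_control_instruction(action: str, **params) -> bytes:
    """Serialize a control instruction (e.g. switch, pause, play, volume
    adjustment) as a JSON payload; the wire format is purely illustrative."""
    return json.dumps({"action": action, **params}).encode("utf-8")

def send_to_headset(transport, payload: bytes) -> None:
    """Send the encoded instruction over an already-established link.
    `transport` is any object exposing a write() method (for example a
    wrapper around a BLE or socket connection supplied by the app)."""
    transport.write(payload)

# Example: request a volume increase of 10%.
message = build_control_instruction("volume_adjust", delta=0.1)
# send_to_headset(my_link, message)   # `my_link` would be supplied by the app
```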
In the practical application process, a control instruction for the audio file is generated based on the user operation information, wherein the control instruction is used for controlling the wireless earphone, and then the control instruction is sent to the wireless earphone so as to control the wireless earphone by using the control instruction.
In the audio processing method provided in this embodiment, a control instruction for the audio file is generated based on the user operation information and sent to the wireless headset, which is then controlled by that instruction, improving the sound quality experience of the wireless headset. Because this method is applied to the electronic device connected to the wireless headset, the sound quality degradation caused by processes such as wireless transmission is avoided while the wireless headset retains its other functions, allowing the user to experience high-quality music.
Fig. 8 is a schematic structural diagram of a wireless headset according to an exemplary embodiment of the present application. As shown in fig. 8, the wireless headset provided in the embodiment of the present application includes an audio module 100, a determination module 200, and a control module 300. The audio module 100 stores therein an audio file. The determination module 200 is configured to determine a control instruction for an audio file based on a wireless headset and/or an electronic device connected to the wireless headset. The control module 300 is used for controlling the wireless headset based on the control instruction.
The wireless headset provided in this embodiment comprises an audio module in which audio files are stored. Because the audio files are burned directly into the wireless headset, the distortion introduced by wireless transmission (for example, Bluetooth transmission) is avoided, and the sound quality experience of the wireless headset is improved.
Fig. 9 is a schematic structural diagram of a wireless headset according to another exemplary embodiment of the present application. The embodiment shown in fig. 9 is extended based on the embodiment shown in fig. 8, and the differences between the embodiment shown in fig. 9 and the embodiment shown in fig. 8 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 9, in the present embodiment, the audio module 100 includes a headphone local equalizer 110. The local equalizer 110 is configured to determine an equalization parameter corresponding to the audio file to be played, and perform a spectrum equalization operation on the audio file to be played based on the equalization parameter. The audio file to be played is determined based on the audio file stored in the audio module.
It should be understood that the operations and functions of the determining module 200, the control module 300 and the local equalizer 110 of the wireless headset provided in fig. 8 and 9 may refer to the audio processing method provided in fig. 1 to 6, and are not described herein again to avoid repetition.
Fig. 10 is a schematic structural diagram of a wireless headset according to another exemplary embodiment of the present application. The embodiment shown in fig. 10 is extended based on the embodiment shown in fig. 8, and the differences between the embodiment shown in fig. 10 and the embodiment shown in fig. 8 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 10, the wireless headset provided in the embodiment of the present application further includes an expansion interface 400 connected to the audio module 100. The extension interface 400 is used to write a new audio file to the audio module 100.
The wireless headset provided in this embodiment can write new audio files into the audio module by means of the expansion interface, which enriches the ways in which audio files can be loaded onto the wireless headset. Illustratively, the expansion interface may be a USB Type-C interface, a Lightning interface, or the like.
Next, a computer apparatus according to an embodiment of the present application is described with reference to fig. 11. Fig. 11 is a schematic structural diagram of a computer device according to an exemplary embodiment of the present application.
As shown in fig. 11, a computer device 500 provided by embodiments of the present application includes one or more processors 510 and memory 520.
Processor 510 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in computer device 500 to perform desired functions.
Memory 520 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by processor 510 to implement the audio processing methods of the various embodiments of the present application described above and/or other desired functions. Various contents such as audio files may also be stored in the computer-readable storage medium.
In one example, computer device 500 may further include: an input device 530 and an output device 540, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device 530 may include, for example, an audio switch key or the like.
The output device 540 may output various information including an audio file after a spectrum equalization operation, etc. to the outside. The output device 540 may include, for example, a display, a communication network, speakers, a remote output device connected thereto, and so forth.
Of course, for simplicity, only some of the components of the computer device 500 relevant to the present application are shown in fig. 11, omitting components such as buses, input/output interfaces, and the like. In addition, computer device 500 may include any other suitable components depending on the particular application.
Illustratively, the computer device 500 may be at least one of a speaker, a stylus, and a hearing aid.
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the audio processing method according to various embodiments of the present application described in the above-mentioned "exemplary methods" section of this specification.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, the present application may also be a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, cause the processor to perform the steps in the audio processing method according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are only given as illustrative examples and are not intended to require or imply that the connections, arrangements, configurations, etc. must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. An audio processing method applied to a wireless headset comprising an audio module, wherein the audio module stores audio files, the method comprising:
determining a control instruction for the audio file based on a wireless headset and/or an electronic device connected with the wireless headset, wherein the control instruction is used for controlling the wireless headset;
controlling the wireless headset based on the control instruction.
2. The audio processing method of claim 1, wherein the audio module comprises a headphone local equalizer, the method further comprising, prior to the controlling of the wireless headset based on the control instruction:
determining an audio file to be played based on the control instruction and the audio file stored in the audio module;
wherein the controlling the wireless headset based on the control instruction comprises:
controlling, based on the control instruction, the headphone local equalizer to determine an equalization parameter corresponding to the audio file to be played, so that the headphone local equalizer performs a spectrum equalization operation on the audio file to be played based on the equalization parameter, and controlling the wireless headset to play the audio file to be played after the spectrum equalization operation.
3. The audio processing method according to claim 2, wherein the determining the equalization parameter corresponding to the audio file to be played comprises:
determining headphone target frequency response information corresponding to the audio file to be played;
determining speaker path transfer function information of the wireless headset;
and determining the equalization parameter based on the headphone target frequency response information and the speaker path transfer function information.
4. The audio processing method according to claim 3, wherein the determining of the headphone target frequency response information corresponding to the audio file to be played comprises:
inputting the audio file to be played into a target frequency response model to determine the headphone target frequency response information.
5. The audio processing method according to claim 2, wherein the determining the equalization parameter corresponding to the audio file to be played comprises:
acquiring a user-defined equalization parameter of a user;
and determining the equalization parameter corresponding to the audio file to be played based on the user-defined equalization parameter.
6. The audio processing method according to any of claims 1 to 5, wherein the control instruction comprises an audio switching instruction, and wherein the determining the control instruction for the audio file based on the wireless headset and/or an electronic device connected with the wireless headset comprises at least one of:
determining the audio switching instruction based on user motion manipulation information for the wireless headset;
determining the audio switching instruction based on instruction information of the user for the electronic device.
7. An audio processing method applied to an electronic device connected with a wireless headset, wherein the wireless headset comprises an audio module, and the audio module stores an audio file, the method comprising:
generating a control instruction for the audio file based on user operation information, wherein the control instruction is used for controlling the wireless headset;
and sending the control instruction to the wireless earphone so as to control the wireless earphone by using the control instruction.
8. A wireless headset, comprising:
the audio module stores audio files;
a determining module, configured to determine a control instruction for the audio file based on the wireless headset and/or an electronic device connected to the wireless headset, where the control instruction is used to control the wireless headset;
and the control module is used for controlling the wireless earphone based on the control instruction.
9. The wireless headset of claim 8, wherein the audio module comprises a local headset equalizer, and the local headset equalizer is configured to determine an equalization parameter corresponding to an audio file to be played, and perform a spectrum equalization operation on the audio file to be played based on the equalization parameter, wherein the audio file to be played is determined based on the audio file stored in the audio module.
10. A wireless headset according to claim 8 or 9, further comprising an extension interface connected to the audio module, the extension interface being adapted to write a new audio file to the audio module.
CN202110945332.7A 2021-08-17 2021-08-17 Audio processing method and wireless earphone Withdrawn CN113613142A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110945332.7A CN113613142A (en) 2021-08-17 2021-08-17 Audio processing method and wireless earphone

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110945332.7A CN113613142A (en) 2021-08-17 2021-08-17 Audio processing method and wireless earphone

Publications (1)

Publication Number Publication Date
CN113613142A true CN113613142A (en) 2021-11-05

Family

ID=78341052

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110945332.7A Withdrawn CN113613142A (en) 2021-08-17 2021-08-17 Audio processing method and wireless earphone

Country Status (1)

Country Link
CN (1) CN113613142A (en)

Similar Documents

Publication Publication Date Title
CN205232422U (en) Headset
CN107071648B (en) Sound playing adjusting system, device and method
CN1972524B (en) Music file replaying method and apparatus
US10893352B2 (en) Programmable interactive stereo headphones with tap functionality and network connectivity
US9525392B2 (en) System and method for dynamically adapting playback device volume on an electronic device
KR101251626B1 (en) Sound compensation service providing method for characteristics of sound system using smart device
EP3038255B1 (en) An intelligent volume control interface
CN109982231B (en) Information processing method, device and storage medium
KR20200085226A (en) Customized audio processing based on user-specific and hardware-specific audio information
US9847767B2 (en) Electronic device capable of adjusting an equalizer according to physiological condition of hearing and adjustment method thereof
US20190007765A1 (en) User customizable headphone system
US20230171539A1 (en) Audio output device, method for eliminating sound leakage, and storage medium (as amended)
TWM526238U (en) Electronic device capable of adjusting settings of equalizer according to user's age and audio playing device thereof
CN113596677A (en) Parameter determination method, wireless earphone, storage medium and electronic equipment
CN113518284A (en) Audio processing method, wireless headset and computer readable storage medium
KR101520799B1 (en) Earphone apparatus capable of outputting sound source optimized about hearing character of an individual
US10433081B2 (en) Consumer electronics device adapted for hearing loss compensation
CN114420158A (en) Model training method and device, and target frequency response information determining method and device
CN113613142A (en) Audio processing method and wireless earphone
WO2023051083A1 (en) Audio control method, electronic device, and audio playback system
KR20150049915A (en) Earphone apparatus having hearing character protecting function of an individual
CN111190568A (en) Volume adjusting method and device
CN114257910A (en) Audio processing method and device, computer readable storage medium and electronic equipment
TWI630828B (en) Personalized system of smart headphone device for user-oriented conversation and use method thereof
US20140355778A1 (en) Headphone device and control method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
Application publication date: 20211105