CN111556406A - Audio processing method, audio processing device and earphone - Google Patents

Info

Publication number
CN111556406A
CN111556406A (application number CN202010327239.5A)
Authority
CN
China
Prior art keywords: audio, earphone, headset, playing, user
Prior art date
Legal status (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed): Granted
Application number
CN202010327239.5A
Other languages: Chinese (zh)
Other versions: CN111556406B
Inventor
韦伟才
邓海蛟
马健莹
Current Assignee: Shenzhen Weimai Technology Co., Ltd.
Original Assignee: Shenzhen Weimai Technology Co., Ltd.
Application filed by Shenzhen Weimai Technology Co., Ltd.
Priority to CN202010327239.5A
Publication of CN111556406A
Application granted
Publication of CN111556406B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00: Circuits for transducers, loudspeakers or microphones
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00
    • G10L 25/48: Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00, specially adapted for particular use
    • G10L 25/51: Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00, specially adapted for particular use, for comparison or discrimination
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 1/00: Details of transducers, loudspeakers or microphones
    • H04R 1/10: Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R 1/1091: Details not provided for in groups H04R 1/1008 - H04R 1/1083

Landscapes

  • Engineering & Computer Science
  • Signal Processing
  • Physics & Mathematics
  • Acoustics & Sound
  • Computational Linguistics
  • Health & Medical Sciences
  • Audiology, Speech & Language Pathology
  • Human Computer Interaction
  • Multimedia
  • Telephone Function
  • Headphones And Earphones

Abstract

The application is applicable to the technical field of earphones and provides an audio processing method, an audio processing apparatus, an earphone, and a computer-readable storage medium. The audio processing method comprises the following steps: when a preset instruction is detected, determining the audio segment indicated by the preset instruction in a target audio; while the audio segment is played through an earphone, determining the segmentation nodes in the audio segment and, whenever playback reaches any one of the segmentation nodes, pausing playback of the audio segment; after playback of the audio segment is paused, collecting the user's voice through a microphone of the earphone and playing it through a loudspeaker of the earphone; and after the user's voice has finished playing, resuming playback of the audio segment from the corresponding segmentation node. This method addresses the problem that existing earphones have a single function and struggle to meet users' needs in different situations.

Description

Audio processing method, audio processing device and earphone
Technical Field
The present application belongs to the field of earphone technology, and in particular, to an audio processing method, an audio processing apparatus, an earphone, and a computer-readable storage medium.
Background
Earphones are audio playback devices that people use frequently in daily life. In everyday use, however, their function is often limited: they can only play the audio sent by the connected terminal, which makes it difficult to meet users' needs in different situations.
Disclosure of Invention
The embodiments of the application provide an audio processing method, an audio processing apparatus, an earphone, and a computer-readable storage medium, which can solve the problem that existing earphones have a single function and are difficult to adapt to users' needs in different situations.
In a first aspect, an embodiment of the present application provides an audio processing method, including:
when a preset instruction is detected, determining an audio segment indicated by the preset instruction in a target audio;
while the audio segment is played through an earphone, determining the segmentation nodes in the audio segment, and pausing playback of the audio segment whenever playback reaches any one of the segmentation nodes;
after playback of the audio segment is paused, collecting the user's voice through a microphone of the earphone and playing it through a loudspeaker of the earphone;
and after the user's voice has finished playing, resuming playback of the audio segment from the corresponding segmentation node.
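The four steps of the first aspect amount to a play–pause–record–resume loop. The following toy simulation sketches that control flow only; the function names and the action-log representation are illustrative, not taken from the patent:

```python
def run_segment(segment_length, nodes):
    """Toy walk-through of the claimed method's control flow.

    `segment_length` stands in for the audio segment's duration and `nodes`
    for the segmentation nodes inside it; both are illustrative.
    Returns a log of the actions the headset would take.
    """
    log, position = [], 0
    for node in nodes:
        log.append(("play", position, node))    # play until the node, then pause
        log.append(("record_and_play_voice",))  # mic capture + speaker playback
        position = node                         # resume from the same node
    log.append(("play", position, segment_length))  # finish the segment
    return log

actions = run_segment(segment_length=8, nodes=[3, 6])
```

With two segmentation nodes, the log alternates between a play interval and a record-and-playback step, ending with the remainder of the segment.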
In a second aspect, an embodiment of the present application provides an audio processing apparatus, including:
a first determining module, configured to determine, when a preset instruction is detected, the audio segment indicated by the preset instruction in a target audio;
a second determining module, configured to determine the segmentation nodes in the audio segment while it is played through an earphone, and to pause playback whenever playback reaches any segmentation node;
a processing module, configured to collect the user's voice through a microphone of the earphone and play it through a loudspeaker of the earphone after playback of the audio segment is paused;
and a playing module, configured to resume playback of the audio segment from the corresponding segmentation node after the user's voice has finished playing.
In a third aspect, an embodiment of the present application provides a headset, including a memory, a processor, a display, and a computer program stored in the memory and executable on the processor, wherein the processor implements the audio processing method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the audio processing method as described in the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on a headset, causes the headset to perform the audio processing method described above in the first aspect.
Compared with the prior art, the embodiments of the application have the following advantages. When a preset instruction is detected, the audio segment indicated by the instruction can be determined in the target audio. While the segment is played through the earphone, the segmentation nodes within it are determined, and playback is paused whenever any segmentation node is reached. After each pause, the user's voice is collected through the earphone's microphone and played through its loudspeaker. This helps the user listen to the content before each segmentation node in stages and give feedback on each stage of the target audio, for example by reading after it or repeating it, and lets the user hear his or her own voice through the earphone for comparison with the original sound of the target audio. Through the embodiments of the application, a user can follow, read, and study specific audio stage by stage through the earphone; the earphone's functions are expanded, it can meet the needs of a variety of application scenarios, and it provides a better user experience.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an audio processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of step S103 according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an audio processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an earphone according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Specifically, fig. 1 shows a flowchart of an audio processing method provided by an embodiment of the present application, which may be applied to a headset.
The headset may be of various types: for example, a wired headset, a general Bluetooth headset, or a True Wireless Stereo (TWS) Bluetooth headset.
The specific structure of the earphone can be determined according to actual needs. In some examples, the headset may include a left earphone and/or a right earphone; when it includes both, they may be used separately or simultaneously, and the user may adjust the usage mode as required. Illustratively, the left earphone may include one or more of a left-earphone housing, left-earphone keys (e.g., mechanical keys or touch keys), a left-earphone microphone, a left-earphone audio electronics unit, a left-earphone speaker, a left-earphone antenna, and a left-earphone battery. The left-earphone audio electronics unit may include an audio processing circuit configured to implement one or more of audio encoding and decoding, audio wireless transceiving, and other audio processing functions. One or more left-earphone keys may be arranged on the surface of the left earphone so that the user can reach them easily, while components such as the microphone, the audio electronics unit, the speaker, the antenna, and the battery may be mounted inside the housing. The structure of the right earphone may be the same as or different from that of the left earphone; for example, in some cases the right earphone may likewise include one or more of a right-earphone housing, right-earphone keys (e.g., mechanical keys or touch keys), a right-earphone microphone, a right-earphone audio electronics unit, a right-earphone speaker, a right-earphone antenna, and a right-earphone battery.
The right-earphone audio electronics unit may likewise include an audio processing circuit configured to implement one or more of audio encoding and decoding, audio wireless transceiving, and other audio processing functions. If the headset includes both a left earphone and a right earphone, in some examples the two earphones may exchange information through a specific transmission method (e.g., a wired connection, or a wireless connection such as Bluetooth).
The specific structure of the earphone is not limited herein.
The audio processing method comprises the following steps:
step S101, when a preset instruction is detected, determining an audio segment indicated by the preset instruction in a target audio.
In this embodiment of the application, the preset instruction may instruct the earphone to play the audio segment in a preset form. For example, it may be an instruction for the headset to perform an audio repeating operation, i.e., to repeatedly play specific audio acquired by the earphone. In some embodiments, the specific audio may be the user's voice, at least a portion of the captured audio segment, or the like.
In some examples, the preset instruction may be generated after receiving a specific operation performed by the user on a preset application, where the preset application may run on a mobile terminal coupled to the headset. For example, the preset instruction may be generated after receiving a touch operation on a specific virtual key on the display interface of the preset application. Of course, the specific operation may also be an operation other than a touch operation, determined according to the actual application scenario; for example, it may be a press of a specific key on the headset, or a press of a physical key on the mobile terminal or on the earphone storage box coupled to the headset.
In the embodiment of the present application, the audio segment indicated by the preset instruction can be determined in the target audio in multiple ways. For example, the audio segment may be determined according to the user's operation of specific keys on the headset; alternatively, it may be determined from specific information sent to the headset by an earphone storage box or a mobile terminal coupled to the headset.
In some embodiments, the determining, when the preset instruction is detected, the audio segment indicated by the preset instruction in the target audio includes:
when a preset instruction is detected, acquiring second operation information and third operation information, wherein the second operation information is obtained according to the operation of a user on a second key on the earphone, and the third operation information is obtained according to the operation of the user on a third key on the earphone;
and taking the first node indicated by the second operation information as a start node of the audio segment, and taking the second node indicated by the third operation information as an end node of the audio segment.
In the embodiment of the application, the earphone may include a second key and a third key, each of which may be a physical key or a virtual touch key. Correspondingly, the second operation information may be information corresponding to a touch or press of the second key by the user, and the third operation information may be information corresponding to a touch or press of the third key. In some cases, the first node indicated by the second operation information may be the node of the target audio currently selected when the user operates the second key, and the second node indicated by the third operation information may be the node currently selected when the user operates the third key. In some examples, the user may choose the currently selected node through a preset application interface on a mobile terminal coupled to the headset, which then sends the node information to the headset. Alternatively, during playback of the target audio, when an operation on the second key is detected, the node currently being played is taken as the corresponding first node, and when an operation on the third key is detected, the node currently being played is taken as the corresponding second node.
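Assuming each key press reports the playback position at which it occurred, the boundary logic of this embodiment can be sketched as follows (the event representation and all names are hypothetical, for illustration only):

```python
def segment_from_key_events(events):
    """Derive the audio segment's start and end nodes from key events.

    `events` is a list of (key, playback_position) pairs recorded while the
    target audio plays: the second key marks the first node (start) and the
    third key marks the second node (end). Purely illustrative.
    """
    start = end = None
    for key, position in events:
        if key == "second_key":
            start = position   # first node: start of the audio segment
        elif key == "third_key":
            end = position     # second node: end of the audio segment
    if start is None or end is None or end <= start:
        raise ValueError("audio segment boundaries not fully specified")
    return start, end

start, end = segment_from_key_events([("second_key", 12.5), ("third_key", 47.0)])
```

The guard clause reflects the natural constraint that the end node must come after the start node.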
In some embodiments, the determining, when the preset instruction is detected, the audio segment indicated by the preset instruction in the target audio includes:
when a preset instruction is detected, determining an audio clip corresponding to the preset instruction in target audio according to a second instruction sent by an earphone storage box coupled with the earphone or the mobile terminal.
In this embodiment, the user may perform a specific operation on an earphone storage box or a mobile terminal coupled to the earphone, thereby instructing the storage box or the terminal to generate the second instruction. The earphone storage box or the mobile terminal can be coupled to the earphone in various ways; for example, either may be coupled to the earphone through a Bluetooth connection, being paired with the earphone to establish the coupling.
For example, the earphone storage box may include a fourth key and a fifth key. The storage box may then determine the audio segment by taking the third node, indicated by the user's operation of the fourth key, as the segment's start node and the fourth node, indicated by the user's operation of the fifth key, as its end node, and may generate a second instruction containing the playback position of the audio segment within the target audio. In this embodiment, the earphone storage box may include a storage-box circuit that detects key operations, establishes a Bluetooth connection with a terminal such as the earphone, and exchanges information with that terminal.
In the embodiment of the present application, for example, the mobile terminal coupled with the headset may be a mobile phone, a tablet computer, a notebook computer, or the like. A preset application, for example, a specific audio playing application, may be run on the mobile terminal, and at this time, the user may determine, through the preset application interface, the playing position of the audio clip in the target audio, so as to generate the second instruction in the mobile terminal.
Step S102, while the audio segment is played through the earphone, determining the segmentation nodes in the audio segment, and pausing playback whenever playback reaches any segmentation node.
In the embodiment of the present application, while the audio segment is playing, playback may be paused at each of the segmentation nodes. The segmentation nodes can be determined in various ways. In some examples, each segmentation node may be determined by a preset circuit chip in the headset running a preset intelligent sentence-breaking algorithm. For example, the preset circuit chip may include a Digital Signal Processor (DSP) that executes the algorithm to determine the segmentation nodes. The choice of algorithm depends on the requirements of the actual application scenario: it may be a Voice Activity Detection (VAD) algorithm, or the segmentation nodes may be derived from pause points detected through changes in the playback volume of the target audio, and so on.
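The patent leaves the sentence-breaking algorithm open. As one hedged illustration, a minimal energy-based detector could mark a segmentation node wherever the signal stays quiet for several consecutive frames; the frame size, threshold, and silence length below are arbitrary choices, not values from the patent:

```python
def find_segmentation_nodes(samples, frame=160, threshold=0.01, min_quiet_frames=3):
    """Return sample indices where speech resumes after a sufficiently long pause.

    A crude stand-in for the 'preset intelligent sentence-breaking algorithm';
    a production VAD would be considerably more robust.
    """
    nodes, quiet_run = [], 0
    for i in range(0, len(samples) - frame + 1, frame):
        energy = sum(s * s for s in samples[i:i + frame]) / frame  # mean power
        if energy < threshold:
            quiet_run += 1
        else:
            if quiet_run >= min_quiet_frames:
                nodes.append(i)  # a long pause just ended: mark a node here
            quiet_run = 0
    return nodes

# speech, then three frames of silence, then speech again
signal = [0.5] * 320 + [0.0] * 480 + [0.5] * 320
nodes = find_segmentation_nodes(signal)
```

On this synthetic signal the detector reports one node, at the sample where speech resumes after the silent stretch.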
Furthermore, in some embodiments, a mobile terminal coupled to the headset may transmit the segmentation-node information to the headset, enabling the headset to determine the segmentation nodes.
In the embodiment of the application, by determining the segmentation nodes in the audio segment and pausing playback whenever any segmentation node is reached, the user can listen to the content before each segmentation node separately and then give feedback on it, for example by reading it aloud afterwards.
Step S103, after playback of the audio segment is paused, collecting the user's voice through a microphone of the earphone and playing it through a loudspeaker of the earphone.
In the embodiment of the application, after playback of the audio segment is paused, the user's voice can be collected through the earphone and played back. In scenarios such as language learning, the user can then read aloud the content played before the pause, hear the result through the earphone, and promptly assess his or her own pronunciation.
In some embodiments, after the user voice is collected by the microphone of the earphone, the user voice may be subjected to audio processing such as noise reduction, gain adjustment, and reverberation, and then output to the corresponding speaker for playing, so as to improve the playing effect.
In some embodiments, the headset may include a left earphone and a right earphone; in this case, collecting the user's voice through a microphone of the headset and playing it through a speaker of the headset may include:
collecting a first user voice through the microphone of the left earphone and playing it through the speaker of the left earphone;
and collecting a second user voice through the microphone of the right earphone and playing it through the speaker of the right earphone.
In this case, the user's voice played by the left and right earphones respectively can form a stereo effect, improving the playback quality and the user experience.
In some embodiments, the headset is a true wireless stereo TWS bluetooth headset.
In the embodiment of the application, the earphone is a True Wireless Stereo (TWS) Bluetooth headset, so information can be transmitted over Bluetooth connections, realizing a truly wireless structure. A TWS Bluetooth headset may include a left earphone and a right earphone, which can be used together or separately; the two earphones can also exchange information over a Bluetooth connection, enabling mutual control between them.
Implementing the audio processing method of the embodiment on a TWS Bluetooth headset therefore provides the user with a more convenient and unencumbered environment: the user can comfortably listen to audio and interact by voice through the headset, for example by speaking and then learning, via the TWS Bluetooth headset, whether the spoken content meets his or her own expectations, such as whether the corresponding playback content was imitated well. Meanwhile, the left and right earphones of the TWS Bluetooth headset can capture the user's voice for the left and right channels respectively, so that the user hears a stereo effect when the voice is played.
In some embodiments, collecting the user's voice through a microphone of the headset and playing it through a speaker of the headset after playback of the audio segment is paused comprises:
step S201, after playback of the audio segment is paused, collecting the user's voice in real time through a microphone of the TWS Bluetooth headset;
step S202, transmitting the collected user voice to a preset chip, and acquiring the user voice processed by the preset chip;
step S203, transmitting the processed user voice to a loudspeaker of the TWS Bluetooth headset;
and step S204, playing the processed user voice in real time through a loudspeaker of the TWS Bluetooth headset.
In this embodiment of the application, the preset chip may be located on the TWS Bluetooth headset, or on another terminal coupled to it, such as a mobile terminal or an earphone storage box. In the latter case, the headset can transmit the collected voice to the preset chip over a Bluetooth connection and retrieve the processed voice the same way. The audio processing that the preset chip applies to the user's voice can be configured according to the actual scenario and is not limited here; for example, the chip may perform noise reduction, reverberation, gain adjustment, and similar operations.
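Steps S201–S204 describe a capture → process → play pipeline. The toy per-frame version below uses only gain adjustment and clipping to stand in for the preset chip's processing; noise reduction and reverberation would be additional stages, and all parameters are illustrative assumptions:

```python
def process_frame(frame, gain=1.2, limit=1.0):
    """Stand-in for the preset chip: apply gain, then clip to the valid range."""
    return [max(-limit, min(limit, s * gain)) for s in frame]

def ear_return(mic_frames):
    """Steps S201-S204 as a loop: each captured frame is processed and
    immediately forwarded to the speaker (modelled as a list of played frames)."""
    speaker = []
    for frame in mic_frames:              # S201: real-time capture
        processed = process_frame(frame)  # S202: hand the frame to the chip
        speaker.append(processed)         # S203/S204: transmit and play at once
    return speaker

out = ear_return([[0.5, -0.9]])
```

Processing frame by frame, rather than waiting for the full utterance, is what keeps the perceived delay low enough for ear return.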
Compared with existing approaches, in which the user's voice is played back only after the user has finished speaking, in the embodiment of the application the voice is captured in real time through the microphone of the TWS Bluetooth headset and the processed voice is played in real time through its loudspeaker. The user can therefore hear what he or she is saying with minimal delay, which improves the efficiency of reviewing the spoken content and saves the user time.
In some embodiments, before the user's voice is collected in real time through the microphone of the TWS Bluetooth headset, the method further comprises:
setting the real-time ear-return function of the TWS Bluetooth headset according to first operation information, a received first indication instruction, a detected voice instruction, or a detected preset gesture. The first operation information is obtained from the user's operation of a first key on the TWS Bluetooth headset; the first indication instruction is sent by an earphone storage box or a mobile terminal coupled to the TWS Bluetooth headset. The real-time ear-return function indicates that, after playback of the audio segment is paused, the user's voice is collected in real time through the microphone of the TWS Bluetooth headset and the processed voice is played in real time through its loudspeaker.
In the embodiment of the application, the real-time ear-return function can be preset to indicate that, at a specific time, the microphone of the TWS Bluetooth headset collects the user's voice in real time and the processed voice is played in real time through the headset's loudspeaker. The function can be set according to the user's operation of the first key on the headset, through a voice instruction detected by a speech recognition module on the headset, or through a preset gesture detected by a gesture recognition module on the headset.
It should be noted that, in the embodiment of the present application, the first key may be the same as, or different from, the second key or the third key. If the first key is the same as the second or third key, the user operation corresponding to the first operation information may differ from that corresponding to the second or third operation information: for example, the first operation information may correspond to a long press, while the second or third operation information corresponds to a click. The first, second, and third keys can be arranged in various specific ways.
In addition, an earphone storage box or a mobile terminal coupled to the TWS Bluetooth headset can send the first indication instruction to the headset to instruct it to set the real-time ear-return function.
In some embodiments, the first indication instruction may be sent to either earphone of the TWS Bluetooth headset (i.e., the left earphone or the right earphone), and that earphone then forwards, via Bluetooth communication, indication information for setting the real-time ear-return function to the other earphone (i.e., the corresponding right or left earphone). In this way, the real-time ear-return functions of the left and right earphones can be turned on or off simultaneously.
In step S104, after the user voice finishes playing, the audio clip continues to be played from the corresponding segmentation node.
In the embodiment of the application, playing of the audio clip may be paused at each segmentation node in turn, the user voice collected through the microphone of the earphone is played through the loudspeaker of the earphone, and playing then continues until the audio clip is played completely. Specifically, an instruction indicating to continue playing the audio clip may be generated by a preset chip in the earphone, or such an instruction may be sent to the earphone by a mobile terminal coupled with the earphone, so that the earphone continues playing the audio clip.
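The overall stage-by-stage flow (play up to a segmentation node, pause, replay the user's voice, resume from the same node) can be sketched as below. This is a minimal simulation, not the patent's implementation: the audio is a list of frames, `capture_voice` stands in for the microphone, and `play` stands in for the loudspeaker path.

```python
def play_with_follow_reading(audio, split_nodes, capture_voice, play):
    """Play each stage of the clip; at every segmentation node, pause,
    collect and replay the user's voice, then resume from that node."""
    start = 0
    for node in split_nodes + [len(audio)]:
        play(audio[start:node])          # play up to the next node
        if node < len(audio):
            play(capture_voice())        # paused: replay the user voice
        start = node                     # continue from the same node

log = []
audio = list(range(10))                  # 10 simulated audio frames
play_with_follow_reading(audio, [4, 7], lambda: ["user-voice"], log.append)
```

The final stage (after the last segmentation node) plays to the end of the clip without a pause, matching step S104.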
In some embodiments, the method further includes: sending the user voice collected by the microphone to a mobile terminal coupled with the earphone for storage.
At this time, the user voice may be stored in the corresponding mobile terminal, so that the user can subsequently replay the stored user voice through the mobile terminal. In some embodiments, after the user voice is collected through the microphone of the earphone, the user voice may be compressed to obtain a compressed audio file, and the compressed audio file is sent to the mobile terminal coupled with the earphone for storage, thereby reducing the amount of data sent and improving transmission efficiency.
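The compress-before-send step can be illustrated with a short sketch. This is an assumption-laden example: zlib stands in for whatever audio codec the headset would actually use, and the "captured" PCM is simulated as silence.

```python
import zlib

def compress_voice(pcm_bytes):
    """Compress captured voice before the Bluetooth transfer so that
    less data is sent to the mobile terminal for storage."""
    return zlib.compress(pcm_bytes)

raw = bytes(1000)                        # 1000 bytes of simulated PCM
packet = compress_voice(raw)             # much smaller payload
restored = zlib.decompress(packet)       # receiver restores it exactly
```

A lossless codec (as here) restores the capture bit-for-bit; a real headset might instead use a lossy speech codec for a higher compression ratio.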
In some embodiments, a headset system may include an earphone and an earphone storage box coupled with the earphone. The earphone is configured to:
when a preset instruction is detected, determining an audio clip indicated by the preset instruction in target audio according to target indication information sent by the earphone storage box;
when the audio clip is played through an earphone, determining the segmentation nodes in the audio clip, and when the audio clip is played at any one of the segmentation nodes, pausing the playing of the audio clip;
after the audio clip is paused to be played, collecting user voice through a microphone of the earphone and playing the user voice through a loudspeaker of the earphone;
after the user voice playing is finished, continuing to play the audio clip from the corresponding segmentation node;
The earphone storage box is configured to:
and generating the target indication information according to fourth operation information and/or fifth operation information, and sending the target indication information to the earphone, wherein the target indication information may indicate a playing position of the audio clip in the target audio.
Specifically, fourth operation information may be acquired through the earphone storage box, and a third node corresponding to the fourth operation information is used as a start node of the audio clip, where the fourth operation information is obtained according to an operation of the user on a fourth key on the earphone storage box;
and fifth operation information may be acquired, and a fourth node corresponding to the fifth operation information is used as an end node of the audio clip, where the fifth operation information is obtained according to an operation of the user on a fifth key on the earphone storage box.
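Mapping the storage box's key events to clip boundaries can be sketched as below. This is a hedged illustration only: the event names, timestamps, and the `position_of` mapping are hypothetical, not part of the patent.

```python
def select_clip(events, position_of):
    """Derive the clip's start/end nodes from fourth/fifth-key events.
    `events` is a sequence of (key, timestamp) pairs; `position_of`
    maps a key-press timestamp to a playback position in the target
    audio. A later press of the same key overrides the earlier one."""
    start = end = None
    for key, timestamp in events:
        if key == "fourth":
            start = position_of(timestamp)   # third node: clip start
        elif key == "fifth":
            end = position_of(timestamp)     # fourth node: clip end
    return start, end

# Hypothetical mapping: 10 playback units per second of wall time.
clip = select_clip([("fourth", 1.0), ("fifth", 5.0)], lambda t: int(t * 10))
```

Letting a later press override an earlier one also mirrors the re-selection behavior described further below, where a second designated operation resets the start node.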
In some embodiments, the earphone storage box further includes a sixth key. Specifically, according to an operation of the user on the sixth key of the earphone storage box, a playing instruction indicating to play the target audio, or a pause instruction indicating to pause the target audio, may be generated. The playing instruction may be sent to the earphone as the preset instruction.
A specific example of a specific implementation of the embodiments of the present application is described below.
In some embodiments, in a preset application on a smartphone coupled with the earphone, a user may open a target audio and click a virtual key indicating to start playing, so that the preset application plays the target audio; at this time, audio information of the target audio may be sent from the smartphone to the earphone through Bluetooth communication and played through the earphone. Further, after the smartphone displays the playing interface of the target audio, the user may select a start node and an end node of the audio clip by dragging a progress bar in the playing interface. In addition, while the target audio is playing, the user may operate a second key on the earphone, or a fourth key on the earphone storage box, to select the start node of the audio clip; similarly, the user may operate a third key on the earphone, or a fifth key on the earphone storage box, to select the end node of the audio clip.
In addition, the user may set the real-time ear-return function through the preset application. Specifically, the user may select an option that enables the real-time ear-return function from the preset setting options of the preset application, for example an option such as "intelligent sentence break and follow-up reading". After the setting is completed, the preset application may generate a first indication instruction according to the setting and send it to the earphone, instructing the earphone to collect the user voice in real time through the microphone of the TWS Bluetooth headset after playing of the audio clip is paused, and to play the processed user voice in real time through the loudspeaker of the TWS Bluetooth headset.
Of course, other playing functions may also be set in the preset application. For example, the user may select an option that enables intelligent sentence break from the preset setting options of the preset application; at this time, the preset application may generate indication information indicating intelligent sentence break and send it to the earphone, to indicate that playing of the audio clip is paused whenever any segmentation node is reached, and that after the pause has lasted N seconds, the audio clip continues to be played from the corresponding segmentation node.
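The intelligent sentence break variant (pause N seconds at each segmentation node, then resume automatically, with no voice capture) can be sketched as follows. This is a minimal simulation under stated assumptions: `time.sleep` stands in for the earphone's N-second timer, and playback is simulated by appending frame slices to a list.

```python
import time

def intelligent_sentence_break(audio, split_nodes, play, pause_s=0.0):
    """Pause for `pause_s` seconds at every segmentation node, then
    resume from the same node; finish by playing the remainder."""
    start = 0
    for node in split_nodes:
        play(audio[start:node])   # play up to the segmentation node
        time.sleep(pause_s)       # pause for the configured N seconds
        start = node              # then continue from the same node
    play(audio[start:])           # play the rest of the clip

log = []
intelligent_sentence_break(list(range(6)), [2, 4], log.append)
```

Compared with the follow-reading mode, the only difference is the pause body: a timed wait here, versus capturing and replaying the user's voice there.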
In addition, the user may also re-select the audio clip. For example, after the start node of the audio clip indicated by the preset instruction is determined in the target audio, if a designated operation by the user (such as a double-click) on the second key of the earphone is detected, the audio clip is reset according to a third node corresponding to that operation, the reset start node of the audio clip being the third node.
It should be noted that the above-mentioned example is only one specific implementation of the embodiments of the present application, and is not meant to limit the present application.
In the embodiment of the application, when a preset instruction is detected, the audio clip indicated by the preset instruction may be determined in the target audio; when the audio clip is played through the earphone, the segmentation nodes in the audio clip are determined, and playing is paused whenever any segmentation node is reached. After each pause, user voice is collected through the microphone of the earphone and played through the loudspeaker of the earphone. This helps the user listen to the content before each segmentation node in stages and give feedback on the target audio of each stage, for example by reading after it or repeating it, while hearing his or her own voice through the earphone for comparison with the original sound of the target audio. Through the embodiment of the application, the user can follow and read specific audio stage by stage through the earphone, the functions of the earphone are expanded, the earphone can meet the requirements of various application scenarios, and a better use experience can be provided for the user.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 3 shows a block diagram of an audio processing apparatus provided in an embodiment of the present application, which corresponds to the above-described audio processing method in the above embodiment, and only shows portions related to the embodiment of the present application for convenience of description.
Referring to fig. 3, the audio processing apparatus 3 includes:
the first determining module 301 is configured to determine, when a preset instruction is detected, an audio segment indicated by the preset instruction in a target audio;
a second determining module 302, configured to determine a split node in the audio segment when the audio segment is played through a headphone, and pause playing the audio segment when the audio segment is played at any split node;
a processing module 303, configured to collect a user voice through a microphone of the headset after the audio clip is paused to be played, and play the user voice through a speaker of the headset;
a playing module 304, configured to continue to play the audio clip from the corresponding segmentation node after the user speech is played.
Optionally, the headset is a true wireless stereo TWS bluetooth headset.
Optionally, the processing module 303 specifically includes:
the acquisition unit is used for acquiring user voice in real time through a microphone of the TWS Bluetooth headset after the audio clip is paused to be played;
the first processing unit is used for transmitting the collected user voice to a preset chip and acquiring the user voice processed by the preset chip;
the transmission unit is used for transmitting the processed user voice to a loudspeaker of the TWS Bluetooth headset;
and the playing unit is used for playing the processed user voice in real time through a loudspeaker of the TWS Bluetooth headset.
Optionally, the audio processing apparatus 3 further includes:
The setting module is configured to set a real-time ear-return function of the TWS Bluetooth headset according to first operation information, a received first indication instruction, a detected voice instruction, or a detected preset gesture, where the first operation information is obtained according to an operation of the user on a first key of the TWS Bluetooth headset, the first indication instruction is sent by an earphone storage box or a mobile terminal coupled with the TWS Bluetooth headset, and the real-time ear-return function indicates that, after playing of the audio clip is paused, user voice is collected in real time through the microphone of the TWS Bluetooth headset and the processed user voice is played in real time through the loudspeaker of the TWS Bluetooth headset.
Optionally, the first determining module 301 specifically includes:
the acquisition unit, configured to acquire second operation information and third operation information when a preset instruction is detected, where the second operation information is obtained according to an operation of the user on a second key of the earphone, and the third operation information is obtained according to an operation of the user on a third key of the earphone;
and the second processing unit is used for taking the first node indicated by the second operation information as a starting node of the audio clip and taking the second node indicated by the third operation information as an ending node of the audio clip.
Optionally, the first determining module 301 is specifically configured to:
when a preset instruction is detected, determining an audio clip corresponding to the preset instruction in target audio according to a second instruction sent by an earphone storage box coupled with the earphone or the mobile terminal.
Optionally, the audio processing apparatus 3 further includes:
and the sending module is used for sending the user voice collected by the microphone to a mobile terminal coupled with the earphone for storage.
In the embodiment of the application, when a preset instruction is detected, the audio clip indicated by the preset instruction may be determined in the target audio; when the audio clip is played through the earphone, the segmentation nodes in the audio clip are determined, and playing is paused whenever any segmentation node is reached. After each pause, user voice is collected through the microphone of the earphone and played through the loudspeaker of the earphone. This helps the user listen to the content before each segmentation node in stages and give feedback on the target audio of each stage, for example by reading after it or repeating it, while hearing his or her own voice through the earphone for comparison with the original sound of the target audio. Through the embodiment of the application, the user can follow and read specific audio stage by stage through the earphone, the functions of the earphone are expanded, the earphone can meet the requirements of various application scenarios, and a better use experience can be provided for the user.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 4 is a schematic structural diagram of an earphone 4 according to an embodiment of the present application. As shown in fig. 4, the earphone 4 of this embodiment includes: at least one processor 40 (only one is shown in fig. 4), a memory 41, and a computer program 42 stored in the memory 41 and executable on the at least one processor 40, where the processor 40 implements the steps in any of the audio processing method embodiments above when executing the computer program 42.
It will be appreciated by those skilled in the art that fig. 4 is merely an example of the earphone 4 and does not constitute a limitation of the earphone 4, which may include more or fewer components than shown, a combination of certain components, or different components, such as an input device, an output device, a network access device, etc. The input device may include a microphone, a touch pad, a camera, and the like; the output device may include a display, a speaker, and the like.
The processor 40 may be a central processing unit (CPU); the processor 40 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may in some embodiments be an internal storage unit of the headset 4, such as a hard disk or a memory of the headset 4. In other embodiments, the memory 41 may be an external storage device of the earphone 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the earphone 4. Further, the memory 41 may include both an internal storage unit and an external storage device of the earphone 4. The memory 41 is used for storing an operating system, an application program, a Boot Loader (Boot Loader), data, and other programs, such as program codes of the computer programs. The above-mentioned memory 41 may also be used to temporarily store data that has been output or is to be output.
In addition, although not shown, the earphone 4 may further include a network connection module, such as a Bluetooth module, a Wi-Fi module, a cellular network module, and the like, which are not described here.
In this embodiment, when the processor 40 executes the computer program 42 to implement the steps in any of the audio processing method embodiments, then when a preset instruction is detected, the audio clip indicated by the preset instruction may be determined in the target audio; when the audio clip is played through the earphone, the segmentation nodes in the audio clip are determined, and playing is paused whenever any segmentation node is reached. After each pause, user voice is collected through the microphone of the earphone and played through the loudspeaker of the earphone, helping the user listen to the content before each segmentation node in stages and give feedback on the target audio of each stage, for example by reading after it or repeating it, while hearing his or her own voice through the earphone for comparison with the original sound of the target audio. Through the embodiment of the application, the user can follow and read specific audio stage by stage through the earphone, the functions of the earphone are expanded, the earphone can meet the requirements of various application scenarios, and a better use experience can be provided for the user.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above method embodiments.
Embodiments of the present application provide a computer program product, which when running on a headset, enables the headset to implement the steps in the above-mentioned method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program instructing related hardware, where the computer program can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may at least include: any entity or device capable of carrying the computer program code to the apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random-access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, according to legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunication signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the above modules or units is only one logical function division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An audio processing method, comprising:
when a preset instruction is detected, determining an audio segment indicated by the preset instruction in a target audio;
when the audio clip is played through an earphone, determining the segmentation nodes in the audio clip, and when the audio clip is played at any one of the segmentation nodes, pausing the playing of the audio clip;
after the audio clip is paused to be played, collecting user voice through a microphone of the earphone and playing the user voice through a loudspeaker of the earphone;
and after the user voice playing is finished, continuing to play the audio clip from the corresponding segmentation node.
2. The audio processing method of claim 1, wherein the headset is a true wireless stereo TWS bluetooth headset.
3. The audio processing method of claim 2, wherein said collecting user speech through a microphone of the headset and playing the user speech through a speaker of the headset after pausing the playing of the audio clip comprises:
after the audio clip is paused to be played, collecting user voice in real time through a microphone of the TWS Bluetooth headset;
transmitting the collected user voice to a preset chip, and acquiring the user voice processed by the preset chip;
transmitting the processed user voice to a speaker of the TWS Bluetooth headset;
and playing the processed user voice in real time through a loudspeaker of the TWS Bluetooth earphone.
4. The audio processing method of claim 3, wherein before collecting user voice in real time through the microphone of the TWS Bluetooth headset, the method further comprises:
setting a real-time ear-return function of the TWS Bluetooth headset according to first operation information, or a received first indication instruction, or a detected voice instruction, or a detected preset gesture, wherein the first operation information is information obtained according to an operation of a user on a first key on the TWS Bluetooth headset, the first indication instruction is sent by an earphone storage box or a mobile terminal coupled with the TWS Bluetooth headset, and the real-time ear-return function indicates that, after playing of the audio clip is paused, user voice is collected in real time through the microphone of the TWS Bluetooth headset and the processed user voice is played in real time through the loudspeaker of the TWS Bluetooth headset.
5. The audio processing method according to claim 1, wherein the determining, when the preset instruction is detected, the audio segment indicated by the preset instruction in the target audio comprises:
when a preset instruction is detected, acquiring second operation information and third operation information, wherein the second operation information is obtained according to the operation of a user on a second key on the earphone, and the third operation information is obtained according to the operation of the user on a third key on the earphone;
and taking the first node indicated by the second operation information as a start node of the audio segment, and taking the second node indicated by the third operation information as an end node of the audio segment.
6. The audio processing method according to claim 1, wherein the determining, when the preset instruction is detected, the audio segment indicated by the preset instruction in the target audio comprises:
when a preset instruction is detected, determining an audio clip corresponding to the preset instruction in target audio according to a second instruction sent by an earphone storage box coupled with the earphone or the mobile terminal.
7. The audio processing method of any of claims 1 to 6, further comprising:
and sending the user voice collected by the microphone to a mobile terminal coupled with the earphone for storage.
8. An audio processing apparatus, comprising:
the device comprises a first determining module, a second determining module and a third determining module, wherein the first determining module is used for determining an audio segment indicated by a preset instruction in target audio when the preset instruction is detected;
a second determining module, configured to determine a split node in the audio clip when the audio clip is played through an earphone, and pause playing the audio clip when the audio clip is played at any split node;
the processing module is used for collecting user voice through a microphone of the earphone and playing the user voice through a loudspeaker of the earphone after the audio clip is paused to be played;
and the playing module is used for continuously playing the audio clip from the corresponding segmentation node after the user voice playing is finished.
9. A headset comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the audio processing method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the audio processing method according to any one of claims 1 to 7.
CN202010327239.5A 2020-04-23 2020-04-23 Audio processing method, audio processing device and earphone Active CN111556406B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010327239.5A CN111556406B (en) 2020-04-23 2020-04-23 Audio processing method, audio processing device and earphone


Publications (2)

Publication Number Publication Date
CN111556406A true CN111556406A (en) 2020-08-18
CN111556406B CN111556406B (en) 2022-04-22

Family

ID=72007676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010327239.5A Active CN111556406B (en) 2020-04-23 2020-04-23 Audio processing method, audio processing device and earphone

Country Status (1)

Country Link
CN (1) CN111556406B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112684999A (en) * 2020-12-23 2021-04-20 中国人民解放军战略支援部队信息工程大学 Follow-reading mode voice acquisition method, system, equipment and storage medium

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1532832A (en) * 2003-03-25 2004-09-29 余晓冬 Method for hierarchically positioning audio data streams, and language-learning machine using the method
CN102142271A (en) * 2010-01-29 2011-08-03 朱友平 Handheld multimedia player that synchronously displays waveforms, and repeat-playback method
CN102460577A (en) * 2009-05-13 2012-05-16 李斗汉 Multimedia file playing method and multimedia player
CN104575125A (en) * 2013-10-10 2015-04-29 北大方正集团有限公司 Dual-audio repeat-playback method and device
CN105491482A (en) * 2015-11-20 2016-04-13 广东欧珀移动通信有限公司 Voice transmission method and voice transmission device
CN106851452A (en) * 2016-01-02 2017-06-13 音来多(开曼)控股有限公司 Control method for a wireless headset, and wireless headset
US20180261117A1 (en) * 2017-03-10 2018-09-13 SmartNoter Inc. System and method of producing and providing user specific educational digital media modules
CN108600892A (en) * 2018-06-15 2018-09-28 歌尔科技有限公司 Upgrade method and device, wireless headset, TWS earphone, and charging box
CN109151647A (en) * 2018-10-29 2019-01-04 歌尔科技有限公司 Interaction control method and device for an earphone box, earphone box, and storage medium
CN109451473A (en) * 2018-10-15 2019-03-08 倬韵科技(深圳)有限公司 Method and system for pairing a true wireless stereo Bluetooth earphone with a Bluetooth playback device
CN109814798A (en) * 2019-01-17 2019-05-28 Oppo广东移动通信有限公司 In-ear monitoring (sidetone) function control method, device, and mobile terminal
CN110166871A (en) * 2019-05-31 2019-08-23 歌尔科技有限公司 Earphone charging box, TWS earphone, working-state switching method, and storage medium
CN110191391A (en) * 2019-07-24 2019-08-30 恒玄科技(上海)有限公司 Charging box, earphone kit, and communication method
CN110225427A (en) * 2019-05-11 2019-09-10 出门问问信息科技有限公司 Earphone charging box, data transmission method therefor, and earphone
CN110278501A (en) * 2018-03-15 2019-09-24 晶統電子股份有限公司 Earphone that captures an external sound source and outputs it under electronic processing control
CN209845255U (en) * 2019-07-03 2019-12-24 罗娟 Intelligent learning earphone
CN210298030U (en) * 2019-09-26 2020-04-10 广州由我科技股份有限公司 Multifunctional charging-case set for a TWS (true wireless stereo) Bluetooth headset

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112684999A (en) * 2020-12-23 2021-04-20 中国人民解放军战略支援部队信息工程大学 Voice acquisition method, system, device, and storage medium for follow-along reading mode

Also Published As

Publication number Publication date
CN111556406B (en) 2022-04-22

Similar Documents

Publication Publication Date Title
US20200186114A1 (en) Audio Signal Adjustment Method, Storage Medium, and Terminal
EP3621068A1 (en) Portable smart voice interaction control device, method and system
EP3629561A1 (en) Data transmission method and system, and bluetooth headphone
CN101459717B (en) Wireless terminal and method for implementing multi-channel multiplexing
WO2019033987A1 (en) Prompting method and apparatus, storage medium, and terminal
CN109151789B (en) Translation method, device and system and Bluetooth headset
US20200045159A1 (en) Method for Call Processing and Electronic Device
KR20110054609A (en) Method and apparatus for remote controlling of bluetooth device
CN109360549B (en) Data processing method, wearable device and device for data processing
CN109067965B (en) Translation method, translation device, wearable device and storage medium
CN107371102B (en) Audio playing volume control method and device, storage medium and mobile terminal
CN107633849B (en) Bluetooth device volume adjusting method, device and computer readable storage medium
CN112786070B (en) Audio data processing method and device, storage medium and electronic equipment
CN111556406B (en) Audio processing method, audio processing device and earphone
CN113992965A (en) Low-delay transmission method and system
US20200202861A1 (en) Electronic device controlling system, voice output device, and methods therefor
CN104851441A (en) Method and device for realizing karaoke, and home audio system
CN112259076A (en) Voice interaction method and device, electronic equipment and computer readable storage medium
KR101442027B1 (en) Sound processing system for recognizing earphones for portable devices using sound patterns, method for recognizing earphones for portable devices using sound patterns, and sound processing method using the same
CN107124512B (en) Switching method and apparatus for audio playback modes
CN106293607B (en) Method and system for automatically switching audio output modes
CN105072243A (en) Incoming call prompting method and apparatus
CN114639392A (en) Audio processing method and device, electronic equipment and storage medium
US11120810B2 (en) Recording device
CN113299309A (en) Voice translation method and device, computer readable medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant