CN116264655A - Earphone control method, device and system and computer readable storage medium - Google Patents

Earphone control method, device and system and computer readable storage medium Download PDF

Info

Publication number
CN116264655A
Authority
CN
China
Prior art keywords
current
state
earphone
user
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111532671.9A
Other languages
Chinese (zh)
Inventor
黄磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202111532671.9A priority Critical patent/CN116264655A/en
Publication of CN116264655A publication Critical patent/CN116264655A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0316 Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1083 Reduction of ambient noise
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Abstract

The application provides an earphone control method, apparatus, system, and computer readable storage medium. In an embodiment, the method is applied to a system including a terminal and an earphone, where the terminal is connected to the earphone and the earphone is in a wearing state, and the method includes: the terminal stores the volume values manually adjusted by the user under different environmental noises in a dialogue state and/or a non-dialogue state; the terminal determines the current environmental noise; the terminal determines the current state of the user; the terminal determines a currently matched volume value according to the current state and the current environmental noise; and the earphone plays at the currently matched volume value. Because the technical scheme provided by the embodiments adjusts the earphone volume from the user's historical volume settings, it better meets the user's expectations for volume adjustment and does not require the user to repeatedly adjust the volume by hand to adapt to the current environment.

Description

Earphone control method, device and system and computer readable storage medium
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a method, an apparatus, a system, and a computer readable storage medium for controlling headphones.
Background
More and more users now use earphones. After an earphone is connected to a mobile terminal such as a mobile phone, and because different users are sensitive to volume to different degrees, a user can manually adjust the earphone volume through physical keys or virtual keys provided on the mobile terminal, so as to meet the volume requirements under different environmental noises.
However, manual adjustment increases the user's operational burden, which in turn degrades the user's experience with the earphone.
Disclosure of Invention
The embodiments of the present application provide an earphone control method, apparatus, system, and computer readable storage medium, in which the earphone volume is adjusted from the user's historical volume settings, so that the user's expectations for volume adjustment are better met and the user does not need to repeatedly adjust the volume setting by hand to adapt to the current environment.
In a first aspect, an embodiment of the present application provides a method for controlling an earphone, where the method is applied to a system including a terminal and an earphone, the terminal is connected to the earphone, and the earphone is in a wearing state, and the method includes: the terminal stores the volume value manually adjusted by the user under different environmental noises in a dialogue state and/or a non-dialogue state; the terminal determines the current environmental noise; the terminal determines the current state of the user; the terminal determines a currently matched volume value according to the current state and the current environmental noise; the earphone plays according to the current matched volume value.
In this embodiment, the volume of the earphone is adjusted through the historical volume setting of the user, so that the expectation of the user on volume adjustment can be better met, and the user does not need to manually and repeatedly adjust the volume setting to adapt to the current environment.
In addition, even when the user still needs to adjust the volume, the adjustment starts from the user's historical volume setting, so repeated adjustment is unnecessary and the earphone volume can be set quickly.
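The first-aspect flow (store per-state, per-noise volume values, then match on the current state and noise) can be sketched as follows. Note the patent does not specify how noise levels are keyed; the sketch buckets the noise intensity into 5 dB bands, and the class and method names are hypothetical.

```python
from enum import Enum

class UserState(Enum):
    DIALOGUE = "dialogue"
    NON_DIALOGUE = "non_dialogue"

class VolumePreferenceStore:
    """Stores the user's manually adjusted volume per (state, noise bucket)."""

    def __init__(self, bucket_db=5.0):
        self.bucket_db = bucket_db  # bucket width is an assumed parameter
        self._table = {}

    def _bucket(self, noise_db):
        # Quantize the noise intensity so nearby readings match the same entry.
        return int(noise_db // self.bucket_db)

    def store(self, state, noise_db, volume):
        self._table[(state, self._bucket(noise_db))] = volume

    def match(self, state, noise_db):
        # Returns the stored volume for the same state and noise bucket,
        # or None when no historical setting matches.
        return self._table.get((state, self._bucket(noise_db)))
```

For example, a volume of 8 stored at 62 dB in the non-dialogue state would be matched again at 63.5 dB, since both readings fall in the same bucket.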
In one possible implementation, the method further includes: the terminal stores first identification information of the voice of the user; the terminal determines the voice information of the current environmental noise; the terminal determines a current state of the user based on the voice information of the current environmental noise and the first identification information.
In this way, whether the user is in a dialogue is determined by considering the user's voice characteristics, namely the identification information of the user's voice.
In one possible implementation, the method further includes: the terminal stores second identification information of the audio currently played by the earphone; the terminal determines a current state of the user based on the voice information, the first identification information and the second identification information of the current environmental noise.
In the method, whether the user is in a conversation or not can be accurately determined by considering the identification information of the voice of the user and the identification information of the audio currently played by the earphone.
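The state decision described above can be sketched as follows. The embedding-plus-cosine-similarity comparison and the 0.8 threshold are assumptions for illustration; the patent only states that the speech component of the current environmental noise is compared against the user's voice identification information, optionally also against identification information of the audio the earphone is currently playing, so that leaked playback is not mistaken for a conversation.

```python
def determine_user_state(ambient_voice, user_voiceprint,
                         playing_audio=None, threshold=0.8):
    """Decide dialogue vs. non-dialogue from voice feature vectors.

    All three inputs are hypothetical feature embeddings; the matching
    algorithm and threshold are illustrative assumptions.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    # If the detected "speech" matches the earphone's own playback,
    # treat it as leakage, not as the user talking.
    if playing_audio is not None and cosine(ambient_voice, playing_audio) >= threshold:
        return "non_dialogue"
    # If it matches the user's stored voiceprint, the user is speaking.
    if cosine(ambient_voice, user_voiceprint) >= threshold:
        return "dialogue"
    return "non_dialogue"
```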
In one possible implementation, the terminal stores a first result indicating, for each of the different environmental noises, whether the user enables active noise reduction in the non-dialogue state; and the terminal controls the earphone to perform active noise reduction when the first result matched according to the current state and the current environmental noise is yes.
In this way, the in-ear sound is adjusted according to the user's historical active noise reduction settings, which better meets the user's expectations for volume adjustment in the non-dialogue state without requiring the user to repeatedly adjust the volume setting by hand to adapt to the current environment.
In one possible implementation, the terminal stores a second result indicating, for each of the different environmental noises, whether the user plays the environmental noise while in a dialogue state; and the terminal controls the earphone to play the current environmental noise when the second result matched according to the current state and the current environmental noise is yes.
In this way, the content of the in-ear sound is adjusted according to the user's historical preference for whether environmental noise is played, which better meets the user's expectations for in-ear content in a dialogue state and ensures that the dialogue proceeds smoothly.
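The two stored yes/no preferences just described (the "first result" for active noise reduction in the non-dialogue state, and the "second result" for playing environmental noise in the dialogue state) amount to a simple lookup. A minimal sketch, assuming a 5 dB noise bucketing and a dictionary layout that the patent does not itself specify:

```python
def select_earphone_mode(state, noise_db, anc_prefs, ambient_play_prefs, bucket_db=5.0):
    """anc_prefs / ambient_play_prefs map noise buckets to stored yes/no results."""
    bucket = int(noise_db // bucket_db)
    if state == "non_dialogue" and anc_prefs.get(bucket):
        return "active_noise_reduction"  # first result matched: yes
    if state == "dialogue" and ambient_play_prefs.get(bucket):
        return "play_ambient_noise"      # second result matched: yes (transparency)
    return "normal_playback"
```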
In one possible implementation, when the terminal matches no volume value according to the current state and the current environmental noise, and the current state is a non-dialogue state, the earphone keeps its volume unchanged, performs active noise reduction, or plays at a target volume value; the target volume value is determined based on the current environmental noise, the current state, and the volume values manually adjusted by the user under the different environmental noises in the non-dialogue state.
In this way, when no historical volume setting is matched in the non-dialogue state, the earphone is controlled, based on the historical volume settings, to keep its volume, perform active noise reduction, or play at the determined target volume value, which improves the user experience without requiring manual adjustment.
In one possible implementation, when the terminal matches no volume value according to the current state and the current environmental noise, and the current state is a dialogue state, the earphone pauses playing the target audio, lowers the volume of the target audio, or plays the current environmental noise; the target audio does not include the current environmental noise.
In this way, when no historical volume setting is matched in the dialogue state, the earphone is controlled to pause playback, play the current environmental noise, or lower the volume of the target audio, which ensures that the dialogue proceeds smoothly without manual adjustment by the user.
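The fallback behavior for both states, when no volume value is matched, can be summarized in one dispatch function. The string labels and the choice of default alternative are illustrative assumptions; the patent lists several permissible actions per state without mandating one:

```python
def on_no_volume_match(state, pause_pref=None):
    """Pick a fallback action when (state, noise) matches no stored volume.

    pause_pref is the stored 'third result' (pause playback in a dialogue
    state), when available.
    """
    if state == "non_dialogue":
        # Alternatives: keep the current volume, enable active noise
        # reduction, or play at a target volume derived from history.
        return "keep_volume"
    # Dialogue state: make sure the conversation is not drowned out.
    if pause_pref:
        return "pause_playback"
    return "lower_volume"  # or "play_ambient_noise"
```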
In one possible implementation, the terminal stores a third result indicating, for each of the different environmental noises, whether playback is paused while the user is in a dialogue state; and the terminal controls the earphone to pause playback when no volume value is matched according to the current state and the current environmental noise and the matched third result is yes.
In this way, the in-ear sound is adjusted according to the user's historical setting of whether to pause playback, which better meets the user's expectations for in-ear environmental sound and ensures that the dialogue proceeds smoothly without manual adjustment by the user.
In one possible implementation, the earphone is provided with a dialogue prompt lamp, and the method further includes: the terminal determines that the user's state is a dialogue state; and the dialogue prompt lamp of the earphone enters a prompting state.
In this way, the dialogue prompt lamp lights up in the dialogue state, so the user's conversation partner knows that the user can communicate normally despite wearing the earphone, which safeguards the dialogue experience.
In one possible implementation, the method further includes: when the terminal detects that the volume is manually adjusted, determining the current manually adjusted volume value; the terminal stores the current manually adjusted volume value as the manually adjusted volume value at the current state of the user and the current ambient noise.
In this way, after the user manually adjusts the volume, the previously stored manually adjusted volume value under the same state and noise is refreshed with the currently adjusted value, so that the stored preferences track the user's volume settings in real time.
In a second aspect, an embodiment of the present application provides a method for controlling an earphone, where the method is applied to a terminal, the terminal is connected with an earphone, and the earphone is in a wearing state, and the method includes: storing the volume value manually adjusted by the user under different environmental noises in a dialogue state and/or a non-dialogue state; determining current ambient noise; determining a current state of a user; and determining the currently matched volume value according to the current state and the current environmental noise so that the earphone plays according to the currently matched volume value.
The beneficial effects of the embodiment are described above, and will not be described in detail herein.
In one possible implementation, determining the current state of the user includes: first identification information of a user's voice is stored; determining the voice information of the current environmental noise; the current state of the user is determined based on the voice information of the current environmental noise and the first identification information.
The beneficial effects of the implementation are referred to above, and are not described in detail here.
In one possible implementation, determining the current state of the user based on the voice information of the current environmental noise and the first identification information includes: storing second identification information of the audio currently played by the earphone; the current state of the user is determined based on the voice information, the first identification information, and the second identification information of the current environmental noise.
The beneficial effects of this mode are referred to above and will not be described in detail here.
In one possible implementation, the method further includes: storing a first result of whether the user actively reduces noise or not for different environmental noises in a non-dialogue state; and controlling the earphone to actively reduce noise under the condition that the first result of the current matching is yes according to the current state and the current environmental noise.
The beneficial effects of this mode are referred to above and will not be described in detail here.
In one possible implementation, the method further includes: storing a second result of whether the user plays the environmental noise or not in each of different environmental noises in a dialogue state; and controlling the earphone to play the current environmental noise under the condition that the second result of the current matching is yes according to the current state and the current environmental noise.
The beneficial effects of this mode are referred to above and will not be described in detail here.
In one possible implementation, when no volume value is matched according to the current state and the current environmental noise, the method further includes: when the current state is a non-dialogue state, controlling the earphone to keep its volume unchanged, perform active noise reduction, or play at a target volume value; the target volume value is determined by the terminal based on the current environmental noise, the current state, and the volume values manually adjusted by the user under the different environmental noises in the non-dialogue state.
The beneficial effects of this mode are referred to above and will not be described in detail here.
In one possible implementation, when no volume value is matched according to the current state and the current environmental noise, the method further includes: when the current state is a dialogue state, controlling the earphone to pause playing the target audio, lower the volume of the target audio, or play the current environmental noise; wherein the target audio does not include the current environmental noise.
The beneficial effects of this mode are referred to above and will not be described in detail here.
In one possible implementation, when no volume value is matched according to the current state and the current environmental noise, the method further includes: storing a third result indicating, for each of the different environmental noises, whether playback is paused while the user is in a dialogue state, and controlling the earphone to pause playback when the third result matched with the current state and the current environmental noise is yes.
The beneficial effects of this mode are referred to above and will not be described in detail here.
In one possible implementation, the method further includes: determining the state of the user as a dialogue state; and controlling the dialogue prompt lamp of the earphone to enter a prompt state.
The beneficial effects of this mode are referred to above and will not be described in detail here.
In one possible implementation, the method further includes: when detecting that the user manually adjusts the volume, determining the current manually adjusted volume value; the current manually adjusted volume value is stored as the manually adjusted volume value at the current state of the user and the current ambient noise.
The beneficial effects of this mode are referred to above and will not be described in detail here.
In a third aspect, an embodiment of the present application provides a method for controlling an earphone, where the method is applied to the earphone, the earphone is connected to a terminal, and, with the earphone in a wearing state, the terminal stores the volume values manually adjusted by the user under different environmental noises in a dialogue state and/or a non-dialogue state. The method includes: playing at the currently matched volume value; the currently matched volume value is determined by the terminal, while the user wears the earphone, based on the determined state of the user and the current environmental noise.
The beneficial effects of the embodiment are described above, and will not be described in detail herein.
In one possible implementation, the earphone is provided with a dialogue prompt lamp; the method further includes: when the terminal determines that the current state of the user is a dialogue state, controlling the dialogue prompt lamp to enter a prompting state.
The beneficial effects of this mode are referred to above and will not be described in detail here.
In one possible implementation, the terminal stores a first result indicating, for each of the different environmental noises, whether the user enables active noise reduction in the non-dialogue state, and the method further includes: performing active noise reduction when the terminal determines, according to the current state and the current environmental noise, that the currently matched first result is yes.
The beneficial effects of this mode are referred to above and will not be described in detail here.
In one possible implementation, the terminal stores a second result indicating, for each of the different environmental noises, whether the user plays the environmental noise while in a dialogue state, and the method further includes: playing the current environmental noise when the terminal determines, according to the current state and the current environmental noise, that the currently matched second result is yes.
The beneficial effects of this mode are referred to above and will not be described in detail here.
In one possible implementation, when the terminal matches no volume value according to the current state and the current environmental noise, the method further includes: when the current state is a non-dialogue state, keeping the volume unchanged, performing active noise reduction, or playing at a target volume value; the target volume value is determined by the terminal based on the current environmental noise, the current state, and the volume values manually adjusted by the user under the different environmental noises in the non-dialogue state.
The beneficial effects of this mode are referred to above and will not be described in detail here.
In one possible implementation, when the terminal matches no volume value according to the current state and the current environmental noise, the method further includes: when the current state is a dialogue state, pausing playing the target audio, lowering the volume of the target audio, or playing the current environmental noise; wherein the target audio does not include the current environmental noise.
The beneficial effects of this mode are referred to above and will not be described in detail here.
In one possible implementation, the terminal stores third results indicating, for each of the different environmental noises, whether playback is paused while the user is in a dialogue state, and the method further includes: pausing playback when the terminal matches no volume value according to the current state and the current environmental noise and determines that the currently matched third result is yes.
The beneficial effects of this mode are referred to above and will not be described in detail here.
In a fourth aspect, embodiments of the present application provide a headset control system, which may include a terminal and a headset; wherein the system is for performing the method provided in the first aspect.
In a fifth aspect, an embodiment of the present application provides an earphone control device, including: at least one memory for storing a program; at least one processor for executing the program stored in the memory, the processor being adapted to perform the method provided in the second aspect or to perform the method provided in the third aspect when the program stored in the memory is executed.
In a sixth aspect, embodiments of the present application provide an earphone control device, wherein the device executes computer program instructions to perform the method provided in the second aspect, or to perform the method provided in the third aspect. The apparatus may be, for example, a chip, or a processor.
In one example, the apparatus may comprise a processor, which may be coupled to the memory, read instructions in the memory and perform the method provided in the second aspect, or perform the method provided in the third aspect, in accordance with the instructions. The memory may be integrated into the chip or the processor, or may be separate from the chip or the processor.
In a seventh aspect, embodiments of the present application provide a computer storage medium having instructions stored therein that, when executed on a computer, cause the computer to perform the method provided in the second aspect, or to perform the method provided in the third aspect.
In an eighth aspect, embodiments of the present application provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method provided in the second aspect, or to perform the method provided in the third aspect.
Drawings
Fig. 1 is a system architecture diagram of an earphone control system provided in an embodiment of the present application;
fig. 2a is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 2b is a schematic diagram of another application scenario provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a terminal provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an earphone according to an embodiment of the present application;
fig. 5 is a schematic flow chart of a headset control scheme performed by the terminal provided in fig. 1;
fig. 6 is a flowchart of a headset control method according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be described below with reference to the accompanying drawings.
In the description of embodiments of the present application, words such as "exemplary," "such as" or "for example," are used to indicate by way of example, illustration, or description. Any embodiment or design described herein as "exemplary," "such as" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary," "such as" or "for example," etc., is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, the term "and/or" is merely an association relationship describing an association object, and indicates that three relationships may exist, for example, a and/or B may indicate: a alone, B alone, and both A and B. In addition, unless otherwise indicated, the term "plurality" means two or more. For example, a plurality of systems means two or more systems, and a plurality of terminals means two or more terminals.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating an indicated technical feature. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Fig. 1 is a system architecture diagram of an earphone control system according to an embodiment of the present application. As shown in fig. 1, the system includes a terminal 110 and an earphone 120; the earphone 120 is in a wearing state, that is, the user U1 wears the earphone 120.
In one possible implementation, the earphone 120 may be a wired earphone or a wireless earphone. If the earphone 120 is a wired earphone, the terminal 110 and the earphone communicate over the earphone's own data line. If the earphone 120 is a wireless earphone, the terminal 110 and the earphone communicate over a wireless network, which may include, for example, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), or the like, or any combination thereof. It will be appreciated that the wireless connection between the terminal 110 and the wireless earphone may use any known wireless communication protocol, preferably a short-range wireless communication protocol such as Bluetooth.
In one possible implementation, the terminal 110 involved in this embodiment may be an electronic device with a headset 120 connection and headset 120 control functions, and with storage and data processing capabilities.
In one example, the terminal 110 may be a mobile phone, a tablet computer, a smart speaker, an in-vehicle device, or the like. Exemplary embodiments of the terminal 110 in this embodiment include, but are not limited to, electronic devices running iOS, Android, Windows, HarmonyOS, or other operating systems.
In one example, the terminal 110 may have a sensor for capturing sound, such as a microphone. For example, after the terminal 110 is connected to the earphone 120, the microphone of the terminal 110 collects the sound of the surrounding environment at preset time intervals and determines the current environmental noise from the collected audio signal. Environmental noise may be understood as the sound of the environment surrounding the terminal 110; in practical application, it is the audio signal the terminal 110 obtains by sampling that sound. Note that, because environmental noise is not stable, the current environmental noise in this embodiment refers to the sound of the environment surrounding the terminal 110 within a certain period of time, such as 1 s.
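The intensity value of such a sampled window might be derived as an RMS level in decibels. This is a common signal-processing approach rather than anything the patent specifies; the reference level and window length are assumptions:

```python
import math

def ambient_noise_db(samples, ref=1.0):
    """Estimate the environmental-noise intensity of a ~1 s microphone
    window as an RMS level in dB relative to `ref` (an assumed reference)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    # Clamp to avoid log(0) on a silent window.
    return 20 * math.log10(max(rms, 1e-12) / ref)
```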
In one example, the terminal 110 has a plurality of applications installed, for example a music application 1101 such as QQ Music or Kugou Music. Illustratively, when the user U1 wants to listen to songs, the user U1 operates the terminal 110 to establish a Bluetooth connection between the terminal 110 and the earphone 120; the user U1 then opens the music application 1101 on the terminal 110, selects song A, and plays it, and the earphone 120 plays song A.
In one example, the terminal 110 is provided with a volume key 1102. While wearing the earphone 120, the user U1 presses the volume key 1102 on the terminal 110 to adjust the volume of the earphone 120, and the terminal 110 may store the manually adjusted volume value. In addition, the terminal 110 displays a volume bar 1103 so that the user U1 can see the adjustment; the user can also press the volume bar 1103 and slide up or down to adjust the volume of the earphone 120.
In this embodiment, the terminal 110 stores the volume values manually adjusted under different environmental noises while the user U1 is in a dialogue state and/or a non-dialogue state. The terminal 110 then determines the current state of the user U1 and the current environmental noise, matches them against the stored environmental noises for the dialogue and/or non-dialogue states, takes the volume value manually adjusted under the same state and the same environmental noise as the currently matched volume value, and controls the earphone 120 to play at that value. The volume of the earphone 120 is thus adjusted according to the historical volume preferences of the user U1, which better meets the user's expectations for volume adjustment and avoids the need to repeatedly adjust the volume setting by hand to adapt to the current environment. In addition, even if the user U1 still needs to adjust the volume, the adjustment starts from the historical volume setting, so the earphone volume can be adjusted quickly without repeated adjustment.
Here, a dialogue state may be understood as a face-to-face conversation between two users. A non-dialogue state may be understood as the user neither holding a face-to-face conversation nor being in a call (e.g., a phone call or a voice chat).
The manually adjusted volume value stored by the terminal 110 is understood to be the volume value of the target audio indicated by the audio signal sent by the terminal 110 to the earphone 120, and may also be understood as the volume value of the system sound of the terminal 110. By way of example, the target audio does not include the sound of the current surrounding environment and may be audio of interest to the user, such as music, video, or game sound effects. Further, the manually adjusted volume value is the volume value still in use after a preset period following the user U1's manual adjustment. For example, suppose the volume value of the earphone 120 is A2 and the current intensity value of the environmental noise is d; the user presses the volume key 1102 of the terminal 110 to increase and then decrease the volume, arriving at a volume value V2. If no further manual adjustment occurs within 1 minute thereafter, the manually adjusted volume value for the noise intensity value d is V2.
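The 1-minute example above amounts to a settling window: a value only becomes the stored "manually adjusted volume value" once no further adjustment follows within the preset period. A minimal Python sketch (class, method, and parameter names are illustrative, not from the patent):

```python
SETTLE_SECONDS = 60  # preset period from the example above (1 minute)

class VolumeRecorder:
    """Commits a manually adjusted volume value only after it has been
    stable for the preset settling period (hypothetical sketch)."""

    def __init__(self, settle_seconds=SETTLE_SECONDS):
        self.settle_seconds = settle_seconds
        self._pending = None   # (volume, timestamp) of the last key press
        self.stored = None     # committed manually adjusted volume value

    def on_manual_adjust(self, volume, now):
        # Every new key press restarts the settling window.
        self._pending = (volume, now)

    def tick(self, now):
        # Commit the pending value once no re-adjustment occurred in time.
        if self._pending and now - self._pending[1] >= self.settle_seconds:
            self.stored = self._pending[0]
            self._pending = None
        return self.stored

rec = VolumeRecorder()
rec.on_manual_adjust(80, now=0)    # user raises the volume
rec.on_manual_adjust(65, now=10)   # then lowers it within the window
assert rec.tick(now=30) is None    # not yet settled after 20 s
assert rec.tick(now=70) == 65      # committed after 60 s with no change
```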
The environmental noises recorded for different states may be the same or different. If a manually adjusted volume value is 0, the earphone 120 plays silently. This embodiment may describe the environmental noise by an intensity value and a frequency value; the following takes the intensity value as an example, where a larger intensity value indicates louder environmental noise.
It should be noted that, considering that a user has different habits for different audio types, this embodiment is applied per audio type. In other words, the volume values manually adjusted under different environmental noises in the user's dialogue state and/or non-dialogue state are recorded for a single audio type, and the earphone plays target audio of that type accordingly. The audio types may be, for example, a song type, a game-audio type, and a video type. Of course, in some possible cases, songs may also be divided by the characteristics of different singers, or by the vocal ranges of different songs, to better match the user's habits. In practical applications, the terminal 110 stores, for each audio type, the volume values manually adjusted under different environmental noises in the user's dialogue state and/or non-dialogue state; when the target audio to be played belongs to a given audio type, the earphone volume is adjusted based on the values stored for that type, thereby meeting the user's needs and ensuring the user experience.
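The per-audio-type storage described above can be pictured as a table keyed by audio type, each entry holding that type's data sets of [state, noise intensity, volume]. A hypothetical Python sketch (the type names and numeric values are invented for illustration):

```python
# Stored preferences keyed by audio type; each entry is a list of
# [state, noise intensity, manually adjusted volume] data sets.
volume_store = {
    "song":  [[0, 40, 65], [1, 40, 30]],
    "game":  [[0, 40, 75]],
    "video": [],
}

def sets_for(audio_type):
    """Matching only consults the data sets recorded for the audio
    type of the target audio to be played."""
    return volume_store.get(audio_type, [])

assert sets_for("game") == [[0, 40, 75]]
assert sets_for("podcast") == []   # unknown type: no stored preferences
```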
In one possible implementation, the terminal 110 cannot match a volume value; correspondingly, the terminal 110 may control the earphone 120 to turn on the active noise reduction mode in the non-dialogue state, or to turn on the pass-through mode in the dialogue state.
It should be noted that the earphone 120 may have multiple modes, and may be switched between the multiple modes, such as a noise reduction mode and a pass-through mode.
After the pass-through mode is turned on, the earphone 120 collects the sound of the surrounding environment, filters out the environmental noise after internal analysis and processing, and then applies a human-voice gain to amplify speech, so that the user can converse easily without taking off the earphone. Notably, the pass-through mode adjusts the ambient sound and does not adjust the volume value of the target audio.
The noise reduction mode may be understood as generating an anti-phase sound wave matching the noise of the surrounding environment, thereby cancelling the environmental noise.
In one possible implementation, the terminal 110 cannot match a volume value; correspondingly, the terminal 110 may determine, based on the current state and the current environmental noise, the playing policy recorded for the same state under environmental noise similar to the current one as the playing policy for the current environmental noise, and control playing of the earphone 120 based on that policy, so as to play in the manner the user prefers and better meet the user's needs.
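One way to read "same state, similar environmental noise" is a nearest-neighbour match on the intensity value within a threshold. A Python sketch under that assumption (the `max_diff` threshold is invented; the patent only says the intensity values differ by a small amount):

```python
def match_similar(data_sets, state, intensity, max_diff=5):
    """Among data sets of the same state, pick the one whose noise
    intensity is closest to the current value, within max_diff."""
    candidates = [ds for ds in data_sets if ds[0] == state]
    if not candidates:
        return None
    best = min(candidates, key=lambda ds: abs(ds[1] - intensity))
    return best if abs(best[1] - intensity) <= max_diff else None

stored = [[0, 40, 65], [0, 52, 80], [1, 40, 30]]  # [state, intensity, volume]
assert match_similar(stored, 0, 42) == [0, 40, 65]  # 42 is close to 40
assert match_similar(stored, 0, 70) is None         # nothing similar enough
```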
In one example, in a dialog state, the play policy may be to turn on a pass-through mode, pause play, mute, or reduce the volume of the target audio.
In one example, in a non-conversational state, the play policy may be to turn on the noise reduction mode or to keep the volume unchanged.
In one example, the current environmental noise being similar to a stored environmental noise may be understood as their intensity values differing by a small amount.
In one possible implementation, the terminal 110 may store playing policies of the earphone through a plurality of data sets, each data set corresponding to a different state and a different environmental noise.
In one example, each of the plurality of data sets represents the playing policy of the earphone as last adjusted by the user under a given state and environmental noise. The data sets are divided into a subset related to the dialogue state and a subset related to the non-dialogue state. A playing policy controls the playing of the earphone 120 and may include a manually adjusted volume value (play at that value), whether to turn on the active noise reduction mode, whether to turn on the pass-through mode, whether to pause playing, and so on.
For example, a data set may include at least the state of the user, such as the dialogue state or the non-dialogue state, and the intensity value of the environmental noise. The non-dialogue state and the dialogue state may be identified by numerals or letters; for example, 0 indicates the non-dialogue state and 1 indicates the dialogue state, which is the convention used below.
Further, if the user state is the non-dialogue state, the data set may further include a manually adjusted volume value, a first result indicating whether the active noise reduction mode is turned on at that volume value, and first setting information used when the active noise reduction mode is turned on. The first setting information indicates the values of parameters to be set in the active noise reduction mode, and may include the type selected by the user: adaptive, mild, balanced, or deep noise reduction.
Further, if the user state is the dialogue state, the data set may further include a manually adjusted volume value, a second result indicating whether the pass-through mode is turned on at that volume value, and second setting information of the pass-through mode; or it may include a third result indicating whether playing is paused, a second result indicating whether the pass-through mode is turned on while playing is paused, and the second setting information of the pass-through mode. The second setting information may represent the values of parameters to be set in the pass-through mode, and may include whether to turn on voice enhancement, whether to turn on pass-through balance, whether to reduce environmental noise, whether dialogue enhancement is required, and a pitch value. To improve the user experience, the pitch value may include a male pitch value and/or a female pitch value: if the user is judged to be talking with a man, the male pitch value may be used; if the user is judged to be talking with a woman, the female pitch value may be used.
It should be appreciated that one data set represents the playing policy of the earphone under one state and one intensity value of the environmental noise; different data sets differ in the user state and/or the noise intensity value. The currently matched volume value can thus be understood as the manually adjusted volume value in the data set, among the plurality of data sets, whose state and noise intensity value equal the current state and the current intensity value.
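The matching rule can be sketched as a lookup over data sets of the form [state, intensity, volume], mirroring the [0, d, A2] notation used later in this embodiment. An illustrative Python sketch (the numeric values are invented):

```python
def match_exact(data_sets, state, intensity):
    """Return the data set whose state and noise intensity value equal
    the current ones; its third element is the currently matched
    manually adjusted volume value."""
    for ds in data_sets:
        if ds[0] == state and ds[1] == intensity:
            return ds
    return None

stored = [[0, 40, 65], [1, 40, 30]]        # [state, intensity d, volume]
assert match_exact(stored, 0, 40)[2] == 65  # matched volume value
assert match_exact(stored, 1, 55) is None   # no match: fallback applies
```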
In one example, the terminal 110 may determine a matched playing policy based on the current state and the current environmental noise, and control playing of the earphone 120 based on that policy, so that the playing of the earphone 120 is adjusted according to the historical preference settings of the user U1. This better meets the user's expectations for the earphone without requiring the user U1 to repeatedly adjust its settings by hand to adapt to the current environment. Moreover, if the user U1 still needs to adjust the earphone, the adjustment starts from the historical settings, so the earphone can adapt to the current environment quickly without repeated changes.
For example, in the dialogue state, the target audio is played at the manually adjusted volume value with the pass-through mode turned on; or the target audio is paused or muted; or the volume value of the earphone is lowered by a preset reduction value for the current environmental noise.
For example, in a non-dialog state, the target audio is played at a manually adjusted volume value and the active noise reduction mode is turned on, or the volume value is left unchanged.
In one example, each time the terminal 110 is connected to the earphone, the microphone of the terminal 110 continuously collects the sound of the surrounding environment, so that the terminal 110 determines the current environmental noise and the current state of the user U1. If the user U1 manually adjusts the volume value, the terminal 110 may determine the manually adjusted volume value and replace the previously stored value for the same current state and current environmental noise, or directly store the current state, the current environmental noise, and the manually adjusted volume value as one data set.
For example, for scenario 1 in fig. 2a, since the terminal 110 has not previously stored a manually adjusted volume value for the non-dialogue state at noise intensity value d, the data set [0, d, A2] is stored, where 0 represents the non-dialogue state, d represents the intensity value of the noise, and A2 represents the manually adjusted volume value.
For example, for scenario 3 in fig. 2b, since the terminal 110 previously stored the data set [0, d, A2], A2 is replaced with A3 to obtain the updated data set [0, d, A3].
In practical application, as shown in fig. 2a, in scenario 1 the user U1 is in the non-dialogue state, holding neither a face-to-face conversation nor a call, and the earphone 120 plays the song A (the target audio) at volume value A1. The user U1 finds the playback harsh or too quiet, presses the volume key 1102 on the terminal 110 to adjust the earphone 120 to a satisfactory volume value A2, and the earphone 120 switches to A2 so that the user U1 listens in greater comfort; the terminal 110 stores the data set [0, d, A2]. In scenario 2, when the user is again in the non-dialogue state and the current noise intensity value at the terminal 110 is d, the earphone 120 plays the music at volume value A2. In scenario 3, the user U1 presses the volume key 1102 to adjust the earphone 120 to a satisfactory volume value A3; the earphone 120 switches from A2 to A3, and the terminal 110 updates the data set [0, d, A2] to [0, d, A3]. In scenario 4, when the user is again in the non-dialogue state with noise intensity value d, the earphone 120 plays the music at volume value A3. The dialogue state is handled similarly and is not described in further detail here.
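The store-or-replace behaviour across scenarios 1 and 3 can be sketched as follows (concrete numbers stand in for d, A2, and A3; this is an illustration, not the patent's implementation):

```python
def store_or_replace(data_sets, state, intensity, volume):
    """Store [state, intensity, volume]; if a data set with the same
    state and noise intensity already exists, replace its volume
    (scenario 3), otherwise append a new data set (scenario 1)."""
    for ds in data_sets:
        if ds[0] == state and ds[1] == intensity:
            ds[2] = volume
            return data_sets
    data_sets.append([state, intensity, volume])
    return data_sets

sets = []
store_or_replace(sets, 0, 40, 80)   # scenario 1: first adjustment (A2)
store_or_replace(sets, 0, 40, 60)   # scenario 3: A2 replaced by A3
assert sets == [[0, 40, 60]]        # only the latest value is kept
```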
In one example, when the terminal 110 and the earphone 120 are connected, the terminal 110 may match the plurality of data sets against the current environmental noise and the current state of the user U1, and control the earphone 120 based on the data in the matched data set, i.e., the data set whose state and environmental noise equal the current ones. If no data set matches, the volume value at the time the terminal 110 and the earphone 120 were last disconnected in the current state of the user U1 is taken as the initial volume value. For example, if the current state is the dialogue state, the initial volume value is the volume value at the last disconnection of the terminal 110 and the earphone 120 in the dialogue state.
For example, when the current state is the non-dialogue state, the terminal 110 controls the earphone 120 to play at the manually adjusted volume value in the matched data set; further, if the matched data set indicates that the active noise reduction mode is turned on, the terminal 110 controls the earphone 120 to turn on the active noise reduction mode.
Illustratively, when the current state is the dialogue state, if the matched data set indicates that playing is paused, the terminal 110 controls the earphone 120 to pause playing; if not, the terminal 110 controls the earphone 120 to play the target audio at the manually adjusted volume value in the matched data set. Further, if the matched data set indicates that the pass-through mode is turned on, the terminal 110 controls the earphone 120 to turn on the pass-through mode so that the speech within the current environmental noise can be heard.
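The connect-time dispatch described above — follow the matched data set if one exists, otherwise fall back to the last-session volume — can be sketched with a hypothetical earphone controller (all class and method names are invented for illustration):

```python
class HeadsetStub:
    """Minimal stand-in for the earphone 120 that records commands."""
    def __init__(self):
        self.log = []
    def set_volume(self, v):
        self.log.append(("volume", v))
    def pause(self):
        self.log.append(("pause",))
    def enable_pass_through(self):
        self.log.append(("pass_through",))
    def enable_anc(self):
        self.log.append(("anc",))

def control_on_connect(headset, matched, state, last_volume):
    # No matched data set: use the last-session volume for this state.
    if matched is None:
        headset.set_volume(last_volume)
        return
    if state == 1:                         # dialogue state
        if matched.get("pause"):
            headset.pause()
        else:
            headset.set_volume(matched["volume"])
        if matched.get("pass_through"):
            headset.enable_pass_through()
    else:                                  # non-dialogue state
        headset.set_volume(matched["volume"])
        if matched.get("anc"):
            headset.enable_anc()

h = HeadsetStub()
control_on_connect(h, {"volume": 65, "anc": True}, state=0, last_volume=50)
assert h.log == [("volume", 65), ("anc",)]
h2 = HeadsetStub()
control_on_connect(h2, None, state=1, last_volume=40)
assert h2.log == [("volume", 40)]
```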
In one possible implementation, the earphone 120 is provided with a dialogue indicator light. When the terminal 110 determines that the current state of the user U1 is the dialogue state, the dialogue indicator light of the earphone 120 is controlled to enter an indicating state, for example, blinking intermittently or lighting continuously.
In one example, the earphone 120 may be provided with a hollowed-out pattern, with a dialogue indicator light disposed inside the earphone 120 at the position corresponding to the pattern, so that when the light is in the indicating state, the hollowed-out part of the pattern glows.
The terminal in this embodiment may be any device, such as a mobile phone, that can at least connect to the earphone and transmit data to it.
Fig. 3 is a schematic structural diagram of a terminal embodiment provided by the present invention. As shown in fig. 3, the terminal 110 in this embodiment includes: processor 111, memory 112, antenna 1, antenna 2, communication module 113, audio module 114, speaker 114A, receiver 114B, microphone 114C, earphone interface 114D, keys 115, display 116, and sensor module 117.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the terminal 110. In other embodiments of the present application, the terminal 110 may include more or fewer components than illustrated, some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Certain components of the terminal 110 are described in detail below in conjunction with fig. 3.
The processor 111 may include one or more processors, for example, the processor 111 may include one or more of an application processor (application processor, AP), a modem (modem), a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processor (neural-network processing unit, NPU), etc. Wherein the different processors may be separate devices or may be integrated in one or more processors. For example, the processor 111 may process content to be displayed on a display window of an application program on the terminal 110. For example, the controller may generate operation control signals according to the instruction operation code and the timing signals to complete the control of the instruction and the execution of the instruction.
In one example, a memory may also be provided in the processor 111 for storing instructions and data. In some examples, the memory in processor 111 is a cache memory. The memory may hold instructions or data that is just used or recycled by the processor 111. If the processor 111 needs to reuse the instruction or data, it can be called directly from the memory to avoid repeated access, reduce the waiting time of the processor 111 and improve the efficiency of the system.
In one example, the processor 111 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the terminal 110. In other embodiments of the present application, the terminal 110 may also use different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
Memory 112 may include internal memory 121 for storing computer-executable program code, including instructions. The processor 111 executes various functional applications of the mobile terminal 110 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data (e.g., audio signals, phonebooks, etc.) created during use of the mobile terminal 110, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
In one example, memory 112 also includes an external memory card 122, such as a Micro SD card, coupled via an external memory interface, enabling expansion of the memory capabilities of mobile terminal 110. The external memory card communicates with the processor 111 through an external memory interface to implement a data storage function. For example, files such as music, video, etc. are stored in an external memory card.
In this embodiment, the memory 112 may also store a plurality of data sets, the details of which are described above.
The wireless communication function of the terminal 110 may be implemented by the antenna 1, the antenna 2, the communication module 113, a modem, a baseband processor, and the like. The communication module comprises a wireless communication module and a wired communication module.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in terminal 110 may be configured to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other examples, the antenna may be used in conjunction with a tuning switch.
The mobile communication module may provide a solution for wireless communication including 2G/3G/4G/5G or the like applied on the terminal 110. The mobile communication module may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module may receive electromagnetic waves from at least two antennas including the antenna 1, filter, amplify, etc. the received electromagnetic waves, and transmit the electromagnetic waves to a modem for demodulation. The mobile communication module can amplify the signal modulated by the modem and convert the signal into electromagnetic waves to radiate through the antenna 1. In some examples, at least some of the functional modules of the mobile communication module may be provided in the processor 111. In some examples, at least part of the functional modules of the mobile communication module may be provided in the same device as at least part of the modules of the processor 111.
The modem may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through audio devices (not limited to speakers 114A, receivers 114B, etc.) or displays images or video through display 116. In some examples, the modem may be a stand-alone device. In other examples, the modem may be provided in the same device as the mobile communication module or other functional module, independent of the processor 111. In other examples, the mobile communication module may be a module in a modem.
The wireless communication module may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., applied on the terminal 110. The wireless communication module may be one or more devices that integrate at least one communication processing module. The wireless communication module receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 111. The wireless communication module may also receive a signal to be transmitted from the processor 111, frequency-modulate it, amplify it, and convert it into electromagnetic waves to radiate out through the antenna 2.
In some examples, the antenna 1 of the terminal 110 is coupled to the mobile communication module and the antenna 2 is coupled to the wireless communication module, so that the terminal 110 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), fifth-generation new radio (5G NR), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
Terminal 110 may implement audio functions through an audio module 114, a speaker 114A, a receiver 114B, a microphone 114C, an earphone interface 114D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 114 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 114 may also be used to encode and decode audio signals. In some examples, the audio module 114 may be disposed in the processor 111, or some functional modules of the audio module 114 may be disposed in the processor 111.
The speaker 114A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. Terminal 110 may listen to music, or to hands-free calls, through speaker 114A.
A receiver 114B, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal. When terminal 110 receives a call or voice message, it may receive voice by placing receiver 114B close to the human ear.
The microphone 114C, also referred to as a "mic" or "mike", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak close to the microphone 114C to input a sound signal. The terminal 110 may be provided with at least one microphone 114C. In other examples, the terminal 110 may be provided with two microphones 114C, which can implement a noise reduction function in addition to collecting sound signals. In still other examples, the terminal 110 may be provided with three, four, or more microphones 114C to collect sound signals, reduce noise, identify the source of a sound, implement a directional recording function, and so on.
The earphone interface 114D is used to connect a wired earphone. The earphone interface 114D may be a USB interface, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The keys 115 include a power key, a volume key, an input keyboard, and the like. The keys 115 may be mechanical keys. Or may be a touch key. Terminal 110 may receive key inputs, generating key signal inputs related to user settings of terminal 110 and function control.
Terminal 110 implements display functions via a GPU, display 116, and an application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display 116 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 111 may include one or more GPUs that execute program instructions to generate or change display information.
The display 116 is used to display images, videos, and the like. The display 116 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some examples, the terminal 110 may include one or more displays 116. In one example, the display 116 may be used to display the interface of an application, a display window of an application, and so on.
The sensor module 117 may include a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
In addition, the terminal 110 may further include a motor, an indicator, a subscriber identity module (subscriber identification module, SIM) card interface, a camera, a universal serial bus (universal serial bus, USB) interface, a charge management module, a power management module, a battery, and the like. The charging function of the terminal 110 may be implemented by a charging management module, a power management module, a battery, and the like.
The earphone in this embodiment is at least connected to the terminal, and can receive and process signals sent by the terminal and play sound.
Fig. 4 is a schematic structural diagram of an earphone embodiment provided by the present invention. As shown in fig. 4, the earphone 120 in this embodiment includes: a processor 121, a memory 122, a communication module 123, a speaker 124A, a microphone 124B, a sensor module 125, and a dialogue indicator light 126.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the headset 120. In other embodiments of the present application, the headset 120 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components may be provided. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Certain components of the earphone 120 are described in detail below in conjunction with fig. 4.
The details of the processor 121, the memory 122, the speaker 124A, the microphone 124B, and the communication module 123 are described above. However, compared with the terminal 110, the functions implemented by the earphone 120 are relatively limited: the earphone 120 generally has no external memory interface; the processor 121 mainly implements audio functions (e.g., playing, recording) and processes data collected by the sensor module 125; and the communication module 123 is generally a short-range communication module providing a short-range wireless communication solution, typically a Bluetooth communication module.
The sensor module 125 may include a proximity light sensor, as well as a motion sensor (e.g., a 3-axis acceleration sensor, a gyroscope, a geomagnetic sensor, etc.). Wherein, the proximity light sensor can detect whether the user wears the earphone, and the motion sensor can detect the motion state of the user.
The dialogue indicator light 126 is used to present an indicating state, such as blinking or lighting continuously, when the user is in the dialogue state, so that other users know that the user wearing the earphone can hear them, ensuring the conversation experience. In one example, the earphone may be provided with a hollowed-out pattern, with the dialogue indicator light disposed inside the earphone at the position corresponding to the pattern, so that when the light is in the indicating state, the hollowed-out part of the pattern glows.
A scheme in which the terminal performs earphone control in the earphone control system shown in fig. 1 is described in detail below. The earphone is in a wearing state, and the terminal is connected to the earphone.
Fig. 5 shows a scheme of earphone control by a terminal in the earphone control system shown in fig. 1. As shown in fig. 5, the scheme of the terminal for performing earphone control includes:
Step 501, store, for the dialogue state and/or the non-dialogue state under different environmental noises, the manually adjusted volume value, a first result of whether to turn on the active noise reduction mode, a second result of whether to turn on the pass-through mode, and a third result of whether to pause playing.
In one possible implementation, the terminal has a plurality of data sets stored in advance, each data set indicating the last play policy of the earphone for the user in a state and an environmental noise. Details are referred to above, and will not be described in detail here.
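As a rough illustration of such a data set (all field names and sample values below are assumptions, not taken from the patent), each stored record ties one (user state, ambient noise) pair to the user's last play policy:

```python
from dataclasses import dataclass

@dataclass
class PlayPolicy:
    # One pre-stored data set: the user's last play policy for one
    # (state, ambient-noise) combination. All field names are illustrative.
    state: str                 # "dialogue" or "non_dialogue"
    noise_db: float            # ambient-noise intensity the policy was saved under
    volume: int                # last manually adjusted volume value
    anc_on: bool               # first result: active noise reduction on?
    passthrough_on: bool       # second result: pass-through mode on?
    pause_playback: bool       # third result: pause playing?

# A plausible set of stored records for one user
policies = [
    PlayPolicy("non_dialogue", 40.0, 8, False, False, False),
    PlayPolicy("non_dialogue", 75.0, 12, True, False, False),
    PlayPolicy("dialogue", 55.0, 4, False, True, True),
]
```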
Step 502, determine the current environmental noise and the voice information of the current environmental noise.
The terminal is provided with a microphone. When the earphone is in the wearing state and connected to the terminal, the microphone is turned on and controlled to continuously collect the sound of the surrounding environment, thereby obtaining the current environmental noise.
Furthermore, the terminal can also obtain current information about the environmental noise. The information may include the current intensity value of the environmental noise and the voice information. The voice information is used to represent the human voice in the current noise environment, and may include voiceprint features (values of a plurality of features) of different speakers, speaking content, speech rate, intonation, and other information.
Step 503, determining the current state of the user based on the voice information of the current environmental noise.
In one possible implementation manner, the terminal determines that the current state of the user is a dialogue state when it determines that the current environmental noise contains both the voice of the user and the voice of another user, the user is not reading along or singing along, and the headset is not in a call state.
Here, the voice of another user is not a stop-announcement sound on vehicles such as buses and subways, but may be, for example, the sound of a household device. The call state can be understood as a communication state that does not require face-to-face communication, such as making a phone call or exchanging WeChat voice messages.
In one example, the terminal stores first identification information of the user's voice; correspondingly, the terminal judges whether the voice of the user exists in the current environmental noise based on the first identification information and the voice information of the current environmental noise. The first identification information, used to judge whether the user's dialogue sound exists, can be understood as voiceprint information, which includes respective values of a plurality of features.
Further, if the terminal determines, based on the voice information of the environmental noise, that another voice exists and that the speaking content is not stop-announcement content of buses, subways, and the like, the terminal can consider that the voice of another user exists.
In one example, the terminal stores second identification information of the audio currently played by the earphone. The terminal matches the second identification information against the voice information of the current environmental noise; if the two match, it is determined that the user is reading along or singing along, and otherwise, that the user is not.
In one example, if the user converses with a smart home device, the terminal may store in advance the trigger word of the smart home device and third identification information of the device's sound; when it is identified that the user has spoken the trigger word and the voice information of the environmental noise matches the third identification information, the voice of another user may be considered to exist in the current noise.
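The decision logic of step 503 can be sketched as a few boolean checks. The function and flag names below are illustrative, and the cue extraction itself (voiceprint matching, announcement filtering, follow/sing detection) is assumed to have already been performed:

```python
def classify_state(user_voice: bool,
                   other_voice: bool,
                   is_announcement: bool,
                   follow_or_sing: bool,
                   in_call: bool) -> str:
    # Dialogue: user's voice plus another (non-announcement) voice,
    # the user is not reading/singing along, and the headset is not in a call.
    if (user_voice and other_voice and not is_announcement
            and not follow_or_sing and not in_call):
        return "dialogue"
    # Non-dialogue: no user voice and the headset is not in a call.
    if not user_voice and not in_call:
        return "non_dialogue"
    return "unchanged"  # otherwise keep the previous state
```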
In one possible implementation manner, the terminal determines that the current state of the user is a dialogue state when it determines that the voice of the user exists in the current environmental noise, the user is not reading along or singing along, and the headset is not in a call state. Details are referred to above, and will not be described in detail here.
In one possible implementation manner, the terminal determines that the current state of the user is a non-dialogue state when it determines that the voice of the user does not exist in the current environmental noise and the headset is not in a call state. Details of how to judge whether the current environmental noise includes the voice of the user are referred to above, and will not be described in detail herein.
Step 504, determine whether there is a currently matched volume value based on the current environmental noise and the current state. If there is: when the current state is a non-dialogue state, execute step 505a1; when the current state is a dialogue state, execute step 505b1. If there is not: when the current state is a non-dialogue state, execute step 506a; when the current state is a dialogue state, execute step 506b1.
In one example, a data set, among the plurality of data sets, whose state and environmental noise are the same as the current state and the current environmental noise is determined, and the manually adjusted volume value of that data set is taken as the currently matched volume value.
It should be noted that the currently matched volume value is the volume value preferred by the user.
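A minimal sketch of the matching in step 504 (the dict keys and the intensity tolerance are assumptions; the patent only requires the stored state and noise to be the same as the current ones):

```python
def match_volume(policies, state, noise_db, tol=0.0):
    """Return the stored manually adjusted volume for the data set whose
    state and noise intensity match the current ones, or None."""
    for p in policies:
        if p["state"] == state and abs(p["noise_db"] - noise_db) <= tol:
            return p["volume"]
    return None  # no match: fall through to step 506a / 506b1

saved = [{"state": "non_dialogue", "noise_db": 40.0, "volume": 8},
         {"state": "dialogue", "noise_db": 55.0, "volume": 4}]
```

When `match_volume` returns `None`, the flow continues with the interpolation of step 506a (non-dialogue state) or the pause decision of step 506b1 (dialogue state).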
Step 505a1, determining whether to start an active noise reduction mode based on the current environmental noise and the current state; if yes, step 505a2 is performed, and if not, step 505c is performed.
Step 505a2, the earphone is controlled to play the target audio based on the currently matched volume value and start the active noise reduction mode, and step 507 is executed.
For example, a matched data set, among the plurality of data sets, whose state and environmental noise are the same as the current ones is determined; if the first result of whether to turn on the active noise reduction mode in that data set is yes, the earphone is controlled to enter the active noise reduction mode based on the first setting information of the active noise reduction mode in the data set; otherwise, the earphone does not need to turn on the active noise reduction mode. In this way, the user's expectation is better met, and the user does not need to manually and repeatedly adjust the settings to adapt to the current environment.
Step 505b1, determine whether the pass-through mode needs to be turned on based on the current environmental noise and the current state; if yes, step 505b2 is performed, and if not, step 505c is performed.
Step 505b2, the earphone is controlled to play the target audio and turn on the pass-through mode based on the currently matched volume value, and step 507 is executed.
For example, determining a matched data set which is the same as the current state and the current environmental noise in the plurality of data sets, and if the second result of whether to start the pass-through mode in the set is yes, controlling the earphone to enter the pass-through mode based on the second setting information of the pass-through mode in the data set; otherwise, the earphone does not need to start a through mode, so that the expectation of the user on playing the content is better met, and the user can communicate smoothly.
In addition, the earphone plays the processed environmental noise sent by the terminal. For example, if the terminal determines that the terminal can collect the sound of the surrounding environment more accurately than the earphone, the terminal can process the current environmental noise according to the second setting information in the matched data set, and the earphone plays the environmental noise processed by the terminal.
For example, the terminal judges that it is not placed in the user's pocket and determines the position of the object the user is talking with, that is, the position of the sound source, based on a sound source localization technique; then, if the terminal judges that the distance between its microphone and the sound source is smaller than the distance between the earphone's microphone and the sound source, the terminal can be considered able to collect the sound of the surrounding environment more accurately.
Step 505c, controlling the earphone to play the target audio based on the currently matched volume value, and executing step 507.
It is noted that if the currently matched volume value is equal to 0, the playback is muted.
Step 506a, determine a target volume value under the current environmental noise based on the pre-stored manually adjusted volume values in the non-dialogue state under different environmental noises, control the earphone to play the target audio based on the target volume value under the current environmental noise, and execute step 507.
In one possible implementation, the terminal determines a relationship between the intensity value of the environmental noise in the non-talk state and the manually adjusted volume value based on the stored intensity value and the manually adjusted volume value of each of the different environmental noise in the non-talk state, and determines the target volume value for the current environmental noise based on the relationship.
In one possible implementation, the terminal determines the target volume value under the current ambient noise based on a maximum intensity value and a minimum intensity value of the stored intensity values of the different ambient noise in the non-talk state, respectively. The specific implementation mode is as follows:
implementation 1: if the current ambient noise intensity value is less than or equal to the minimum noise value, the target volume value may be the manually adjusted volume value in the non-talk state and at the minimum intensity value.
Implementation 2: if the current ambient noise intensity value is greater than or equal to the maximum noise value, the target volume value may be the manually adjusted volume value in the non-talk state and at the maximum intensity value.
Further, the terminal can also start an active noise reduction mode, so that user experience is ensured.
Implementation 3: if the intensity value of the current environmental noise is greater than the minimum intensity value and smaller than the maximum intensity value, the terminal can determine, among the different environmental noises in the non-dialogue state, a first noise and a second noise whose intensity values differ least from that of the current environmental noise, wherein the intensity value of the first noise is greater than that of the current environmental noise and the intensity value of the second noise is smaller than it. Correspondingly, if the volume value of the earphone lies within the interval formed by the manually adjusted volume values of the first noise and the second noise in the non-dialogue state, the target volume value is the current volume value of the earphone, that is, the volume value is kept unchanged; otherwise, the target volume value is determined by interpolation, for example linear interpolation, based on the manually adjusted volume values of the first noise and the second noise in the non-dialogue state.
Implementation 4: determine an error tolerance range for the intensity value of the current environmental noise, determine, among the different environmental noises in the non-dialogue state, a plurality of environmental noises whose intensity values fall within that range, and form an interval from the manually adjusted volume values of these environmental noises in the non-dialogue state. Correspondingly, if the volume value of the earphone is within the interval, the target volume value is the volume value of the earphone, that is, it is kept unchanged; otherwise, the target volume value is determined by interpolation based on the manually adjusted volume values of these environmental noises in the non-dialogue state.
Implementation 5: if the intensity value of the current environmental noise is greater than the minimum intensity value and smaller than the maximum intensity value, the terminal can determine, among the different environmental noises in the non-dialogue state, the environmental noise whose intensity value differs least from that of the current environmental noise, the difference being not greater than a preset threshold; the target volume value is the manually adjusted volume value under that environmental noise in the non-dialogue state.
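Implementations 1 to 3 amount to clamping at the stored extremes and interpolating linearly in between; a sketch (the function name and sample data are illustrative):

```python
import bisect

def target_volume(current_db, samples):
    """samples: noise intensity (dB) -> manually adjusted volume,
    all recorded in the non-dialogue state."""
    levels = sorted(samples)
    if current_db <= levels[0]:          # implementation 1: at or below the minimum
        return samples[levels[0]]
    if current_db >= levels[-1]:         # implementation 2: at or above the maximum
        return samples[levels[-1]]
    # implementation 3: bracket the current intensity, then linear interpolation
    hi = bisect.bisect_right(levels, current_db)
    lo_db, hi_db = levels[hi - 1], levels[hi]
    frac = (current_db - lo_db) / (hi_db - lo_db)
    return samples[lo_db] + frac * (samples[hi_db] - samples[lo_db])

samples = {40.0: 8, 60.0: 12, 80.0: 16}
```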
Step 506b1, determining whether to pause playing based on the current environmental noise and the current state; if yes, go to step 506b2; if not, step 506c1 is performed.
For example, a matched data set which is the same as the current state and the current environmental noise in the plurality of data sets is determined, if the third result of whether to pause playing in the set is yes, the playing is determined to be paused, otherwise, playing does not need to be paused.
Step 506b2, determine an environmental sound processing strategy under the current environmental noise based on the pre-stored setting information of the pass-through mode when playing is paused under each of the different environmental noises in the dialogue state.
In one possible implementation manner, the terminal may determine the environmental sound processing strategy under the current environmental noise based on the second setting information of the pass-through mode when playing is paused under each of the different environmental noises in the dialogue state. The environmental sound processing strategy may include whether to increase the sound, whether to start dialogue enhancement, a pitch value, whether to reduce sounds other than human voice in the environmental noise, and the like.
For example, the environmental sound processing strategy may be the second setting information of the pass-through mode when playback is paused under the dialogue state and a target noise, or the most often used second setting information of the pass-through mode when playback is paused under the dialogue state and the target noise; the target noise is the noise, among the different environmental noises in the dialogue state, whose intensity value differs least from that of the current environmental noise.
In one example, the ambient sound processing strategy includes the volume value of a human voice and the volume value of other noise.
Based on the stored intensity values of the different environmental noises in the dialogue state and the second setting information of the pass-through mode when playback is paused, the terminal obtains a first relation curve between the intensity value of the environmental noise and the volume value of the human voice in the environmental noise, and a second relation curve between the intensity value of the environmental noise and the volume value of sounds other than the human voice in the environmental noise. The volume value of the speaking content in the current environmental noise is then determined based on the first curve, and the volume value of sounds other than the human voice in the current environmental noise is determined based on the second curve.
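One simple way to realize such a relation curve is a least-squares line over the stored (noise intensity, volume) pairs; a sketch with made-up sample points (the patent does not specify the fitting method, so this is only one assumed option):

```python
def fit_line(xs, ys):
    """Least-squares fit y = a*x + b over stored sample points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Illustrative pairs stored for the dialogue state with playback paused:
# (noise dB, human-voice volume) for the first relation curve
a1, b1 = fit_line([40.0, 60.0, 80.0], [10.0, 12.0, 14.0])
voice_volume_at_70db = a1 * 70.0 + b1  # evaluate the curve at the current noise
```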
Step 506b3, controlling the earphone to play the environmental noise processed based on the environmental sound processing strategy and suspending playing the target audio.
It should be understood that, generally, the earphone collects and processes the sound of the surrounding environment, and if the terminal determines that the terminal can collect and process the sound of the surrounding environment more accurately than the earphone, the terminal can collect and process the sound of the surrounding environment, and the earphone plays the processed environmental noise.
In this case, the environmental sound processing strategy includes the volume value of the human voice and the volume value of other noise, and the terminal controls the earphone to play the human voice in the current environmental noise based on the former, and sounds other than the human voice in the current environmental noise based on the latter.
Step 506c1, determine a target volume value and/or an environmental sound processing strategy under the current environmental noise based on the pre-stored manually adjusted volume values under different environmental noises in the dialogue state and the second result of whether to turn on the pass-through mode.
In one possible implementation, if the terminal determines that the pass-through mode was not turned on under the environmental noise, among the different environmental noises in the dialogue state, that is similar to the current environmental noise, only the target volume value under the current noise is determined.
In one example, the terminal may determine, based on the above method of determining the target volume value, the target volume value for the dialogue state with the pass-through mode not turned on under the current environmental noise.
In one possible implementation, if the terminal determines that the pass-through mode was turned on under the environmental noise, among the different environmental noises in the dialogue state, that is similar to the current environmental noise, the terminal determines both the target volume value and the environmental sound processing strategy under the current noise.
In one example, the terminal may determine, based on the above-described method of determining the target volume value, the target volume value for the dialogue state with the pass-through mode turned on under the current ambient noise.
In one example, the terminal may determine, based on the method described above in step 506b2, the environmental sound processing strategy for un-paused playback under the dialogue state and the current environmental noise.
See above for details.
In one possible implementation manner, if the terminal determines that, under the environmental noise similar to the current environmental noise among the different environmental noises in the dialogue state, the earphone volume value was kept unchanged, the terminal determines an environmental sound processing strategy and takes the current volume value of the earphone as the target volume value.
In one example, the terminal may determine the dialogue state and the environmental sound processing policy under the current environmental noise that keeps the earphone volume unchanged based on the method described in step 506b 2.
Step 506c2, control the earphone to play the target audio according to the target volume value and/or play the environmental noise processed according to the environmental sound processing strategy, and execute step 507.
Illustratively, the earphone is controlled to play the target audio according to the target volume value and/or the earphone is controlled to play the environmental noise processed according to the environmental sound processing strategy.
For example, the earphone is controlled to play the environmental noise processed according to the environmental sound processing strategy, and the volume of the target audio is not changed.
For example, under the condition that the terminal can collect the sound of the surrounding environment more accurately than the earphone, the terminal processes and determines the current environmental noise based on the environmental sound processing strategy, and sends the processed current environmental noise to the earphone for playing.
Step 507, when manual volume adjustment is detected, determine the current manually adjusted volume value, and store it as the manually adjusted volume value under the current state of the user and the current ambient noise.
When the user adjusts the volume, the manually adjusted volume value already stored under the current state and the current environmental noise is refreshed, ensuring that the user's usage habits are met in real time.
In addition, a first result of whether to start the active noise reduction mode, a second result of whether to start the pass-through mode, and a third result of whether to pause playing under the current state and the current environmental noise are also required to be updated in real time.
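The refresh in step 507 can be sketched as an upsert over the stored data sets (the dict keys and the noise tolerance are assumptions, not from the patent):

```python
def refresh_policy(policies, state, noise_db, new_volume, tol=3.0):
    """On a detected manual adjustment, overwrite the volume of the
    matching (state, noise) data set, or create a new data set."""
    for p in policies:
        if p["state"] == state and abs(p["noise_db"] - noise_db) <= tol:
            p["volume"] = new_volume   # refresh the previously stored value
            return p
    p = {"state": state, "noise_db": noise_db, "volume": new_volume}
    policies.append(p)                 # first adjustment under this condition
    return p

store = [{"state": "dialogue", "noise_db": 55.0, "volume": 4}]
refresh_policy(store, "dialogue", 56.0, 6)      # within tolerance: refresh
refresh_policy(store, "non_dialogue", 40.0, 8)  # new condition: append
```

The first and second results (active noise reduction, pass-through) and the third result (pause) could be refreshed in the same way.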
Next, an earphone control method provided in the embodiment of the present application is described based on the above-described scheme of performing earphone control by the terminal in the earphone control system shown in fig. 1. It will be appreciated that some or all of this method may be found in the description above of a scheme for earphone control of a terminal in an earphone control system as shown in fig. 1.
Fig. 6 is a flowchart of a headset control method according to an embodiment of the present application. It will be appreciated that the method may be performed by any system comprising a terminal and an earpiece, wherein the terminal and the earpiece are connected and the earpiece is in a worn state. As shown in fig. 6, the earphone control method includes:
Step 601, the terminal stores the volume values manually adjusted under different environmental noises when the user is in a conversational state and/or a non-conversational state.
In one possible implementation, the terminal may store the last manually adjusted volume value for the dialog state under different noises and/or the last manually adjusted volume value for the non-dialog state under different noises.
Step 602, the terminal determines the current environmental noise.
In one possible implementation manner, the terminal controls the microphone to collect the sound of the surrounding environment, so as to obtain the current environmental noise, and further obtain the current intensity value of the environmental noise and the voice information.
Step 603, the terminal determines the current state of the user.
In one possible implementation manner, the terminal determines that the current state of the user is a dialogue state when it is determined that the current environmental noise contains the sound of the user and the sound of other users, the user does not follow or sing, and the earphone is not in the dialogue state.
In one possible implementation manner, the terminal determines that the current state of the user is a non-dialogue state when it is determined that the current ambient noise does not exist in the voice of the user and the earphone is not in the dialogue state.
Details are described above, and will not be repeated here.
Step 604, the terminal determines the currently matched volume value according to the current state and the current environmental noise.
In one possible implementation, the terminal uses the stored manually adjusted volume value for the current state and the current intensity value of the environmental noise as the currently matched volume value. For example, the terminal stores a volume value A2 for environmental noise with intensity value d in the non-dialogue state; if the current state is the non-dialogue state and the intensity value of the current environmental noise is d, the currently matched volume value is A2.
Further, the following applies when the terminal does not obtain a currently matched volume value according to the current state and the current environmental noise.
If the current state is a dialogue state:
in one possible implementation, the terminal stores a third result of whether to pause the playback under different environmental noise in the dialogue state.
In one example, the terminal pauses the playing of the earphone when it determines, according to the current state and the current environmental noise, that the currently matched third result is yes. Further, the earphone is controlled to play the speaking content of the current noise environment according to the methods described in steps 506b2 and 506b3. Otherwise, the earphone is controlled to play the speaking content of the current noise environment and/or the target audio according to the methods described in steps 506c1 and 506c2.
If the current state is a non-dialogue state, the headset is controlled as described in step 506a above.
In one possible implementation manner, the terminal may determine, among the different environmental noises in the current state, the environmental noise similar to the current environmental noise (for example, the one whose intensity value differs least from it and by no more than a preset threshold), and control the playing of the earphone based on the play policy of that environmental noise in the current state, so as to satisfy the user's habits. The content of the play policy is referred to above, and will not be described in detail.
Step 605, the earphone plays according to the currently matched volume value.
As a possible implementation, the terminal stores a first result of whether to actively reduce noise under each of different environmental noises in the non-dialogue state. Correspondingly, the terminal controls the earphone to actively reduce noise when it determines, according to the current state and the current environmental noise, that the currently matched first result is yes.
For example, in the non-dialogue state, the terminal controls the earphone to play according to the currently matched volume value and turns on the active noise reduction mode. Details are described above, and will not be repeated here.
In one example, the terminal stores a first result of whether to actively reduce noise in a non-dialogue state at each of the different ambient noise manually adjusted volume values, i.e., a first result of whether to actively reduce noise in the last manually adjusted volume.
As a possible implementation, the terminal stores a second result of whether to play the environmental noise under each of different environmental noises in the dialogue state. Correspondingly, the terminal controls the earphone to play the sound of the current environmental noise when it determines, according to the current state and the current environmental noise, that the currently matched second result is yes.
For example, in the dialogue state, the terminal controls the earphone to play according to the currently matched volume value and turns on the pass-through mode. Details are described above, and will not be repeated here.
In one example, the terminal stores a second result of whether the dialog state plays the environmental noise at the respective manually adjusted volume values of the different environmental noise, i.e., whether the environmental noise was played the last time the volume was manually adjusted.
Therefore, in the embodiment, the volume of the earphone is adjusted through the historical volume setting of the user, so that the expectation of the user on volume adjustment can be better met, and the user does not need to manually and repeatedly adjust the volume setting to adapt to the current environment. In addition, if the user needs to adjust the volume, the volume of the earphone is not required to be adjusted repeatedly because the volume is adjusted based on the latest volume setting of the user, so that the volume of the earphone can be adjusted rapidly.
Based on the method in the above embodiment, the embodiment of the present application provides an earphone control device. The apparatus may be used to implement the methods provided in the method embodiments described above. The detailed description of the operations performed by the earphone control device in the above-mentioned various possible designs may refer to the description in the embodiments of the method provided in this embodiment, and the possible hardware designs may refer to the description in the embodiments of the terminal 110 provided in this embodiment, which will not be described in detail herein.
Based on the device in the above embodiment, the embodiment of the present application further provides an electronic device, which includes the earphone control device provided in the above embodiment.
It is to be appreciated that the processor in embodiments of the present application may be a central processing unit (central processing unit, CPU), but may also be other general purpose processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), field programmable gate arrays (field programmable gate array, FPGA) or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof. The general purpose processor may be a microprocessor, but in the alternative, it may be any conventional processor.
The method steps in the embodiments of the present application may be implemented by hardware, or may be implemented by a processor executing software instructions. The software instructions may be composed of corresponding software modules that may be stored in random access memory (random access memory, RAM), flash memory, read-only memory (read-only memory, ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in or transmitted across a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
It will be appreciated that the various numerical numbers referred to in the embodiments of the present application are merely for ease of description and are not intended to limit the scope of the embodiments of the present application.

Claims (21)

1. A headset control method, applied to a system comprising a terminal and a headset, wherein the terminal is connected with the headset, and the headset is in a wearing state, the method comprising:
the terminal stores the volume value manually adjusted under different environmental noises when the user is in a dialogue state and/or a non-dialogue state;
the terminal determines the current environmental noise;
the terminal determines the current state of the user;
the terminal determines the currently matched volume value according to the current state and the current environmental noise;
and the earphone plays according to the currently matched volume value.
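The lookup described in claim 1 — storing manually adjusted volume values per (user state, ambient-noise level) and later retrieving the one matching the current situation — can be sketched as follows. The class name, the 5 dB noise bucketing, and the method names are illustrative assumptions, not part of the claim.

```python
# Illustrative sketch of claim 1's volume matching: manually adjusted
# volume values are stored per (user state, ambient-noise bucket) and
# looked up again for the current state and noise level. The 5 dB
# bucketing and all names are assumptions for illustration only.

class VolumeProfile:
    BUCKET_DB = 5  # quantize ambient noise into 5 dB buckets (assumed)

    def __init__(self):
        self._table = {}  # (state, noise_bucket) -> volume value

    def _bucket(self, noise_db):
        return int(noise_db // self.BUCKET_DB)

    def store(self, state, noise_db, volume):
        """Record a manually adjusted volume for this state/noise pair."""
        self._table[(state, self._bucket(noise_db))] = volume

    def match(self, state, noise_db):
        """Return the stored volume for the current state and noise,
        or None if nothing has been recorded (claims 5/12/17 describe
        the behavior for that unmatched case)."""
        return self._table.get((state, self._bucket(noise_db)))


profile = VolumeProfile()
profile.store("dialogue", 62.0, 40)       # user turned volume down while talking
profile.store("non-dialogue", 78.0, 75)   # user turned volume up in loud noise
print(profile.match("dialogue", 63.5))    # same 60-65 dB bucket -> 40
print(profile.match("non-dialogue", 50))  # no entry for this bucket -> None
```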
2. The method according to claim 1, wherein the method further comprises:
the terminal stores first identification information of the voice of the user;
the terminal determines the voice information of the current environmental noise;
and the terminal determines the current state of the user based on the voice information of the current environmental noise and the first identification information.
3. The method of claim 2, wherein the terminal determining the current state of the user based on the voice information of the current ambient noise and the first identification information comprises:
the terminal stores second identification information of the audio currently played by the earphone;
the terminal determines a current state of the user based on the voice information of the current environmental noise, the first identification information and the second identification information.
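The state determination of claims 2-3 can be sketched as below: voice segments in the ambient noise are compared against the user's stored voiceprint (first identification information), while segments matching the audio the earphone is currently playing (second identification information) are excluded so leaked playback is not mistaken for conversation. The similarity function, the 0.8 threshold, and the tuple "embeddings" are assumptions.

```python
# Sketch of claims 2-3: decide dialogue vs. non-dialogue state from
# ambient voice activity, using the user's voiceprint and a fingerprint
# of the currently played audio. Threshold and names are assumptions.

def is_dialogue_state(ambient_voice_ids, user_voiceprint, playback_id,
                      similarity, threshold=0.8):
    """Return True if the user's own voice appears in the ambient
    noise, excluding anything that matches the earphone's playback."""
    for vid in ambient_voice_ids:
        if similarity(vid, playback_id) >= threshold:
            continue  # leaked playback audio, not a real conversation
        if similarity(vid, user_voiceprint) >= threshold:
            return True  # the user is speaking -> dialogue state
    return False


# Toy similarity over tuple "embeddings": 1.0 if equal, else 0.0.
sim = lambda a, b: 1.0 if a == b else 0.0
user, song = ("user",), ("song",)
print(is_dialogue_state([song, user], user, song, sim))  # True
print(is_dialogue_state([song], user, song, sim))        # False
```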
4. The method according to claim 1, wherein the method further comprises:
the terminal stores a first result of whether the user actively reduces noise under each of different environmental noises in a non-dialogue state;
the terminal controls the earphone to actively reduce noise under the condition that the first result of the current matching is determined to be yes according to the current state and the current environmental noise;
or, the method further comprises:
the terminal stores a second result of whether the user plays the environmental noise or not in each of different environmental noises in a dialogue state;
and the terminal controls the earphone to play the current environmental noise under the condition that the second result of the current matching is yes according to the current state and the current environmental noise.
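The learned noise-handling preferences of claim 4 can be sketched the same way as the volume table: per noise bucket, the terminal remembers whether the user enabled active noise cancellation (non-dialogue) or ambient pass-through (dialogue), and replays that choice when the situation recurs. The bucketing and all names are illustrative assumptions.

```python
# Sketch of claim 4's first/second results: booleans stored per
# (state, noise bucket) drive ANC or ambient pass-through decisions.
# Bucket width and names are assumptions for illustration only.

def bucket(noise_db, width=5):
    return int(noise_db // width)

anc_pref = {}          # non-dialogue: noise bucket -> user enabled ANC?
passthrough_pref = {}  # dialogue: noise bucket -> user played ambient noise?

def record(state, noise_db, enabled):
    table = anc_pref if state == "non-dialogue" else passthrough_pref
    table[bucket(noise_db)] = enabled

def decide(state, noise_db):
    """Return 'anc', 'passthrough', or None when nothing was learned."""
    if state == "non-dialogue" and anc_pref.get(bucket(noise_db)):
        return "anc"          # first result matched "yes" -> reduce noise
    if state == "dialogue" and passthrough_pref.get(bucket(noise_db)):
        return "passthrough"  # second result matched "yes" -> play ambient
    return None

record("non-dialogue", 80, True)
record("dialogue", 65, True)
print(decide("non-dialogue", 82))  # same bucket -> 'anc'
print(decide("dialogue", 40))      # unlearned bucket -> None
```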
5. The method of claim 1, wherein, when the terminal does not match a volume value based on the current state and the current environmental noise, the method further comprises:
when the current state is the non-dialogue state, the volume of the earphone is kept unchanged, active noise reduction is performed, or the earphone plays according to a target volume value; wherein the target volume value is determined by the terminal based on the current environmental noise, the current state, and the volume values manually adjusted under different environmental noises when the user is in a non-dialogue state;
when the current state is the dialogue state, the earphone pauses playing the target audio, reduces the volume of the target audio, or plays the current environmental noise; wherein the target audio does not include the current environmental noise; or alternatively,
the terminal stores a third result of whether the user pauses playing under each of different environmental noises in a dialogue state, and controls the earphone to pause playing when the third result matched with the current state and the current environmental noise is yes.
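The unmatched-case fallback of claim 5 can be sketched as follows: in the dialogue state the earphone yields to the conversation, while in the non-dialogue state a target volume may be derived from previously learned values. The nearest-bucket policy for the target volume is an assumption, not specified by the claim.

```python
# Sketch of claim 5's fallback when no stored volume matches the
# current state and noise. The nearest-bucket target-volume policy
# and all names are illustrative assumptions.

def fallback_action(state, noise_db, non_dialogue_table, bucket_width=5):
    if state == "dialogue":
        # pause, duck, or pass ambient sound through (dialogue branch)
        return "pause_or_duck_or_passthrough"
    if not non_dialogue_table:
        return "keep_volume"  # nothing learned yet -> volume unchanged
    # target volume from the closest learned noise bucket (assumed policy)
    b = int(noise_db // bucket_width)
    nearest = min(non_dialogue_table, key=lambda k: abs(k - b))
    return ("play_at", non_dialogue_table[nearest])

table = {12: 40, 16: 75}  # noise bucket -> learned volume
print(fallback_action("non-dialogue", 85, table))  # ('play_at', 75)
print(fallback_action("dialogue", 85, table))
```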
6. The method of claim 1, wherein the headset is provided with a conversation-prompting light, the method further comprising:
the terminal determines that the state of the user is the dialogue state;
and the dialogue prompt lamp of the earphone enters a prompt state.
7. The method according to claim 1, wherein the method further comprises:
when the terminal detects that the volume is manually adjusted, determining the current manually adjusted volume value;
the terminal stores the current manually adjusted volume value as a manually adjusted volume value under the current state of the user and the current ambient noise.
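The learning step of claim 7 can be sketched as below: on a manual volume change, the terminal snapshots the current user state and ambient noise and files the new volume under that pair, so the table used for matching keeps tracking the user's habits. All names and the bucket width are illustrative assumptions.

```python
# Sketch of claim 7: record a detected manual volume adjustment under
# the current (state, noise bucket) key. Names are assumptions.

def on_manual_volume_change(new_volume, current_state, current_noise_db,
                            table, bucket_width=5):
    key = (current_state, int(current_noise_db // bucket_width))
    table[key] = new_volume  # overwrite: the latest adjustment wins
    return key

table = {}
key = on_manual_volume_change(55, "dialogue", 63, table)
print(table[key])  # 55
```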
8. A headset control method, applied to a terminal, wherein the terminal is connected with an earphone, and the earphone is in a wearing state, the method comprising:
storing the volume value manually adjusted under different environmental noises when the user is in a dialogue state and/or a non-dialogue state;
determining current ambient noise;
determining a current state of the user;
and determining the currently matched volume value according to the current state and the current environmental noise so that the earphone plays according to the currently matched volume value.
9. The method of claim 8, wherein the determining the current state of the user comprises:
storing first identification information of the voice of the user;
determining the voice information of the current environmental noise;
and determining the current state of the user based on the voice information of the current environmental noise and the first identification information.
10. The method of claim 9, wherein the determining the current state of the user based on the voice information of the current ambient noise and the first identification information comprises:
storing second identification information of the audio currently played by the earphone;
and determining the current state of the user based on the voice information of the current environmental noise, the first identification information and the second identification information.
11. The method of claim 8, wherein the method further comprises:
storing a first result of whether the user actively reduces noise or not for different environmental noises in a non-dialogue state;
controlling the earphone to actively reduce noise under the condition that the first result of current matching is determined to be yes according to the current state and the current environmental noise; or alternatively,
the method further comprises the steps of:
storing a second result of whether the user plays the environmental noise or not in each of different environmental noises in a dialogue state;
and controlling the earphone to play the current environmental noise under the condition that the second result of current matching is yes according to the current state and the current environmental noise.
12. The method of claim 8, wherein, when no volume value is matched based on the current state and the current environmental noise, the method further comprises:
when the current state is the non-dialogue state, controlling the volume of the earphone to remain unchanged, performing active noise reduction, or playing according to a target volume value; wherein the target volume value is determined by the terminal based on the current environmental noise, the current state, and the volume values manually adjusted under different environmental noises when the user is in a non-dialogue state;
when the current state is the dialogue state, controlling the earphone to pause playing the target audio, reduce the volume of the target audio, or play the current environmental noise; wherein the target audio does not include the current environmental noise; or alternatively,
storing a third result of whether the user pauses playing under each of different environmental noises in a dialogue state, and controlling the earphone to pause playing when the third result matched with the current state and the current environmental noise is yes.
13. The method of claim 8, wherein the method further comprises:
when detecting that the user manually adjusts the volume, determining the current manually adjusted volume value;
and storing the current manually adjusted volume value as the manually adjusted volume value under the current state of the user and the current environmental noise.
14. A headset control method, applied to an earphone, wherein the earphone is connected with a terminal and the earphone is in a wearing state, and the terminal stores, while the earphone is in the wearing state, volume values manually adjusted under different environmental noises when the user is in a dialogue state and/or a non-dialogue state; the method comprising:
playing according to the currently matched volume value; wherein the currently matched volume value is determined by the terminal, while the user wears the earphone, based on the determined current state of the user and the current environmental noise.
15. The method of claim 14, wherein the earphone is provided with a dialog prompt; the method further comprises the steps of:
and when the terminal determines that the current state of the user is the dialogue state, controlling the dialogue prompting unit to enter a prompt state.
16. The method of claim 14, wherein the terminal stores a first result of whether the user actively reduces noise under each of different environmental noises in a non-dialogue state and/or a second result of whether the user plays the environmental noise under each of different environmental noises in a dialogue state, the method further comprising:
under the condition that the terminal determines that the first result of current matching is yes according to the current state and the current environmental noise, actively reducing noise;
and playing the speaking content in the current environmental noise under the condition that the terminal determines that the second result of the current matching is yes according to the current state and the current environmental noise.
17. The method of claim 14, wherein, when the terminal does not match a volume value based on the current state and the current environmental noise, the method further comprises:
when the current state is the non-dialogue state, keeping the volume unchanged, performing active noise reduction, or playing according to a target volume value; wherein the target volume value is determined by the terminal based on the current environmental noise, the current state, and the volume values manually adjusted under different environmental noises when the user is in a dialogue state and/or a non-dialogue state;
when the current state is the dialogue state, pausing playing the target audio, reducing the volume of the target audio, or playing the current environmental noise; wherein the target audio does not include the current environmental noise;
and when the terminal stores a third result of whether the user pauses playing under each of different environmental noises in a dialogue state, pausing playing when the third result matched with the current state and the current environmental noise is yes.
18. A headset control system, comprising: a terminal and an earphone; wherein the system is adapted to perform the method of any of claims 1 to 7.
19. An earphone control device, characterized by comprising:
at least one memory for storing a program;
at least one processor for executing the memory-stored program, which processor is adapted to perform the method according to any of claims 8-13 or to perform the method according to any of claims 14-17, when the memory-stored program is executed.
20. A headset control device, characterized in that the device runs computer program instructions to perform the method according to any of claims 8-13 or to perform the method according to any of claims 14-17.
21. A computer storage medium having instructions stored therein which, when executed on a computer, cause the computer to perform the method of any of claims 8-13 or to perform the method of any of claims 14-17.
CN202111532671.9A 2021-12-15 2021-12-15 Earphone control method, device and system and computer readable storage medium Pending CN116264655A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111532671.9A CN116264655A (en) 2021-12-15 2021-12-15 Earphone control method, device and system and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN116264655A true CN116264655A (en) 2023-06-16

Family

ID=86723537

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111532671.9A Pending CN116264655A (en) 2021-12-15 2021-12-15 Earphone control method, device and system and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116264655A (en)

Similar Documents

Publication Publication Date Title
US10410634B2 (en) Ear-borne audio device conversation recording and compressed data transmission
US10582312B2 (en) Hearing aid and a method for audio streaming
WO2021008614A1 (en) Method for establishing communication connection and wearable device
US9875753B2 (en) Hearing aid and a method for improving speech intelligibility of an audio signal
CN113169760B (en) Wireless short-distance audio sharing method and electronic equipment
CN113794797B (en) Terminal equipment and method for picking up sound through Bluetooth peripheral
US9826303B2 (en) Portable terminal and portable terminal system
CN113542960B (en) Audio signal processing method, system, device, electronic equipment and storage medium
US20230091607A1 (en) Psychoacoustics-based audio encoding method and apparatus
CN114466097A (en) Mobile terminal capable of preventing sound leakage and sound output method of mobile terminal
CN113301544B (en) Method and equipment for voice intercommunication between audio equipment
CN116795753A (en) Audio data transmission processing method and electronic equipment
CN114125616B (en) Low-power consumption method and device of wireless earphone, wireless earphone and readable storage medium
CN115835079B (en) Transparent transmission mode switching method and switching device
CN116264655A (en) Earphone control method, device and system and computer readable storage medium
CN113709906B (en) Wireless audio system, wireless communication method and equipment
CN114667744B (en) Real-time communication method, device and system
CN113407076A (en) Method for starting application and electronic equipment
CN115175159B (en) Bluetooth headset playing method and equipment
CN116546126B (en) Noise suppression method and electronic equipment
CN114696961B (en) Multimedia data transmission method and equipment
CN117093182B (en) Audio playing method, electronic equipment and computer readable storage medium
CN117061949B (en) Earphone volume adjusting method and electronic equipment
WO2023160214A1 (en) Bluetooth earphone, audio output method and audio output system
CN117690423A (en) Man-machine interaction method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination