CN117751585A - Control method and device of intelligent earphone, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117751585A
Authority
CN
China
Prior art keywords
user, intelligent earphone, data, voice, target
Legal status
Pending
Application number
CN202280004138.1A
Other languages
Chinese (zh)
Inventor
彭聪 (Peng Cong)
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Publication of CN117751585A


Abstract

A control method and apparatus for an intelligent earphone, an electronic device, and a storage medium. The method includes: acquiring audio data played on a terminal device (101); identifying the audio data to determine that the terminal device is in a target working scene (102), in which the microphone of the intelligent earphone is in a voice acquisition state; acquiring a mode switching instruction of the intelligent earphone in the target working scene (103); and controlling the intelligent earphone to switch between at least two voice modes according to the mode switching instruction (104). By controlling the intelligent earphone to switch between the at least two modes in the target working scene, the user can use the intelligent earphone without frequently taking it off and putting it back on, which improves the convenience of using the intelligent earphone.

Description

Control method and device of intelligent earphone, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer application technologies, and in particular, to a control method and apparatus for an intelligent earphone, an electronic device, and a storage medium.
Background
Terminal devices are often used together with intelligent earphones, which brings convenience to daily life. In particular, intelligent earphones with a noise reduction function can isolate surrounding noise and are increasingly popular.
However, when a user wearing an intelligent earphone with the noise reduction function needs to communicate with people nearby, the user has to frequently take off the earphone to talk and then put it back on, which reduces the convenience of using the intelligent earphone.
Disclosure of Invention
The present application provides a control method and apparatus for an intelligent earphone, an electronic device, and a storage medium, so as to improve the convenience of using the intelligent earphone.
An embodiment of an aspect of the present application provides a method for controlling an intelligent earphone, including:
acquiring audio data played on the terminal equipment;
identifying the audio data to determine that the terminal device is in a target working scene, in which the microphone of the intelligent earphone is in a voice acquisition state;
Acquiring a mode switching instruction of the intelligent earphone under the target working scene;
and controlling the intelligent earphone to switch between at least two voice modes according to the mode switching instruction.
An embodiment of another aspect of the present application provides a control apparatus for an intelligent earphone, including:
an acquisition module, configured to acquire audio data played on a terminal device;
a first determining module, configured to identify the audio data to determine that the terminal device is in a target working scene, in which a microphone of the intelligent earphone is in a voice acquisition state;
the acquisition module being further configured to acquire a mode switching instruction of the intelligent earphone in the target working scene;
and a control module, configured to control the intelligent earphone to switch between at least two voice modes according to the mode switching instruction.
Another embodiment of the present application proposes an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, said processor implementing the method according to the previous aspect when executing said program.
Another aspect of the present application provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method according to the previous aspect.
Another embodiment of the present application proposes a computer program product having a computer program stored thereon, which, when being executed by a processor, implements a method according to the previous aspect.
According to the control method and apparatus for an intelligent earphone, the electronic device, and the storage medium of the present application, audio data played on the terminal device is acquired and identified to determine that the terminal device is in a target working scene, in which the microphone of the intelligent earphone is in a voice acquisition state. A mode switching instruction of the intelligent earphone is acquired in the target working scene, and the intelligent earphone is controlled to switch between at least two voice modes according to the instruction. By controlling the intelligent earphone to switch between the at least two modes in the target working scene, the user does not need to frequently take the earphone off and put it back on, which improves the convenience of using the intelligent earphone.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a flow chart of a control method of an intelligent earphone according to an embodiment of the present application;
Fig. 2 is a flow chart of another control method of the smart earphone according to the embodiment of the present application;
fig. 3 is a flow chart of another control method of the smart earphone according to the embodiment of the present application;
fig. 4 is a flowchart of another control method of the smart earphone according to the embodiment of the present application;
fig. 5 is a schematic structural diagram of a control device of an intelligent earphone according to an embodiment of the present application;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present application and are not to be construed as limiting the present application.
Control methods, devices, electronic equipment and storage media of the intelligent earphone according to the embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a control method of an intelligent earphone according to an embodiment of the present application.
The execution subject of the control method is a control apparatus of the intelligent earphone. The apparatus may be arranged in an electronic device, and the electronic device may be the intelligent earphone itself, where the intelligent earphone is a noise reduction earphone with a noise reduction function.
As shown in fig. 1, the method may include the steps of:
step 101, obtaining audio data played on a terminal device.
The terminal device may be a smart phone, a palm computer, an intelligent wearable device, a computer, or the like, which is not limited in this embodiment.
In an example of this embodiment, the intelligent earphone is a Bluetooth earphone with a noise reduction function, and the intelligent earphone and the terminal device are connected via Bluetooth. After the connection is established, the terminal device sends the audio data being played to the intelligent earphone at a set frequency, for example in real time or once every 200 ms. The audio data may be voice data of a person, audio-visual entertainment data, and the like.
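The chunked forwarding described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the function name, sample rate, and 16-bit PCM format are all assumptions.

```python
def chunk_audio(pcm: bytes, sample_rate: int = 16000,
                bytes_per_sample: int = 2, interval_ms: int = 200):
    """Yield successive chunks of raw PCM covering `interval_ms` of audio each,
    mimicking a terminal device that forwards playing audio every 200 ms."""
    chunk_bytes = sample_rate * bytes_per_sample * interval_ms // 1000
    for start in range(0, len(pcm), chunk_bytes):
        yield pcm[start:start + chunk_bytes]
```

Each yielded chunk would then be pushed over the Bluetooth link; the final chunk may be shorter than one interval.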
And 102, identifying the audio data to determine that the terminal equipment is in the target working scene.
In the target working scene, the microphone of the intelligent earphone is in a voice acquisition state; that is, when the terminal device is in the target working scene, the microphone of the intelligent earphone is turned on and can be used to acquire voice data from the environment.
In this embodiment, the audio data is identified to determine whether it is audio-visual entertainment data or voice data of a person. Audio-visual entertainment data, such as music, films, and short videos, generally contains background music. If the audio data is identified as voice data of a person, for example voice data in a conference call or in instant chat software, the voice data of the user wearing the intelligent earphone needs to be transmitted to the users of the other terminal devices participating in the conference call or chat. The microphone on the intelligent earphone therefore needs to be turned on to collect that user's voice data, and the terminal device is determined to be in the target working scene. If the audio data is identified as audio-visual entertainment data, such as music or video played in an application, the intelligent earphone only transfers the audio played by the terminal device to the user's ear; the microphone does not need to be turned on, because neither the user's voice nor the environmental sound needs to be collected.
In one implementation of this embodiment, audio data of a set duration is acquired and identified to determine the voice frequency bands it contains. In response to the voice frequency bands including a first target frequency band but neither a second nor a third target frequency band, the terminal device is determined to be in the target working scene. The lower frequency limit of the first target frequency band is greater than the upper frequency limit of the second target frequency band, and the upper frequency limit of the first target frequency band is less than the lower frequency limit of the third target frequency band. As an example, the voice frequency bands include a low-frequency band, an intermediate-frequency band, and a high-frequency band; the first target frequency band is the intermediate-frequency band, the second is the low-frequency band, and the third is the high-frequency band. A user's voice data contains little low-frequency and high-frequency content, that is, the frequency of the user's voice mainly falls in the intermediate-frequency band, whereas audio-visual entertainment data contains more low-frequency and high-frequency content. Therefore, by identifying the frequency bands contained in the audio data, it can be determined whether the audio data is voice data of a person or audio-visual entertainment data, and when it is identified as voice data of a person, the terminal device is determined to be in the target working scene.
In another implementation of this embodiment, if identification shows that the proportion of the first target frequency band in the entire audio spectrum is greater than a set threshold, that is, the proportion of the second and/or third target frequency bands is below a set threshold, the audio data is considered to be voice data of the user, and the terminal device is determined to be in the target working scene.
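The two band-based checks above (presence of the intermediate band, and its proportion of the total energy) can be sketched with a naive DFT. The band boundaries (300 Hz and 3400 Hz, a common telephony voice range) and the 0.8 threshold are assumptions for illustration, not values from the patent.

```python
import math

LOW_MAX_HZ, MID_MAX_HZ = 300.0, 3400.0   # assumed band boundaries

def band_energies(samples, sample_rate):
    """Naive DFT over a short window; return (low, mid, high) spectral energy."""
    n = len(samples)
    low = mid = high = 0.0
    for k in range(1, n // 2):           # skip the DC bin
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(-samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        energy = re * re + im * im
        freq = k * sample_rate / n
        if freq <= LOW_MAX_HZ:
            low += energy
        elif freq <= MID_MAX_HZ:
            mid += energy
        else:
            high += energy
    return low, mid, high

def looks_like_speech(samples, sample_rate, threshold=0.8):
    """True if the intermediate band dominates the spectrum (assumed threshold)."""
    low, mid, high = band_energies(samples, sample_rate)
    total = low + mid + high
    return total > 0 and mid / total > threshold
```

A 1 kHz tone would be classified as voice-like under these assumptions, while a 125 Hz (bass-heavy) tone would not.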
It should be noted that, to improve recognition accuracy, multiple segments of audio data of the set duration may be collected and identified, and the terminal device is determined to be in the target working scene only when the voice frequency bands of all of these segments meet the above requirements.
In yet another implementation of this embodiment, source information carried by the audio data is identified to determine the source of the audio data, and in response to the source being a target source, the terminal device is determined to be in the target working scene. Specifically, the source information indicates the application software to which the audio data belongs; if that application is identified as a target application, the terminal device is determined to be in the target working scene. For example, if the source information indicates that the audio data belongs to an audio-visual entertainment application rather than a call application, such as a teleconference application, the terminal device is determined not to be in the target working scene; otherwise, it is determined to be in the target working scene.
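The source-information check above can be sketched as a simple membership test. The set of "target" (call-type) application names is hypothetical.

```python
# Assumed call-type applications; the real list would come from configuration.
TARGET_APPS = {"teleconference", "voice_chat"}

def in_target_scene(source_app: str) -> bool:
    """True when the playing audio comes from a call-type application."""
    return source_app in TARGET_APPS
```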
Step 103, acquiring a mode switching instruction of the intelligent earphone under a target working scene.
In this embodiment of the present application, in the target working scene the intelligent earphone has at least two voice modes, which can be switched by a mode switching instruction. The target working scene is a scene in which a plurality of users communicate, for example a teleconference scene or a voice chat scene.
In one implementation of this embodiment, when it is detected that the terminal device is in the target working scene, the function corresponding to a setting key of the intelligent earphone may be changed. For example, in a non-target working scene, setting key A provides a play/pause function; in the target working scene, it switches among the at least two voice modes. Thus, in the target working scene, a mode switching instruction of the intelligent earphone can be obtained in response to the user pressing the setting key, and the instruction switches the intelligent earphone to the voice mode it indicates. For example, the intelligent earphone includes two voice modes, a noise reduction mode and a communication mode. In the noise reduction mode, external sound is subjected to noise reduction processing so that, after processing, it is quieter than a set volume. In the communication mode, external sound is not noise-reduced; its original level can be maintained or amplified so that it is louder than the set volume. When the user presses the setting key, the current voice mode of the intelligent earphone is determined: if the earphone is in the noise reduction mode, an instruction to switch to the communication mode is generated in response to the key operation; similarly, if the earphone is in the communication mode, an instruction to switch to the noise reduction mode is generated.
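The key-driven toggle described above can be sketched as a small state machine. The mode names follow the text; the class and its method names are illustrative.

```python
NOISE_REDUCTION, COMMUNICATION = "noise_reduction", "communication"

class EarphoneModes:
    """Sketch: the same setting key plays/pauses outside the target scene and
    toggles between the two voice modes inside it."""

    def __init__(self):
        self.in_target_scene = False
        self.mode = NOISE_REDUCTION

    def on_key_press(self):
        if not self.in_target_scene:
            return "play_pause"            # key keeps its normal function
        # In the target scene the key flips between the two voice modes.
        self.mode = COMMUNICATION if self.mode == NOISE_REDUCTION else NOISE_REDUCTION
        return self.mode
```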
In another implementation of this embodiment, when it is detected that the terminal device is in the target working scene, the mode of the intelligent earphone may be changed through voice recognition. Specifically, an acquired voice signal is recognized to identify a keyword it contains, and the corresponding mode switching instruction is determined based on the keyword; the instruction switches the intelligent earphone to the voice mode it indicates. For the voice modes of the intelligent earphone, refer to the description in the previous implementation, which is not repeated here.
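The keyword-to-instruction step described above can be sketched as a lookup over recognized text. The trigger phrases and instruction names here are invented for illustration; the patent does not specify them.

```python
# Hypothetical keyword-to-instruction table.
KEYWORD_TO_MODE = {
    "noise reduction on": "switch_to_noise_reduction",
    "talk to me": "switch_to_communication",
}

def instruction_from_speech(recognized_text: str):
    """Return a mode-switching instruction if the recognized text contains a
    known keyword, else None."""
    lowered = recognized_text.lower()
    for keyword, instruction in KEYWORD_TO_MODE.items():
        if keyword in lowered:
            return instruction
    return None
```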
Step 104, according to the mode switching instruction, the intelligent earphone is controlled to switch between at least two voice modes.
Further, the intelligent earphone is controlled to switch between the at least two voice modes according to the mode switching instruction, so that the user can switch modes while still wearing the intelligent earphone. This improves the efficiency of switching voice modes and the convenience of using the intelligent earphone.
In this embodiment, two voice modes, a noise reduction mode and a communication mode, are taken as an example. In one scene, the intelligent earphone is in the noise reduction mode while the user is in a voice call through a client program; the user then switches to the communication mode according to a mode switching instruction. In the communication mode, the sound of the voice call is reduced below the set volume, so that the audio played by the terminal device does not interfere with the user's perception of external sound while wearing the intelligent earphone. In another scene, when the intelligent earphone is in the communication mode and a mode switching instruction is detected, the earphone switches from the communication mode to the noise reduction mode; in the noise reduction mode, external sound is noise-reduced so that the volume of external sound heard by the user is below the set volume. By controlling the intelligent earphone to switch between the two modes in the target working scene, the user can communicate and attend meetings smoothly without frequently taking the earphone off and putting it back on, which improves the convenience of using the intelligent earphone.
According to the control method of the intelligent earphone, audio data played on the terminal device is acquired and identified to determine that the terminal device is in a target working scene, in which the microphone of the intelligent earphone is in a voice acquisition state. A mode switching instruction of the intelligent earphone is acquired in the target working scene, and the intelligent earphone is controlled to switch between at least two voice modes according to the instruction. By controlling the intelligent earphone to switch between the at least two modes in the target working scene, the user does not need to frequently take the earphone off and put it back on, which improves the convenience of using the intelligent earphone.
Based on the above embodiment, fig. 2 is a flowchart of another control method of an intelligent earphone provided in an embodiment of the present application. In this embodiment, the voice modes include a noise reduction mode and a communication mode, and the description focuses on how, after switching to the noise reduction mode, the intelligent earphone performs noise reduction processing on the acquired environmental voice data to avoid its interference with the target working scene. As shown in fig. 2, the method includes the following steps:
Step 201, obtaining audio data played on a terminal device.
Step 202, identifying the audio data to determine that the terminal device is in the target working scene.
The microphone of the intelligent earphone is in a voice acquisition state in the target working scene.
Step 203, acquiring a mode switching instruction of the intelligent earphone in a target working scene.
Step 204, according to the mode switching instruction, the intelligent earphone is controlled to switch between at least two voice modes.
Steps 201 to 204 follow the same principle as explained in the foregoing embodiments and are not repeated here.
And step 205, in response to controlling the intelligent earphone to switch to the noise reduction mode, acquiring audio data played on the terminal equipment.
It should be noted that executing steps 205 to 208 after step 204 is only an example; the execution timing of steps 205 to 208 is not limited.
In this embodiment, in response to controlling the intelligent earphone to switch to the noise reduction mode according to the mode switching instruction, the audio data played on the terminal device is acquired. This audio data is voice data collected by other terminal devices whose users are, like the user of this terminal device, in the target working scene; it contains the voice of a person and, for ease of distinction, is called the voice data of a first user.
For example, the target working scene is a teleconference scene in which three users participate: user 1, user 2, and user 3. User 1 corresponds to the terminal device, and users 2 and 3 correspond to other terminal devices. The acquired audio data played on the terminal device then includes the voice data of two first users, namely users 2 and 3.
At step 206, the audio data is identified to identify the sound data of the first user included in the audio data.
In one implementation of this embodiment, the audio data is identified to determine the sound frequency bands corresponding to different voices. Because the voice frequencies of different people differ, the audio segments or spectral features corresponding to different people's voices can be distinguished; that is, the voice data of the first user included in the audio data can be identified. The number of first users obtained by identification may be one or more, depending on the number of people participating in the target working scene.
In another implementation of this embodiment, feature recognition, such as timbre recognition, may be performed on the audio data based on a trained speech recognition model, so as to identify the voice data of the first user included in the audio data.
For example, in a teleconference scene with three participating users, identifying the acquired audio data yields the voice data of two first users, which may be called the voice data of first user A and the voice data of first user B.
Step 207, acquiring first environmental voice data collected by the microphone on the intelligent earphone.
In this embodiment, the microphone on the intelligent earphone is turned on and can collect environmental voice data in real time. To distinguish it from environmental voice data collected in other modes, it is called the first environmental voice data, and it is sent to the intelligent earphone.
Step 208, performing noise reduction processing on the first environmental voice data according to the voice data of the first user.
In this embodiment, the first environmental voice data may be identified to obtain the voice data of persons it contains. The identification method may refer to the explanation in the foregoing steps; the principle is the same and is not repeated here.
As one implementation, according to the sound frequency bands of the first user's voice data, the volume of components of the first environmental voice data that do not belong to those frequency bands is reduced, or such components are directly removed, so as to perform noise reduction on the first environmental voice data and reduce the influence of the environmental voice data on the audio played on the terminal device.
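One way to read this implementation is as a spectral gate that keeps only components inside the retained voice band and attenuates everything else. The sketch below uses a naive stdlib DFT purely for illustration; the band limits and the full attenuation of out-of-band bins are assumptions, not the patent's algorithm.

```python
import math

def suppress_outside_band(samples, sample_rate, band_lo, band_hi, atten=0.0):
    """Return `samples` with spectral components outside [band_lo, band_hi] Hz
    scaled by `atten` (0.0 = removed). Naive O(n^2) DFT for illustration."""
    n = len(samples)
    spectrum = []
    for k in range(n):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(-samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        freq = min(k, n - k) * sample_rate / n   # mirror negative frequencies
        gain = 1.0 if band_lo <= freq <= band_hi else atten
        spectrum.append((re * gain, im * gain))
    # Inverse DFT (real part) reconstructs the band-limited signal.
    return [sum(re * math.cos(2 * math.pi * k * t / n)
                - im * math.sin(2 * math.pi * k * t / n)
                for k, (re, im) in enumerate(spectrum)) / n
            for t in range(n)]
```

For example, gating a mixture of a 1000 Hz and a 250 Hz tone to the 800-1200 Hz band leaves only the 1000 Hz component.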
As another implementation, the voice data of a second user using the intelligent earphone is acquired, and according to the voice data of the first user and of the second user, noise reduction is performed on the components of the first environmental voice data other than the voice data of the first user and the second user; the noise reduction level can be set as required.
In this control method of the intelligent earphone, after the earphone is controlled to switch to the noise reduction mode, the audio data played on the terminal device is acquired and identified to obtain the voice data of the first user, the environmental voice data collected by the microphone on the intelligent earphone is acquired, and noise reduction is performed on the environmental voice data according to the first user's voice data. This reduces the influence of the environmental voice data on the audio played on the terminal device and improves the convenience of using the intelligent earphone.
Based on the above embodiment, fig. 3 is a flowchart of another control method of an intelligent earphone provided in an embodiment of the present application. In this embodiment, the voice modes include a noise reduction mode and a communication mode as an example, and the description focuses on how, after switching to the communication mode, the intelligent earphone acquires from the environmental voice data the voice of a third user other than the second user wearing the earphone, so that communication is possible without taking the earphone off. As shown in fig. 3, the method includes the following steps:
step 301, obtaining audio data played on a terminal device.
In step 302, the audio data is identified to determine that the terminal device is in the target working scene.
The microphone of the intelligent earphone is in a voice acquisition state in the target working scene.
Step 303, acquiring a mode switching instruction of the intelligent earphone in a target working scene.
Step 304, according to the mode switching instruction, the intelligent earphone is controlled to switch between at least two voice modes.
The principles of steps 301 to 304 may be the same as those of the previous embodiments, and are not repeated here.
In step 305, in response to controlling the smart headset to switch to the communication mode according to the mode switching instruction, a first sound orientation of a second user using the smart headset is obtained.
It should be noted that, the execution of steps 305 to 307 after step 304 is merely an example, and steps 305 to 307 may also be executed before step 304, that is, the execution timing of steps 305 to 307 is not limited in this embodiment.
In this embodiment, when the intelligent earphone is in the communication mode, the second user wearing the intelligent earphone can communicate with surrounding users. As one implementation, upon switching to the communication mode, a voice prompt can be played through the intelligent earphone asking the second user to speak a set sentence, so as to determine the first sound orientation corresponding to the second user while speaking that sentence. As another implementation, upon switching to the communication mode, a stored first sound orientation of the second user may be read: since the wearing position of the intelligent earphone is generally fixed, the orientation from which the earphone collects the second user's voice is also fixed, and the first sound orientation can therefore be acquired from the storage unit of the intelligent earphone.
At step 306, the microphone is controlled to collect sound data of a third user in the environment at a second sound location other than the first sound location of the second user.
The third user may be any user making a sound in the environment, whether a user communicating with the second user or another user who is speaking without communicating with the second user. The second sound orientation is the direction of the sound made by the third user in the environment.
In one implementation of this embodiment, the microphone is a microphone array. The first microphone in the array, which collects sound from the first sound orientation of the second user, is controlled to stop collecting the second user's voice, while the second microphones in the array, other than the first microphone, collect the voice data of the third user at a second sound orientation other than the first sound orientation. The collected data therefore does not include the second user's voice, which improves the accuracy of sound collection in the communication mode.
Alternatively, if there are a plurality of sound orientations other than the first sound orientation in the environment, the orientation with the greatest sound intensity may be regarded as the second sound orientation.
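The orientation choice described above can be sketched as follows: among candidate sound directions other than the wearer's own first orientation, pick the one with the greatest intensity. Representing orientations as azimuth angles and using an angular exclusion tolerance are both assumptions for illustration.

```python
def pick_second_orientation(intensity_by_orientation, first_orientation,
                            tolerance_deg=20.0):
    """`intensity_by_orientation` maps azimuth (degrees) to sound intensity.
    Directions within `tolerance_deg` of the wearer's own orientation are
    excluded; the loudest remaining direction is the second orientation."""
    candidates = {
        az: level for az, level in intensity_by_orientation.items()
        if abs(az - first_orientation) > tolerance_deg
    }
    if not candidates:
        return None   # no sound source besides the wearer
    return max(candidates, key=candidates.get)
```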
Step 307, playing the sound data of the third user by using the intelligent earphone.
In this embodiment of the present application, the sound data of the third user collected by the microphone is sent to the smart headset; after the smart headset amplifies the third user's sound data, it plays the amplified data. This increases the volume of the third user's sound data and improves the reliability with which the second user hears the communicating user, so that the second user can clearly hear the third user's voice without taking off the headset, which improves the communication effect.
Further, in response to controlling the smart headset to switch to the communication mode, the noise reduction mode is turned off and data transmission between the microphone and the terminal device is prohibited. The prohibition of data transmission between the microphone and the terminal device may be performed by the smart headset or by the terminal device, as described separately below.
In one implementation manner of the embodiment of the present application, in response to controlling the intelligent earphone to switch to the communication mode, the noise reduction mode is turned off so as to avoid performing noise reduction processing on the environmental voice data collected by the microphone.
Meanwhile, in order to avoid interference with the target working scene in which the intelligent earphone is located, data transmission between the microphone and the terminal device is prohibited while the second user wearing the earphone communicates with the surrounding third user. That is, the microphone is prohibited from sending the collected sound data of the third user to the terminal device, which prevents the terminal device from playing the third user's sound data in the target working scene, for example, a teleconference scene; in other words, other clients participating in the teleconference are prevented from hearing the communication between the second user and the third user.
In another implementation manner of the embodiment of the present application, when the intelligent earphone is switched to the communication mode according to the mode switching instruction, the user may be prompted to trigger a prohibition instruction in an interactive interface of the terminal device, so that the terminal device sends the prohibition instruction to the intelligent earphone through Bluetooth. According to the received prohibition instruction, the intelligent earphone prohibits data transmission between the microphone and the terminal device, specifically the transmission of the third user's sound data. This prevents the voice content exchanged between the second user and the surrounding third user from being transmitted to the terminal device and played in the target working scene, for example, a teleconference scene; that is, other clients participating in the teleconference are prevented from hearing the exchanged voice content.
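The prohibition flow described above can be sketched as a simple gating flag on the headset side. This is an illustrative sketch under assumed message names ("prohibit"/"allow"), not the patented implementation:

```python
# Illustrative sketch: the terminal sends a prohibition instruction (e.g. over
# the Bluetooth link), the headset sets a flag, and microphone frames are then
# dropped instead of being forwarded to the terminal device.

class HeadsetUplink:
    def __init__(self):
        self.uplink_enabled = True  # mic -> terminal transmission allowed

    def on_instruction(self, instruction):
        # Instruction received from the terminal device; names are assumptions.
        if instruction == "prohibit":
            self.uplink_enabled = False
        elif instruction == "allow":
            self.uplink_enabled = True

    def forward_frame(self, frame, send):
        # `send` is the callable that would transmit a frame to the terminal.
        if self.uplink_enabled:
            send(frame)
            return True
        return False  # frame dropped: exchanged speech never reaches the call
```

With the flag cleared, any speech exchanged face to face is simply never forwarded, so remote teleconference participants cannot hear it.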
In this embodiment, in response to controlling the intelligent earphone to switch to the communication mode, the sound orientation of the second user using the intelligent earphone is acquired, the microphone is controlled to collect the sound data of the third user at a sound orientation other than that of the second user, and the intelligent earphone plays the third user's sound data. By collecting the sound data of the third user who communicates with the second user, rather than that of the second user wearing the intelligent earphone, the user can communicate smoothly with surrounding users without taking off the intelligent earphone, which improves the reliability of communication while wearing the intelligent earphone.
Based on the above embodiment, fig. 4 is a flowchart of another control method of the intelligent earphone provided in the embodiment of the present application, which specifically illustrates how, after the intelligent earphone is switched to the communication mode, the intelligent earphone obtains from the environmental voice data the sound data of a third user other than the second user using the intelligent earphone, so as to implement communication without taking off the intelligent earphone. As shown in fig. 4, the method comprises the following steps:
step 401, obtaining audio data played on a terminal device.
In step 402, the audio data is identified to determine that the terminal device is in the target working scenario.
The microphone of the intelligent earphone is in a voice acquisition state in the target working scene.
Step 403, obtaining a mode switching instruction of the intelligent earphone in the target working scene.
Step 404, controlling the intelligent earphone to switch between at least two voice modes according to the mode switching instruction.
The principles of steps 401 to 404 may be the same as those described in the foregoing embodiments, and are not repeated here.
In step 405, in response to controlling the smart headset to switch to the communication mode, second environmental voice data collected by a microphone on the smart headset is obtained.
It should be noted that, steps 405 to 407 are merely examples, and the execution sequence of steps 405 to 407 is not limited in this embodiment.
The content of the second environmental voice data collected by the microphone on the smart headset may be the same as or different from that of the first environmental voice data in the foregoing embodiment; the two terms merely distinguish environmental voice data collected in different modes. For details, refer to the explanation of the first environmental voice data in the foregoing embodiment, which is not repeated here.
Step 406, determining sound data of a third user in the second environmental voice data according to the sound data of the second user using the smart headset.
The third user and the second user are different users, and in the teleconference scene, the third user is a user communicated with the second user.
Step 407, playing the sound data of the third user by using the intelligent earphone.
In this embodiment, the sound data of the second user is identified to obtain the voice frequency band or tone characteristic information of the second user's sound data, and the second user's sound data included in the second environmental voice data is identified according to that voice frequency band or tone characteristic information; the sound data in the second environmental voice data other than the second user's sound data is then the sound data of the third user. Further, the third user's sound data is played through the intelligent earphone to increase its volume; sound data that does not belong to a human voice may be filtered out of the third user's sound data, and the remainder amplified and then played through the intelligent earphone, so that the second user wearing the intelligent earphone can clearly hear the third user's voice.
The sound data of the third user may be sound data of one user or sound data of a plurality of users.
The method for identifying the voice data of the second user may refer to the explanation in the foregoing embodiment, and the principles are the same, which is not repeated here.
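The separation step above can be sketched with a deliberately simplified heuristic. Here the second user's voice is characterized only by an assumed fundamental-frequency range, and non-human sounds are dropped with an assumed voice band; real tone-feature matching would use richer speaker characteristics than this single-number test:

```python
# Minimal sketch: frames whose dominant frequency falls inside the second
# user's range are treated as the second user's voice and removed; frames
# outside an assumed human-voice band are dropped as non-voice; the remainder
# is taken as the third user's sound data.

HUMAN_VOICE_RANGE = (80.0, 3400.0)  # assumed band used to drop non-voice sound

def extract_third_user(frames, second_user_range):
    """frames: list of (dominant_frequency_hz, samples) tuples."""
    lo, hi = second_user_range
    third = []
    for freq, samples in frames:
        if lo <= freq <= hi:
            continue  # dominant frequency matches the second user: filter out
        if not (HUMAN_VOICE_RANGE[0] <= freq <= HUMAN_VOICE_RANGE[1]):
            continue  # not a human voice: filter out before playback
        third.append((freq, samples))
    return third
```

The surviving frames would then be amplified and played through the earphone, as the embodiment describes.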
Further, in response to controlling the smart headset to switch to the communication mode, the noise reduction mode is turned off and data transmission between the microphone and the terminal device is prohibited. The prohibition of data transmission between the microphone and the terminal device may be performed by the smart headset or by the terminal device, as described separately below.
In one implementation manner of the embodiment of the present application, in response to controlling the intelligent earphone to switch to the communication mode, the noise reduction mode is turned off so as to avoid performing noise reduction processing on the environmental voice data collected by the microphone.
Meanwhile, in order to avoid interference with the target working scene in which the intelligent earphone is located, data transmission between the microphone and the terminal device is prohibited while the second user wearing the earphone communicates with the surrounding third user. That is, the microphone is prohibited from sending the collected second environmental voice data to the terminal device, which prevents the terminal device from playing the second environmental voice data containing the third user's sound data in the target working scene, for example, a teleconference scene; in other words, other clients participating in the teleconference are prevented from hearing the communication between the second user and the third user.
In another implementation manner of the embodiment of the present application, when the intelligent earphone is switched to the communication mode according to the mode switching instruction, the user may be prompted to trigger a prohibition instruction in an interactive interface of the terminal device, so that the terminal device sends the prohibition instruction to the intelligent earphone through Bluetooth. According to the received prohibition instruction, the intelligent earphone prohibits data transmission between the microphone and the terminal device, specifically the transmission of the second environmental voice data collected by the microphone. This prevents the voice content exchanged between the second user and the surrounding third user from being transmitted to the terminal device and played in the target working scene, for example, a teleconference scene; that is, other clients participating in the teleconference are prevented from hearing the voice content exchanged between the second user and the third user.
In the above control method of the intelligent earphone, in response to controlling the intelligent earphone to switch to the communication mode, the second environmental voice data collected by the microphone on the intelligent earphone is obtained, the sound data of the third user in the second environmental voice data is determined according to the sound data of the second user using the intelligent earphone, and the third user's sound data is played by the intelligent earphone. By determining and playing the third user's sound data in the environmental voice data, the second user can communicate smoothly with the surrounding third user without taking off the intelligent earphone, which improves the reliability of communication while wearing the intelligent earphone, eliminates the need to frequently take off or put on the intelligent earphone, and improves the convenience of using the intelligent earphone.
In order to implement the above embodiments, the embodiments of the present application further provide a control device for an intelligent earphone.
Fig. 5 is a schematic structural diagram of a control device of an intelligent earphone according to an embodiment of the present application.
As shown in fig. 5, the apparatus may include:
and the obtaining module 51 is configured to obtain audio data played on the terminal device.
A first determining module 52, configured to identify the audio data, so as to determine that the terminal device is in a target working scenario; and in the target working scene, the microphone of the intelligent earphone is in a voice acquisition state.
The obtaining module 51 is further configured to obtain a mode switching instruction of the intelligent earphone in the target working scenario.
And the control module 53 is configured to control the intelligent earphone to switch between the at least two voice modes according to the mode switching instruction.
Further, in an implementation manner of the embodiment of the present application, the at least two voice modes include a noise reduction mode, and the apparatus further includes: an identification module and a processing module.
The obtaining module 51 is further configured to obtain audio data played on the terminal device in response to controlling the intelligent earphone to switch to the noise reduction mode;
The identification module is used for identifying the audio data to obtain voice data of a first user included in the audio data;
the acquiring module 51 is further configured to acquire first environmental voice data acquired by a microphone on the smart headset;
and the processing module is used for carrying out noise reduction processing on the first environment voice data according to the first voice data.
In an implementation manner of the embodiment of the present application, the processing module is specifically configured to:
acquiring sound data of a second user using the intelligent earphone;
and performing noise reduction processing on the sound data of the first user and the sound data of the second user in the first environmental voice data according to the sound data of the first user and the sound data of the second user.
In one implementation manner of the embodiment of the present application, the at least two voice modes include a communication mode, and the apparatus further includes: a second determining module and a playing module.
The obtaining module 51 is further configured to obtain second environmental voice data collected by a microphone on the smart headset in response to controlling the smart headset to switch to the communication mode;
A second determining module, configured to determine sound data of a third user in the second environmental voice data according to sound data of a second user using the smart headset; wherein the third user and the second user are different users;
and the playing module is used for playing the sound data of the third user by adopting the intelligent earphone.
In one implementation manner of the embodiment of the present application, the at least two voice modes include a communication mode, and the apparatus further includes:
the obtaining module 51 is configured to obtain a first sound orientation of a second user using the smart headset in response to controlling the smart headset to switch to the communication mode;
the acquisition module is used for controlling the microphone to acquire sound data of a third user in the environment at a second sound orientation other than the first sound orientation of the second user; wherein the third user and the second user are different users;
and the playing module is also used for playing the sound data of the third user by adopting the intelligent earphone.
In an implementation manner of an embodiment of the present application, the apparatus further includes:
and the first closing module is used for, in response to controlling the intelligent earphone to switch to the communication mode, turning off the noise reduction mode and prohibiting data transmission between the microphone and the terminal device.
In an implementation manner of an embodiment of the present application, the apparatus further includes:
the second closing module is used for responding to control of the intelligent earphone to switch to the alternating current mode and closing the noise reduction mode; and responding to the obtained prohibition instruction sent by the terminal equipment, and prohibiting data transmission between the microphone and the terminal equipment according to the prohibition instruction.
In one implementation manner of the embodiment of the present application, the first determining module 52 is specifically configured to:
identifying the audio data and determining a voice frequency band included in the audio data;
determining that the terminal device is in a target working scene in response to the voice frequency band containing a first target voice frequency band and containing neither a second target voice frequency band nor a third target voice frequency band; wherein the lower frequency limit of the first target voice frequency band is greater than the upper frequency limit of the second target voice frequency band, and the upper frequency limit of the first target voice frequency band is less than the lower frequency limit of the third target voice frequency band.
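The band test just stated can be sketched as follows. The function and the concrete band limits in the usage note are illustrative placeholders, not values from the disclosure:

```python
# Hedged sketch of the band-containment test: the terminal is judged to be in
# the target working scene when the detected voice frequency band contains the
# first target band and contains neither the second nor the third target band.
# Per the stated constraints, the second band lies wholly below the first and
# the third wholly above it.

def in_target_scene(detected_band, first_band, second_band, third_band):
    """Each band is a (lower_hz, upper_hz) tuple; `detected_band` is the voice
    frequency band identified from the audio data played on the terminal."""
    lo, hi = detected_band

    def contains(band):
        return lo <= band[0] and band[1] <= hi

    return (contains(first_band)
            and not contains(second_band)
            and not contains(third_band))
```

For example, with assumed bands, a detected band of (300, 3000) Hz that covers the first target band (300, 3000) but neither a low band (20, 250) nor a high band (4000, 8000) would indicate the target working scene.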
In one implementation manner of the embodiment of the present application, the first determining module 52 is specifically configured to:
identifying source information carried by the audio data, and determining the source of the audio data;
and determining that the terminal device is in a target working scene in response to the source of the audio data being a target source.
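The source check can be sketched as a membership test on the source information carried by the audio data. The source identifiers here are hypothetical examples, not names from the disclosure:

```python
# Illustrative sketch of source-based scene detection: the source information
# carried by the audio data (e.g. an application identifier) is compared
# against a set of target sources such as conferencing applications.

TARGET_SOURCES = {"conference_app", "voip_client"}  # assumed target sources

def is_target_scene_by_source(audio_metadata):
    """audio_metadata: dict carrying a 'source' field identifying the origin
    of the played audio data; returns True for a target working scene."""
    return audio_metadata.get("source") in TARGET_SOURCES
```

Audio originating from, say, a music player would not match, so the headset would not treat playback as a target working scene.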
It should be noted that the foregoing explanation of the method embodiment is also applicable to the apparatus of this embodiment, and will not be repeated here.
In the above control device of the intelligent earphone, audio data played on the terminal device is acquired and identified to determine that the terminal device is in a target working scene, in which the microphone of the intelligent earphone is in a voice acquisition state; a mode switching instruction of the intelligent earphone is acquired in the target working scene, and the intelligent earphone is controlled to switch between at least two voice modes according to the mode switching instruction. By controlling the intelligent earphone to switch between at least two modes in the target working scene, the user is assisted in using the intelligent earphone without frequently taking it off or putting it on, which improves the convenience of using the intelligent earphone.
In order to implement the above embodiment, the application further proposes an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to the above method embodiment when executing the program.
In order to implement the above-mentioned embodiments, the present application also proposes a non-transitory computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, implements a method as described in the foregoing method embodiments.
In order to implement the above-described embodiments, the present application also proposes a computer program product having a computer program stored thereon, which, when being executed by a processor, implements a method as described in the method embodiments described above.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the present application. For example, electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 6, an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 806 provides power to the various components of the electronic device 800. Power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication, wired or wireless, between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 4G, or 5G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including instructions executable by processor 820 of electronic device 800 to perform the above-described method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" is at least two, such as two, three, etc., unless explicitly defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process. Additional implementations are included within the scope of the preferred embodiments of the present application, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, Programmable Gate Arrays (PGAs), Field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like. Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (20)

  1. The control method of the intelligent earphone is characterized by comprising the following steps:
    acquiring audio data played on the terminal equipment;
    identifying the audio data to determine that the terminal equipment is in a target working scene; wherein the microphone of the intelligent earphone is in a voice acquisition state in the target working scene;
    Acquiring a mode switching instruction of the intelligent earphone under the target working scene;
    and controlling the intelligent earphone to switch between at least two voice modes according to the mode switching instruction.
  2. The method of claim 1, wherein the at least two voice modes comprise a noise reduction mode, the method further comprising:
    in response to controlling the intelligent earphone to switch to the noise reduction mode, acquiring the audio data played on the terminal equipment;
    identifying the audio data to obtain voice data of a first user included in the audio data;
    acquiring first environmental voice data collected by a microphone on the intelligent earphone;
    and performing noise reduction processing on the first environmental voice data according to the voice data of the first user.
  3. The method of claim 2, wherein the performing noise reduction processing on the first environmental voice data according to the voice data of the first user comprises:
    acquiring sound data of a second user using the intelligent earphone;
    and performing noise reduction processing on the voice data of the first user and the sound data of the second user in the first environmental voice data according to the voice data of the first user and the sound data of the second user.
  4. The method of claim 1, wherein the at least two voice modes comprise a communication mode, the method further comprising:
    in response to controlling the intelligent earphone to switch to the communication mode, acquiring second environmental voice data collected by a microphone on the intelligent earphone;
    determining sound data of a third user in the second environment voice data according to sound data of a second user using the intelligent earphone; wherein the third user and the second user are different users;
    and playing the sound data of the third user by adopting the intelligent earphone.
  5. The method of claim 1, wherein the at least two voice modes comprise a communication mode, the method further comprising:
    in response to controlling the intelligent earphone to switch to the communication mode, acquiring a first sound direction of a second user using the intelligent earphone;
    controlling the microphone to collect sound data of a third user in the environment at a second sound direction other than the first sound direction; wherein the third user and the second user are different users; and playing the sound data of the third user by adopting the intelligent earphone.
  6. The method of claim 4 or 5, wherein the method further comprises:
    in response to controlling the intelligent earphone to switch to the communication mode, turning off the noise reduction mode and prohibiting data transmission between the microphone and the terminal equipment.
  7. The method of claim 4 or 5, wherein the method further comprises:
    in response to controlling the intelligent earphone to switch to the communication mode, turning off the noise reduction mode;
    and in response to obtaining a prohibition instruction sent by the terminal equipment, prohibiting data transmission between the microphone and the terminal equipment according to the prohibition instruction.
  8. The method of claim 1, wherein the identifying the audio data to determine that the terminal equipment is in a target working scene comprises:
    identifying the audio data and determining a voice frequency band included in the audio data;
    determining that the terminal equipment is in the target working scene in response to the voice frequency band containing a first target voice frequency band and containing neither a second target voice frequency band nor a third target voice frequency band; wherein a lower frequency limit of the first target voice frequency band is greater than an upper frequency limit of the second target voice frequency band, and an upper frequency limit of the first target voice frequency band is less than a lower frequency limit of the third target voice frequency band.
  9. The method of claim 1, wherein the identifying the audio data to determine that the terminal equipment is in a target working scene comprises:
    identifying source information carried by the audio data, and determining the source of the audio data;
    and determining that the terminal equipment is in the target working scene in response to the source of the audio data being a target source.
  10. A control device for an intelligent earphone, comprising:
    an acquisition module, used for acquiring the audio data played on the terminal equipment;
    a first determining module, used for identifying the audio data to determine that the terminal equipment is in a target working scene; wherein, in the target working scene, a microphone of the intelligent earphone is in a voice acquisition state;
    the acquisition module is further used for acquiring a mode switching instruction of the intelligent earphone under the target working scene;
    and a control module, used for controlling the intelligent earphone to switch between at least two voice modes according to the mode switching instruction.
  11. The apparatus of claim 10, wherein the at least two voice modes comprise a noise reduction mode, the apparatus further comprising:
    the acquisition module is further used for acquiring the audio data played on the terminal equipment in response to controlling the intelligent earphone to switch to the noise reduction mode;
    an identification module, used for identifying the audio data to obtain voice data of a first user included in the audio data;
    the acquisition module is further used for acquiring first environmental voice data acquired by a microphone on the intelligent earphone;
    and a processing module, used for performing noise reduction processing on the first environmental voice data according to the voice data of the first user.
  12. The apparatus of claim 11, wherein the processing module is specifically configured to:
    acquiring sound data of a second user using the intelligent earphone;
    and performing noise reduction processing on the voice data of the first user and the sound data of the second user in the first environmental voice data according to the voice data of the first user and the sound data of the second user.
  13. The apparatus of claim 10, wherein the at least two voice modes comprise a communication mode, the apparatus further comprising:
    the acquisition module is further used for acquiring second environmental voice data collected by a microphone on the intelligent earphone in response to controlling the intelligent earphone to switch to the communication mode;
    a second determining module, used for determining sound data of a third user in the second environmental voice data according to sound data of a second user using the intelligent earphone; wherein the third user and the second user are different users;
    and a playing module, used for playing the sound data of the third user by adopting the intelligent earphone.
  14. The apparatus of claim 10, wherein the at least two voice modes comprise a communication mode, the apparatus further comprising:
    the acquisition module is further used for acquiring a first sound direction of a second user using the intelligent earphone in response to controlling the intelligent earphone to switch to the communication mode;
    the acquisition module is further used for controlling the microphone to collect sound data of a third user in the environment at a second sound direction other than the first sound direction of the second user; wherein the third user and the second user are different users;
    and a playing module, used for playing the sound data of the third user by adopting the intelligent earphone.
  15. The apparatus of claim 13 or 14, wherein the apparatus further comprises:
    a first closing module, used for turning off the noise reduction mode and prohibiting data transmission between the microphone and the terminal equipment in response to controlling the intelligent earphone to switch to the communication mode.
  16. The apparatus of claim 13 or 14, wherein the apparatus further comprises:
    a second closing module, used for turning off the noise reduction mode in response to controlling the intelligent earphone to switch to the communication mode; and, in response to obtaining a prohibition instruction sent by the terminal equipment, prohibiting data transmission between the microphone and the terminal equipment according to the prohibition instruction.
  17. The apparatus of claim 10, wherein the first determining module is specifically configured to:
    identifying the audio data and determining a voice frequency band included in the audio data;
    determining that the terminal equipment is in the target working scene in response to the voice frequency band containing a first target voice frequency band and containing neither a second target voice frequency band nor a third target voice frequency band; wherein a lower frequency limit of the first target voice frequency band is greater than an upper frequency limit of the second target voice frequency band, and an upper frequency limit of the first target voice frequency band is less than a lower frequency limit of the third target voice frequency band.
  18. The apparatus of claim 10, wherein the first determining module is specifically configured to:
    identifying source information carried by the audio data, and determining the source of the audio data;
    and determining that the terminal equipment is in the target working scene in response to the source of the audio data being a target source.
  19. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1-9 when executing the program.
  20. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1-9.
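Claims 8 and 17 amount to a band-occupancy test: the target working scene is recognized when the audio contains the first (middle) target voice frequency band but neither the lower second band nor the higher third band. A hypothetical sketch of that test, assuming band energies have already been measured and normalized (the band names, the dictionary layout, and the threshold are illustrative, not from the patent):

```python
def in_target_scene(band_energy: dict[str, float], threshold: float = 0.1) -> bool:
    """Return True when the audio matches the claimed band pattern.

    band_energy maps band names to normalized energies. Per the claims,
    the second band lies entirely below the first band and the third band
    lies entirely above it, so the test reduces to: energy present in the
    first band, and absent from both the second and the third.
    """
    return (band_energy.get("first", 0.0) > threshold
            and band_energy.get("second", 0.0) <= threshold
            and band_energy.get("third", 0.0) <= threshold)
```

For example, `in_target_scene({"first": 0.8, "second": 0.02, "third": 0.01})` models speech-band-only audio (scene recognized), while significant energy in the second or third band (e.g. music with bass and treble content) fails the test.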
CN202280004138.1A 2022-06-20 2022-06-20 Control method and device of intelligent earphone, electronic equipment and storage medium Pending CN117751585A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/099966 WO2023245390A1 (en) 2022-06-20 2022-06-20 Smart earphone control method and apparatus, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN117751585A true CN117751585A (en) 2024-03-22

Family

ID=89378989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280004138.1A Pending CN117751585A (en) 2022-06-20 2022-06-20 Control method and device of intelligent earphone, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN117751585A (en)
WO (1) WO2023245390A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170318374A1 (en) * 2016-05-02 2017-11-02 Microsoft Technology Licensing, Llc Headset, an apparatus and a method with automatic selective voice pass-through
CN108156550B (en) * 2017-12-27 2020-03-31 上海传英信息技术有限公司 Playing method and device of headset
CN111464905A (en) * 2020-04-09 2020-07-28 电子科技大学 Hearing enhancement method and system based on intelligent wearable device and wearable device
CN113873378B (en) * 2020-06-30 2023-03-10 华为技术有限公司 Earphone noise processing method and device and earphone
CN113873379B (en) * 2020-06-30 2023-05-02 华为技术有限公司 Mode control method and device and terminal equipment
CN113099338A (en) * 2021-03-08 2021-07-09 头领科技(昆山)有限公司 Intelligent control's audio chip and wireless earphone of making an uproar that falls

Also Published As

Publication number Publication date
WO2023245390A1 (en) 2023-12-28

Similar Documents

Publication Publication Date Title
EP3163748B1 (en) Method, device and terminal for adjusting volume
KR101571993B1 (en) Method for voice calling method for voice playing, devices, program and storage medium thereof
CN106454644B (en) Audio playing method and device
CN111370018B (en) Audio data processing method, electronic device and medium
CN109087650B (en) Voice wake-up method and device
CN106888327B (en) Voice playing method and device
CN106375846B (en) The processing method and processing device of live audio
CN110392334B (en) Microphone array audio signal self-adaptive processing method, device and medium
CN111009239A (en) Echo cancellation method, echo cancellation device and electronic equipment
CN114466283A (en) Audio acquisition method and device, electronic equipment and peripheral component method
CN112243142A (en) Method, device and storage medium for processing audio data
CN105244037B (en) Audio signal processing method and device
CN111988704B (en) Sound signal processing method, device and storage medium
CN106603882A (en) Incoming call sound volume adjusting method, incoming call sound volume adjusting device and terminal
US11388281B2 (en) Adaptive method and apparatus for intelligent terminal, and terminal
CN109788367A (en) A kind of information cuing method, device, electronic equipment and storage medium
CN117751585A (en) Control method and device of intelligent earphone, electronic equipment and storage medium
CN109922203A (en) Terminal puts out screen method and apparatus
CN109408025B (en) Audio playing method, device and storage medium
CN108491180B (en) Audio playing method and device
CN112637416A (en) Volume adjusting method and device and storage medium
CN112118502B (en) Earphone control method, device, equipment and storage medium
CN117636893A (en) Wind noise detection method and device, wearable equipment and readable storage medium
CN107454306B (en) Method and device for collecting image
CN106131346B (en) Call processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination