CN111615036B - Data processing method and device and electronic equipment - Google Patents


Info

Publication number
CN111615036B
Authority
CN
China
Prior art keywords
wireless audio
audio device
user
voiceprint
voiceprint information
Prior art date
Legal status
Active
Application number
CN202010307941.5A
Other languages
Chinese (zh)
Other versions
CN111615036A (en)
Inventor
马兰 (Ma Lan)
Current Assignee
Goertek Techology Co Ltd
Original Assignee
Goertek Techology Co Ltd
Application filed by Goertek Techology Co Ltd
Priority to CN202010307941.5A
Publication of CN111615036A
Application granted
Publication of CN111615036B


Classifications

    • H04R 5/00 Stereophonic arrangements
    • H04R 5/033 Headphones for stereophonic communication
    • H04R 5/04 Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • G10L 17/00 Speaker identification or verification

Abstract

In the disclosed method, after first voiceprint information and second voiceprint information corresponding to a first wireless audio device and a second wireless audio device respectively are acquired, the two pieces of voiceprint information are matched, and the matching result is used to judge whether the first user corresponding to the first voiceprint information and the second user corresponding to the second voiceprint information are the same user. According to the matching result, the working modes of the first wireless audio device and the second wireless audio device can then be adjusted conveniently and quickly to suit the users' needs, reducing user operations and improving the user experience.

Description

Data processing method and device and electronic equipment
Technical Field
The present disclosure relates to the field of wireless communication technologies, and in particular, to a data processing method and apparatus, and an electronic device.
Background
In recent years, with the development of wireless communication technology, wireless audio apparatuses such as True Wireless Stereo (TWS) earphones and wireless speakers have come into wide use, bringing great convenience to daily life.
A current wireless audio apparatus such as a TWS headset can be switched between a monaural mode and a binaural mode. In the binaural mode, one sub-headset usually serves as the master headset and the other as the slave headset; the slave is bound to the master and plays the same audio content as the master. The user can also unbind the master and slave headsets so that the two sub-headsets work independently and play different audio contents at the same time; for example, after unbinding, the two sub-headsets can be used by two different users, who can then listen to different music simultaneously.
However, when each wireless audio device in the wireless audio apparatus is required to work independently, the user generally has to perform the unbinding operation manually, and when the binaural (i.e. master-slave) mode is needed again, the user has to manually rebind the devices. This process is inconvenient, time-consuming, and labor-intensive. It is therefore desirable to provide a method that lets each wireless audio device in a wireless audio apparatus adaptively adjust its working mode according to the users involved, saving the users' time.
Disclosure of Invention
It is an object of embodiments of the present disclosure to provide a data processing method for a wireless audio device.
According to a first aspect of the present disclosure, there is provided a data processing method applied to a wireless audio apparatus composed of a first wireless audio device and a second wireless audio device, the method including:
acquiring first voiceprint information corresponding to the first wireless audio device, wherein the first voiceprint information is obtained by carrying out voiceprint recognition on first preset voice information sent by a first user, and the first user is a user using the first wireless audio device;
acquiring second voiceprint information corresponding to the second wireless audio device, wherein the second voiceprint information is obtained by carrying out voiceprint recognition on second preset voice information sent by a second user, and the second user is a user using the second wireless audio device;
obtaining a matching result of the first voiceprint information and the second voiceprint information;
and adjusting the working modes of the first wireless audio device and the second wireless audio device according to the matching result.
Optionally, the adjusting the working modes of the first wireless audio device and the second wireless audio device according to the matching result includes:
adjusting the first wireless audio device to a master working mode and the second wireless audio device to a slave working mode under the condition that the first user and the second user are judged to be the same user according to the matching result;
and under the condition that the first user and the second user are not the same user according to the matching result, adjusting the first wireless audio device and the second wireless audio device to be in independent working modes.
Optionally, the method further comprises:
adjusting the pairing state of the first wireless audio device according to the adjusted working mode of the first wireless audio device; and
adjusting the pairing state of the second wireless audio device according to the adjusted working mode of the second wireless audio device;
wherein the pairing status is used to characterize whether the corresponding wireless audio device is paired with an audio output device.
Optionally, the acquiring first voiceprint information corresponding to the first wireless audio device includes:
acquiring the first preset voice information sent by the first user;
and carrying out voiceprint recognition on the first preset voice information to acquire the first voiceprint information.
Optionally, the performing voiceprint recognition on the first preset voice information to obtain the first voiceprint information includes:
and carrying out voiceprint recognition on the first preset voice information by using a preset bone conduction voiceprint recognition method to obtain the first voiceprint information.
Optionally, the method further comprises:
judging whether the first wireless audio device is in a working state or not according to first state detection data acquired by a first working state detection device, wherein the first working state detection device is used for detecting whether the first wireless audio device is in the working state or not; and
judging whether the second wireless audio device is in a working state or not according to second state detection data acquired by a second working state detection device, wherein the second working state detection device is used for detecting whether the second wireless audio device is in the working state or not;
under the condition that the first wireless audio device is in a working state, executing a step of acquiring first voiceprint information corresponding to the first wireless audio device; and
and under the condition that the second wireless audio device is in the working state, executing the step of acquiring second voiceprint information corresponding to the second wireless audio device.
Optionally, the first state detection data includes first level change data corresponding to the first operating state detection device, and the second state detection data includes second level change data corresponding to the second operating state detection device.
Optionally, the wireless audio apparatus comprises a true wireless stereo headset.
According to a second aspect of the present disclosure, there is also provided a data processing apparatus applied to a wireless audio apparatus composed of a first wireless audio device and a second wireless audio device, the apparatus including:
a first voiceprint information acquisition module, configured to acquire first voiceprint information corresponding to the first wireless audio device, where the first voiceprint information is obtained by performing voiceprint recognition on first preset voice information sent by a first user, and the first user is a user using the first wireless audio device;
a second voiceprint information acquisition module, configured to acquire second voiceprint information corresponding to the second wireless audio device, where the second voiceprint information is obtained by performing voiceprint recognition on second preset voice information sent by a second user, and the second user is a user using the second wireless audio device;
a matching result obtaining module, configured to obtain a matching result of the first voiceprint information and the second voiceprint information;
and the working mode adjusting module is used for adjusting the working modes of the first wireless audio device and the second wireless audio device according to the matching result.
According to a third aspect of the present disclosure, there is also provided an electronic device comprising the apparatus according to the second aspect of the present disclosure; alternatively,
the electronic device includes: a memory for storing executable instructions; and a processor configured, under control of the instructions, to cause the electronic device to perform the method according to the first aspect of the present disclosure.
One advantageous effect of the present disclosure is that, according to the method of the embodiments, after the first voiceprint information and the second voiceprint information corresponding to the first wireless audio device and the second wireless audio device are acquired, the wireless audio apparatus can determine from their matching result whether the first user corresponding to the first voiceprint information and the second user corresponding to the second voiceprint information are the same user, and can then adaptively adjust the working modes of the first wireless audio device and the second wireless audio device to meet the users' needs. Compared with manually adjusting the working mode of the wireless audio apparatus, this method is more convenient and faster.
Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic flowchart illustrating a method for determining an operating status of a wireless audio device according to an embodiment of the disclosure.
Fig. 2 is a schematic flowchart of a data processing method according to an embodiment of the present disclosure.
Fig. 3 is a schematic flowchart illustrating a process of adjusting a pairing status of a wireless audio device according to an embodiment of the present disclosure.
Fig. 4 is a schematic block diagram of a data processing apparatus according to an embodiment of the present disclosure.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
< method examples >
Fig. 1 is a schematic flow chart illustrating a method for determining the operating state of a wireless audio device according to an embodiment of the present disclosure. Unlike the prior-art methods, which are inconvenient, time-consuming, and labor-intensive, the data processing method provided in this embodiment requires no manual operation by the user: the working state of each wireless audio device in the wireless audio apparatus is conveniently judged from the state detection data collected by the working state detection device built into each wireless audio device, and this judgment determines whether the method of this embodiment should be carried out.
As shown in fig. 1, before the data processing method provided in this embodiment is implemented, the operating state of the wireless audio device may be further determined by the following steps:
step S1100, according to first state detection data acquired by a first working state detection device, determining whether a first wireless audio device is in a working state, wherein the first working state detection device is used for detecting whether the first wireless audio device is in the working state.
Step S1200, determining whether the second wireless audio device is in a working state according to second state detection data acquired by a second working state detection device, where the second working state detection device is configured to detect whether the second wireless audio device is in a working state.
And S1300, under the condition that the first wireless audio device is in the working state, executing the step of acquiring the first voiceprint information corresponding to the first wireless audio device. And
and S1400, under the condition that the second wireless audio device is in the working state, executing the step of acquiring second voiceprint information corresponding to the second wireless audio device.
It should be noted that, the steps of "acquiring the first voiceprint information corresponding to the first wireless audio device" and "acquiring the second voiceprint information corresponding to the second wireless audio device" will be described in the following processing, and here, only how to determine whether the wireless audio device is in the operating state according to the acquired state detection data corresponding to each wireless audio device is described.
In a specific implementation, the first state detection data may be first level change data corresponding to the first operating state detection device, and the second state detection data may be second level change data corresponding to the second operating state detection device, where the operating state detection device may be a Sensor (Sensor) for detecting an operating state of the wireless audio device.
Here, the wireless audio apparatus is taken to be a TWS headset, and the first and second wireless audio devices are its two sub-headsets. For example, two wearing sensors for detecting the working states of the two sub-headsets may be arranged in the two sub-headsets of the TWS headset respectively. When one sub-headset is worn by a user, the level in the corresponding wearing sensor changes, and it can then be determined from the sensor's level change data that this sub-headset is in the wearing mode, that is, in the working state; correspondingly, when the other sub-headset is worn, it can likewise be judged to be in the working state from the level change data of its wearing sensor.
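As a minimal sketch of the wear detection just described, the following Python snippet shows how a wearing sensor's level change data could gate the voiceprint acquisition steps. The data format, threshold, and function names are illustrative assumptions, not details from the patent:

```python
def is_worn(level_samples, high_level=1):
    """Judge the wearing state from a wearing sensor's level samples.

    The sensor is assumed to output a low level while the earbud is idle
    and a high level once it is worn; a current high level is treated as
    'in the working state'. A real sensor would also need debouncing.
    """
    return bool(level_samples) and level_samples[-1] == high_level


def devices_to_query(first_levels, second_levels):
    """Steps S1100-S1400: decide, per sub-earbud, whether the
    corresponding voiceprint acquisition step should run."""
    return {
        "first": is_worn(first_levels),    # gate for step S1300
        "second": is_worn(second_levels),  # gate for step S1400
    }


# The first earbud has seen a low-to-high transition; the second has not,
# so only the first proceeds to acquire voiceprint information.
state = devices_to_query([0, 0, 1, 1], [0, 0, 0, 0])
# state == {"first": True, "second": False}
```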
Through the steps, the working state of each wireless audio device in the wireless audio equipment can be conveniently judged and obtained, and further whether to implement the data processing method provided by the embodiment can be determined.
Fig. 2 is a flowchart of a data processing method provided by an embodiment of the present disclosure. In implementation, the method may be applied to a wireless audio apparatus composed of a first wireless audio device and a second wireless audio device; for example, it may be applied to a true wireless stereo headset composed of a first sub-headset and a second sub-headset.
According to fig. 2, the data processing method provided in this embodiment may include the following steps:
step S2100 obtains first voiceprint information corresponding to the first wireless audio device, where the first voiceprint information is obtained by performing voiceprint recognition on first preset voice information sent by a first user, and the first user is a user using the first wireless audio device.
Voiceprint information refers to the sound wave spectrum, carrying speech information, that is obtained by performing voiceprint recognition on a segment of voice information. A voiceprint is both distinctive and relatively stable: a person's voice changes little over time, and experiments have shown that even when a speaker deliberately imitates another person's voice and tone, or speaks in a whisper, the speaker's voiceprint information remains distinct. In this embodiment, to conveniently detect whether the first user using the first wireless audio device and the second user using the second wireless audio device are the same user at the same time, voiceprint information of the user corresponding to each wireless audio device is acquired separately.
The first preset voice information is voice information corresponding to the first wireless audio device; specifically, it may be preset voice information that the first user is prompted, by a prompt tone, to utter when the first wireless audio device enters the working state.
The obtaining first voiceprint information corresponding to the first wireless audio device includes: acquiring the first preset voice information sent by the first user; and carrying out voiceprint recognition on the first preset voice information to acquire the first voiceprint information.
For example, after a user wears the first sub-headset of the TWS headset, the first sub-headset detects through its built-in wearing sensor that it is in the wearing state, and it may then prompt the user to utter the voice information "the weather is good today". After the user utters this voice information, the first sub-headset acquires it through the microphone device and performs voiceprint recognition on it with a built-in voiceprint recognition algorithm to obtain the first voiceprint information corresponding to the first sub-headset. Because the prior art describes in detail how to perform voiceprint recognition on voice information with a voiceprint recognition algorithm, the details are not repeated here. In addition, the first preset voice information may be set according to specific needs, or other methods may be used to obtain it, which is not limited herein.
In specific implementation, in order to increase recognition accuracy when voiceprint recognition is performed on voice information sent by a user, the voiceprint recognition is performed on the first preset voice information to acquire the first voiceprint information, and the method includes: and carrying out voiceprint recognition on the first preset voice information by using a preset bone conduction voiceprint recognition method to obtain the first voiceprint information.
Bone conduction is a way of transmitting sound waves to the inner ear: the sound waves pass directly through the skull, cause corresponding movement of the perilymph, and activate the spiral organ of the cochlea to produce hearing. In this embodiment, to increase the recognition accuracy of voiceprint recognition on the voice information uttered by the user, the first preset voice information uttered by the first user may be collected by bone conduction, and voiceprint recognition may be performed on it to obtain first voiceprint information of higher accuracy.
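To illustrate the data flow of the voiceprint acquisition steps ("preset phrase in, fixed-length voiceprint out"), here is a deliberately toy Python sketch. It summarizes a signal's energy distribution as a normalized vector; a real system would use a proper speaker-embedding model (e.g. MFCC- or x-vector-based), so every name and detail below is an illustrative assumption rather than the patent's method:

```python
import math

def voiceprint_embedding(samples, n_bands=8):
    """Toy stand-in for voiceprint recognition: split the recorded preset
    phrase into n_bands segments, take the mean energy of each segment,
    and L2-normalize the result into a fixed-length 'voiceprint' vector."""
    chunk = max(1, len(samples) // n_bands)
    bands = []
    for i in range(n_bands):
        seg = samples[i * chunk:(i + 1) * chunk] or [0.0]
        bands.append(sum(x * x for x in seg) / len(seg))
    norm = math.sqrt(sum(b * b for b in bands)) or 1.0
    return [b / norm for b in bands]


# A (fake) recording of the preset phrase yields an 8-dimensional vector
# that can later be matched against the other earbud's vector.
emb = voiceprint_embedding([0.1, -0.2, 0.3, -0.1] * 20)
```

The key property this sketch preserves is that the same phrase recorded twice by the same extractor produces comparable fixed-length vectors, which is what the matching step below relies on.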
Step S2200 is to obtain second voiceprint information corresponding to the second wireless audio device, where the second voiceprint information is obtained by performing voiceprint recognition on second preset voice information sent by a second user, and the second user is a user using the second wireless audio device.
Corresponding to the above method for acquiring the first voiceprint information, the acquiring the second voiceprint information corresponding to the second wireless audio device includes: acquiring the second preset voice information sent by the second user; and carrying out voiceprint recognition on the second preset voice information to acquire the second voiceprint information.
The performing voiceprint recognition on the second preset voice information to obtain the second voiceprint information includes: performing voiceprint recognition on the second preset voice information by using a preset bone conduction voiceprint recognition method to obtain the second voiceprint information.
For avoiding repetition, please refer to the related description in step S2100 for detailed processing on how to obtain the second voiceprint information corresponding to the second wireless audio device, which is not described herein again.
It should be noted that, in this embodiment, "first" and "second" in the first wireless audio apparatus and the second wireless audio apparatus are a relative description, and are used to distinguish and describe the wireless audio apparatuses constituting the wireless audio device, but not to refer to a certain wireless audio apparatus specifically. The first user and the second user may be the same user or different users. In addition, the first preset voice message may be a segment of voice message that is the same as the second preset voice message, or may be set as a different voice message according to the requirement, and is not particularly limited herein.
After the above steps, the first voiceprint information and the second voiceprint information corresponding to the first wireless audio device and the second wireless audio device have been obtained respectively. Whether the first user and the second user are the same user can then be judged from the two pieces of voiceprint information, so that the working modes of the first wireless audio device and the second wireless audio device can be adjusted adaptively.
Step S2300, obtaining a matching result of the first voiceprint information and the second voiceprint information.
Because the voiceprint information of different users is relatively stable and not easy to change, in this embodiment, the obtained first voiceprint information and the obtained second voiceprint information can be matched, and whether the first user and the second user are the same user or not can be conveniently judged according to the matching result.
Still taking the TWS headset as an example: after user A wears the first sub-headset, user A utters the first preset voice information "the weather is good today" in response to the prompt tone, and the first sub-headset obtains the first voiceprint information by collecting this voice information and performing voiceprint recognition on it. After user B wears the second sub-headset, user B utters the second preset voice information "the weather is good today" in response to the prompt tone, and the second sub-headset collects it and performs voiceprint recognition to obtain the second voiceprint information. The acquired first and second voiceprint information are then matched to obtain the matching result.
Step S2400, adjusting the working modes of the first wireless audio device and the second wireless audio device according to the matching result.
In the present embodiment, the matching result may be represented numerically: for example, "1" may indicate that the first voiceprint information and the second voiceprint information match completely and "0" that they do not match at all; or a decimal value between 0 and 1 may represent the degree of matching, with the two voiceprints judged to match when the degree is not less than a preset threshold; or the matching result may simply be expressed as "yes" or "no". The details are not repeated here.
After obtaining the matching result of the first voiceprint information and the second voiceprint information through step S2300, if the matching result indicates that the first voiceprint information and the second voiceprint information match, it may be determined that the first user using the first wireless audio device and the second user using the second wireless audio device are the same user; and if the matching result represents that the first voiceprint information and the second voiceprint information are not matched, the first user and the second user can be judged to be different users, namely the user using the wireless audio equipment in the same time is at least two different users.
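A hedged sketch of how step S2300's numeric matching result could be computed and interpreted. The cosine-similarity measure, the unit-normalized inputs, and the 0.8 threshold are assumptions for illustration; the patent does not fix a specific similarity measure:

```python
def match_result(vp1, vp2, threshold=0.8):
    """Return (matching degree in [0, 1], same-user decision).

    vp1 and vp2 are assumed to be L2-normalized voiceprint vectors, so
    their dot product is a cosine similarity; the decision follows the
    'not less than a preset threshold' rule described in the text."""
    degree = sum(a * b for a, b in zip(vp1, vp2))
    degree = max(0.0, min(1.0, degree))  # clamp into [0, 1]
    return degree, degree >= threshold


# Identical voiceprints -> degree 1.0, judged to be the same user.
degree, same_user = match_result([1.0, 0.0], [1.0, 0.0])
# degree == 1.0, same_user == True
```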
The adjusting the working modes of the first wireless audio device and the second wireless audio device according to the matching result comprises: adjusting the first wireless audio device to a master working mode and the second wireless audio device to a slave working mode under the condition that the first user and the second user are judged to be the same user according to the matching result; and under the condition that the first user and the second user are not the same user according to the matching result, adjusting the first wireless audio device and the second wireless audio device to be in independent working modes.
Adjusting the first wireless audio device to the master working mode and the second wireless audio device to the slave working mode means that, when the users of the two devices are determined to be the same user, the first and second wireless audio devices are automatically bound so that they play the same audio content; that is, the wireless audio apparatus works in the master-slave mode.
Since the prior art describes in detail how a wireless audio apparatus serves the user in the master-slave working mode, only a brief description is given here. Still taking the TWS headset as an example: when it is determined that the user wearing the first and second sub-headsets is the same user, the first sub-headset may be set as the master headset and the second as the slave headset; that is, the sub-headset worn first becomes the master and the one worn later becomes the slave. The master headset may then connect to the audio output device over the Bluetooth protocol and to the slave headset over the Bluetooth protocol, so that the slave plays the same audio content as the master. The audio output device may be any device capable of outputting audio content, for example a mobile phone, a tablet computer, or a computer. The Bluetooth connection between master and slave is only an example; as technology advances, the master and slave headsets may be connected through other protocols, or the audio output device may provide audio content to both headsets simultaneously, which is not described further here.
Adjusting the first wireless audio device and the second wireless audio device to the independent working modes means that, when the users of the two devices are determined not to be the same user, the wireless audio apparatus directly sets each device to the independent working mode in order to reduce user operations, so that the first and second wireless audio devices can connect to different audio output devices and play different audio contents.
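The mode adjustment of step S2400 then reduces to a small decision table. The mode names below are hypothetical labels for illustration, not terms from the patent:

```python
def adjust_working_modes(same_user):
    """Map the same-user decision to per-device working modes:
    same user  -> master/slave (devices bound, same audio content);
    different  -> both independent (each pairs with its own source)."""
    if same_user:
        return {"first": "MASTER", "second": "SLAVE"}
    return {"first": "INDEPENDENT", "second": "INDEPENDENT"}


# Same user wearing both earbuds: a bound master-slave pair.
modes = adjust_working_modes(True)
# modes == {"first": "MASTER", "second": "SLAVE"}
```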
Please refer to fig. 3, which is a flowchart illustrating a process for adjusting a pairing status of a wireless audio device according to an embodiment of the present disclosure. As shown in fig. 3, in a specific implementation, in order to reduce user operations and further improve user experience, after the first wireless audio device and the second wireless audio device are respectively adjusted to appropriate operating modes according to the obtained matching result, the pairing state of the first wireless audio device and the second wireless audio device may be further adjusted, specifically including the following steps.
Step S3100: adjusting the pairing state of the first wireless audio device according to the adjusted working mode of the first wireless audio device; and
Step S3200: adjusting the pairing state of the second wireless audio device according to the adjusted working mode of the second wireless audio device; wherein the pairing state is used to characterize whether the corresponding wireless audio device is paired with an audio output device.
Still taking the TWS headset as an example, when it is determined that the user wearing the first sub-headset and the second sub-headset is the same user, after the first sub-headset is set as the master headset and the second sub-headset as the slave headset (that is, after the two sub-headsets are automatically bound), the master headset may further be adjusted to a pairing mode, so that the user can detect the master headset in the pairing module of the audio output device and conveniently perform pairing connection processing between the devices. Alternatively, when it is determined that the users wearing the first sub-headset and the second sub-headset are different users, the first sub-headset and the second sub-headset may be set to the independent working mode, that is, automatically unbound; meanwhile, to further reduce user operations, both sub-headsets may be adjusted to the pairing mode after the working mode is set, so that the user can detect the first sub-headset and the second sub-headset in the pairing module of the audio output device and conveniently perform pairing connection processing between the devices.
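The pairing-state rule just described can be sketched as follows. This is an illustrative reading of the embodiment, not code from the patent: the detail that a slave headset stays bonded to its master rather than entering pairing mode is an assumption inferred from the master-slave description above.

```python
from enum import Enum

class Mode(Enum):
    MASTER = "master"
    SLAVE = "slave"
    INDEPENDENT = "independent"

def pairing_state_after_adjustment(mode: Mode) -> str:
    """Return the pairing state a device enters once its working mode is set.

    Master and independent devices enter pairing mode so that the audio
    output device (e.g. a phone) can detect them in its pairing module;
    a slave device is assumed to stay bound to its master instead.
    """
    if mode in (Mode.MASTER, Mode.INDEPENDENT):
        return "pairing"            # discoverable by the audio output device
    return "bonded_to_master"       # slave follows the master headset's connection

print(pairing_state_after_adjustment(Mode.INDEPENDENT))  # pairing
```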
It should be noted that although the above description details the data processing method provided by the present disclosure by taking the wireless audio equipment as a TWS headset, with the first wireless audio device and the second wireless audio device being different sub-headsets of that TWS headset, in a specific implementation the method may also be applied to other wireless audio equipment. For example, it may be applied to a wireless sound box including at least two sub-sound boxes: in practice, different sub-sound boxes may be arranged at different positions in the same working environment to provide audio content with a better sound field for a user, or arranged in different working environments to provide different audio content to different users; this is not described herein again.
In summary, in the data processing method provided in this embodiment, after the wireless audio equipment acquires the first voiceprint information and the second voiceprint information corresponding to the first wireless audio device and the second wireless audio device, it can determine, by matching the first voiceprint information against the second voiceprint information, whether the first user corresponding to the first voiceprint information and the second user corresponding to the second voiceprint information are the same user, and can then adaptively adjust the working modes of the first wireless audio device and the second wireless audio device according to the matching result, so as to meet the user's usage requirements. Compared with manually adjusting the working mode of the wireless audio equipment according to user requirements, this method is more convenient and can improve the user experience.
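The patent does not specify how the two pieces of voiceprint information are matched. One common approach in speaker verification is cosine similarity between fixed-length voiceprint embeddings; the sketch below assumes that representation, and the threshold value is purely illustrative.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_user(first_voiceprint, second_voiceprint, threshold=0.8):
    """Hypothetical matching step: True means both voiceprints belong
    to the same speaker (threshold is an assumed tuning parameter)."""
    return cosine_similarity(first_voiceprint, second_voiceprint) >= threshold

vp1 = [0.2, 0.9, 0.1]
vp2 = [0.21, 0.88, 0.12]  # nearly identical embedding -> same speaker
print(same_user(vp1, vp2))  # True
```

The matching result produced here is exactly what the working-mode adjustment step consumes: `True` leads to the master-slave working mode, `False` to the independent working mode.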
< apparatus embodiment >
Corresponding to the data processing method provided in the foregoing method embodiment, this embodiment further provides a data processing apparatus. As shown in fig. 4, the apparatus 4000 may be applied to wireless audio equipment composed of a first wireless audio device and a second wireless audio device, and may specifically include a first voiceprint information obtaining module 4100, a second voiceprint information obtaining module 4200, a matching result obtaining module 4300, and a working mode adjusting module 4400.
The first voiceprint information obtaining module 4100 is configured to obtain first voiceprint information corresponding to the first wireless audio device, where the first voiceprint information is obtained by performing voiceprint recognition on first preset voice information sent by a first user, and the first user is a user using the first wireless audio device.
In one embodiment, the first voiceprint information acquisition module 4100 includes:
the first preset voice information acquisition submodule is used for acquiring the first preset voice information sent by the first user;
and the first voiceprint information acquisition submodule is used for carrying out voiceprint recognition on the first preset voice information to acquire the first voiceprint information.
In one embodiment, the first voiceprint information acquisition submodule includes:
and the bone conduction voiceprint recognition submodule is used for carrying out voiceprint recognition on the first preset voice information by using a preset bone conduction voiceprint recognition method to obtain the first voiceprint information.
The second voiceprint information obtaining module 4200 is configured to obtain second voiceprint information corresponding to the second wireless audio device, where the second voiceprint information is obtained by performing voiceprint recognition on second preset voice information sent by a second user, and the second user is a user using the second wireless audio device.
The matching result obtaining module 4300 is configured to obtain a matching result of the first voiceprint information and the second voiceprint information.
The working mode adjusting module 4400 is configured to adjust the working modes of the first wireless audio device and the second wireless audio device according to the matching result.
In one embodiment, the working mode adjusting module 4400 includes:
a master-slave working mode adjusting submodule, configured to adjust the first wireless audio device to a master working mode and adjust the second wireless audio device to a slave working mode when it is determined that the first user and the second user are the same user according to the matching result;
and the independent working mode adjusting submodule is used for adjusting the first wireless audio device and the second wireless audio device into independent working modes under the condition that the first user and the second user are not the same user according to the matching result.
In one embodiment, the apparatus 4000 further comprises:
a first pairing state adjusting module, configured to adjust a pairing state of the first wireless audio device according to the adjusted working mode of the first wireless audio device; and
a second pairing state adjusting module, configured to adjust the pairing state of the second wireless audio device according to the adjusted working mode of the second wireless audio device;
wherein the pairing status is used to characterize whether the corresponding wireless audio device is paired with an audio output device.
< device embodiment >
In this embodiment, an electronic device is further provided, which may include the data processing apparatus 4000 according to any embodiment of the present disclosure, and is configured to implement any one of the data processing methods in the method embodiment of the present disclosure.
As shown in fig. 5, the electronic device 5000 may further include a processor 5200 and a memory 5100, where the memory 5100 is configured to store executable instructions, and the processor 5200 is configured, under the control of those instructions, to operate the electronic device to perform any one of the data processing methods of the method embodiment of the present disclosure.
The various modules of the above apparatus 4000 may be implemented by the processor 5200 executing the instructions to perform any one of the data processing methods according to the method embodiments of the present disclosure.
The electronic device 5000 may be a terminal device, specifically wireless audio equipment including at least two wireless audio devices, or it may be another device that communicates, through a direct or indirect connection, with a wireless audio device in the wireless audio equipment to send control instructions; this is not limited here.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing state information of the computer-readable program instructions, so that the electronic circuitry can execute the instructions and thereby implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems which perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions. It is well known to those skilled in the art that implementations by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the present disclosure is defined by the appended claims.

Claims (10)

1. A data processing method is applied to wireless audio equipment consisting of a first wireless audio device and a second wireless audio device, and comprises the following steps:
acquiring first voiceprint information corresponding to the first wireless audio device, wherein the first voiceprint information is obtained by carrying out voiceprint recognition on first preset voice information sent by a first user, and the first user is a user using the first wireless audio device;
acquiring second voiceprint information corresponding to the second wireless audio device, wherein the second voiceprint information is obtained by carrying out voiceprint recognition on second preset voice information sent by a second user, and the second user is a user using the second wireless audio device;
obtaining a matching result of the first voiceprint information and the second voiceprint information, wherein the matching result is used for judging whether the first user and the second user are the same user;
and adjusting the working modes of the first wireless audio device and the second wireless audio device according to the matching result, wherein the working modes comprise a master-slave working mode and an independent working mode.
2. The method of claim 1, wherein the adjusting the working modes of the first wireless audio device and the second wireless audio device according to the matching result comprises:
adjusting the first wireless audio device to a master working mode and the second wireless audio device to a slave working mode under the condition that the first user and the second user are judged to be the same user according to the matching result;
and under the condition that the first user and the second user are not the same user according to the matching result, adjusting the first wireless audio device and the second wireless audio device to be in the independent working mode.
3. The method of claim 1, further comprising:
adjusting the pairing state of the first wireless audio device according to the adjusted working mode of the first wireless audio device; and
adjusting the pairing state of the second wireless audio device according to the adjusted working mode of the second wireless audio device;
wherein the pairing status is used to characterize whether the corresponding wireless audio device is paired with an audio output device.
4. The method of claim 1, wherein the obtaining first voiceprint information corresponding to the first wireless audio device comprises:
acquiring the first preset voice information sent by the first user;
and carrying out voiceprint recognition on the first preset voice information to acquire the first voiceprint information.
5. The method according to claim 4, wherein the performing voiceprint recognition on the first preset voice information to obtain the first voiceprint information includes:
and carrying out voiceprint recognition on the first preset voice information by using a preset bone conduction voiceprint recognition method to obtain the first voiceprint information.
6. The method of claim 1, further comprising:
judging whether the first wireless audio device is in a working state or not according to first state detection data acquired by a first working state detection device, wherein the first working state detection device is used for detecting whether the first wireless audio device is in the working state or not; and
judging whether the second wireless audio device is in a working state or not according to second state detection data acquired by a second working state detection device, wherein the second working state detection device is used for detecting whether the second wireless audio device is in the working state or not;
under the condition that the first wireless audio device is in a working state, executing a step of acquiring first voiceprint information corresponding to the first wireless audio device; and
and under the condition that the second wireless audio device is in the working state, executing the step of acquiring second voiceprint information corresponding to the second wireless audio device.
7. The method of claim 6, the first state detection data comprising first level change data corresponding to the first operating state detection device, the second state detection data comprising second level change data corresponding to the second operating state detection device.
8. The method of any of claims 1-7, the wireless audio device comprising a true wireless stereo headset.
9. A data processing apparatus applied to a wireless audio device composed of a first wireless audio apparatus and a second wireless audio apparatus, comprising:
a first voiceprint information acquisition module, configured to acquire first voiceprint information corresponding to the first wireless audio device, where the first voiceprint information is obtained by performing voiceprint recognition on first preset voice information sent by a first user, and the first user is a user using the first wireless audio device;
a second voiceprint information obtaining module, configured to obtain second voiceprint information corresponding to the second wireless audio device, where the second voiceprint information is obtained by performing voiceprint recognition on second preset voice information sent by a second user, and the second user is a user using the second wireless audio device;
a matching result obtaining module, configured to obtain a matching result of the first voiceprint information and the second voiceprint information, where the matching result is used to determine whether the first user and the second user are the same user;
and the working mode adjusting module is used for adjusting the working modes of the first wireless audio device and the second wireless audio device according to the matching result, wherein the working modes comprise a master-slave working mode and an independent working mode.
10. An electronic device comprising the apparatus of claim 9; alternatively,
the electronic device includes:
a memory for storing executable instructions;
a processor, configured to control, according to the instructions, the electronic device to perform the method of any one of claims 1 to 8.
CN202010307941.5A 2020-04-17 2020-04-17 Data processing method and device and electronic equipment Active CN111615036B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010307941.5A CN111615036B (en) 2020-04-17 2020-04-17 Data processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010307941.5A CN111615036B (en) 2020-04-17 2020-04-17 Data processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111615036A CN111615036A (en) 2020-09-01
CN111615036B true CN111615036B (en) 2021-07-23

Family

ID=72204632

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010307941.5A Active CN111615036B (en) 2020-04-17 2020-04-17 Data processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111615036B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107708014A (en) * 2017-11-08 2018-02-16 深圳市沃特沃德股份有限公司 Method and device for automatically switching master-slave relation of wireless earphone and wireless earphone
CN109379653A (en) * 2018-09-30 2019-02-22 Oppo广东移动通信有限公司 Audio frequency transmission method, device, electronic equipment and storage medium
CN110381485A (en) * 2019-06-14 2019-10-25 华为技术有限公司 Bluetooth communication method, TWS bluetooth headset and terminal
CN110933550A (en) * 2019-12-05 2020-03-27 歌尔股份有限公司 Control method and system of wireless earphone pair and wireless communication equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103533497A (en) * 2013-10-09 2014-01-22 上海斐讯数据通信技术有限公司 Sound channel switching device and sound channel switching method of stereo playing system
CN108966066A (en) * 2018-03-07 2018-12-07 深圳市哈尔马科技有限公司 A kind of real time translation interactive system based on wireless headset
US11494472B2 (en) * 2018-07-11 2022-11-08 Realwear, Inc. Voice activated authentication


Also Published As

Publication number Publication date
CN111615036A (en) 2020-09-01

Similar Documents

Publication Publication Date Title
CN111447539B (en) Fitting method and device for hearing earphones
CN109195045B (en) Method and device for detecting wearing state of earphone and earphone
CN110089129B (en) On/off-head detection of personal sound devices using earpiece microphones
US8526649B2 (en) Providing notification sounds in a customizable manner
CN112770214B (en) Earphone control method and device and earphone
US20160234589A1 (en) Audio apparatus and methods
US20150350759A1 (en) Voltage control device for ear microphone
CN108540900B (en) Volume adjusting method and related product
JP6421120B2 (en) Binaural hearing system and method
JP2010527541A (en) Communication device with ambient noise reduction function
JP2017527148A (en) Method and headset for improving sound quality
EP3082348A1 (en) A device-adaptable audio headset
CN109155802B (en) Apparatus for producing an audio output
US10043535B2 (en) Method and device for spectral expansion for an audio signal
US20140294193A1 (en) Transducer apparatus with in-ear microphone
US11741985B2 (en) Method and device for spectral expansion for an audio signal
CN112013949A (en) Earphone wearing state determining method and device and earphone
JP6268033B2 (en) Mobile device
CN111800699B (en) Volume adjustment prompting method and device, earphone equipment and storage medium
WO2024032281A1 (en) Audio control method, wearable device, and electronic device
CN103200480A (en) Headset and working method thereof
CN111615036B (en) Data processing method and device and electronic equipment
KR20130135535A (en) Mobile terminal for storing sound control application
US11217268B2 (en) Real-time augmented hearing platform
KR102046803B1 (en) Hearing assistant system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant