CN115579011A - Identity recognition method and device, electronic equipment and storage medium - Google Patents

Identity recognition method and device, electronic equipment and storage medium

Info

Publication number
CN115579011A
CN115579011A (application number CN202211349025.3A)
Authority
CN
China
Prior art keywords
sound signal
target user
identity
parameter
sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211349025.3A
Other languages
Chinese (zh)
Inventor
周岭松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Beijing Xiaomi Pinecone Electronic Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Beijing Xiaomi Pinecone Electronic Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd and Beijing Xiaomi Pinecone Electronic Co Ltd
Priority to CN202211349025.3A
Publication of CN115579011A
Legal status: Pending

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/02 Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
    • G10L17/04 Training, enrolment or model building
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G10L2021/02082 Noise filtering, the noise being echo or reverberation of the speech
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/21 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being power information
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The disclosure relates to an identity recognition method and device, an electronic device, and a storage medium. The identity recognition method comprises the following steps: acquiring identification information corresponding to a target user, wherein the identification information comprises at least one of an ear canal parameter, a gait parameter, and a bone conduction sound parameter corresponding to the target user, the ear canal parameter representing the ear canal characteristics of the target user, the gait parameter representing the gait characteristics of the target user, and the bone conduction sound parameter representing the bone conduction sound characteristics of the target user; and calling an identity recognition model to process the identification information and obtain the identity information of the target user. The identity recognition model is trained based on identification information and identity information corresponding to a plurality of sample users. Because the identification information used by this method is not easily affected by the external environment, the identity of the target user is recognized more accurately and with greater robustness.

Description

Identity recognition method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an identity recognition method and apparatus, an electronic device, and a storage medium.
Background
With the development of computer technology and artificial intelligence technology, user identification has important applications in the aspects of identity confirmation, personalized recommendation, employee management and the like. Voiceprint recognition is an important way in user identification, and is to confirm the identity of a user who utters voice by analyzing the voice of the user.
In the related art, a user speaks, the electronic device collects the user's speech, and identification is performed based on the collected speech. However, when environmental noise is loud or the user speaks quietly, the collected speech is strongly affected by the environment, recognition accuracy drops, and robustness is poor.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an identity recognition method, apparatus, electronic device, and storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided an identity recognition method, the method including:
acquiring identification information corresponding to a target user, wherein the identification information comprises at least one of an ear canal parameter, a gait parameter, and a bone conduction sound parameter corresponding to the target user, the ear canal parameter representing the ear canal characteristics of the target user, the gait parameter representing the gait characteristics of the target user, and the bone conduction sound parameter representing the bone conduction sound characteristics of the target user;
calling an identity recognition model, and processing the recognition information to obtain the identity information of the target user, wherein the identity information is an identifier which only represents the identity of the target user;
the identity recognition model is obtained based on recognition information and identity information training corresponding to a plurality of sample users.
In some embodiments, the identification information includes ear canal parameters, and the obtaining the identification information corresponding to the target user includes:
collecting a first sound signal and a second sound signal, wherein the first sound signal is a sound signal obtained after the second sound signal is reflected in the ear of the target user, and the second sound signal is a sound signal played in the ear of the target user;
determining the ear canal parameter based on the first sound signal and the second sound signal.
In some embodiments, said determining said ear canal parameter based on said first sound signal and said second sound signal comprises:
determining a cross power spectrum of the first sound signal and the second sound signal, and a self power spectrum of the first sound signal;
determining a ratio between the cross power spectrum and the self power spectrum as the ear canal parameter.
In some embodiments, the identification information includes gait parameters, and the acquiring identification information corresponding to the target user includes:
collecting a third sound signal in the ear of the target user;
performing low-pass filtering on the third sound signal to obtain a fourth sound signal, wherein the fourth sound signal comprises the stepping sound of the target user;
determining the gait parameter based on the fourth sound signal.
In some embodiments, said determining said gait parameter based on said fourth sound signal comprises:
carrying out peak value detection on the fourth sound signal to obtain a sound peak value in the fourth sound signal;
and obtaining the stepping cycle duration of the target user based on the duration of the interval between every two adjacent sound peaks, and determining the stepping cycle duration as the gait parameter.
In some embodiments, the identification information includes the bone conduction sound parameter, and the obtaining the identification information corresponding to the target user includes:
collecting a fifth sound signal in the ear of the target user;
under the condition that the fifth sound signal comprises a second sound signal, performing echo cancellation on the fifth sound signal based on the second sound signal to obtain a sixth sound signal, wherein the second sound signal is a sound signal played in the ear of the target user, and the sixth sound signal is a sound signal obtained by bone conduction;
and performing feature extraction on the sixth sound signal to obtain the bone conduction sound parameter.
In some embodiments, the method further comprises:
and in the case that the fifth sound signal does not include the second sound signal, performing feature extraction on the fifth sound signal to obtain the bone conduction sound parameter.
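The echo-cancellation and feature-extraction steps above can be sketched as follows. The patent does not name specific algorithms, so this sketch assumes a normalized LMS (NLMS) adaptive filter to subtract the played second signal from the in-ear recording, and simple log band energies as a stand-in for the bone conduction sound features:

```python
import numpy as np

def nlms_echo_cancel(mic, ref, taps=64, mu=0.5, eps=1e-8):
    """Remove the played (second) signal from the in-ear (fifth) signal.

    NLMS is one common echo-cancellation choice; the patent does not
    specify an algorithm. The residual approximates the bone-conducted
    sixth signal."""
    w = np.zeros(taps)
    out = np.zeros(len(mic))
    for n in range(taps - 1, len(mic)):
        x = ref[n - taps + 1:n + 1][::-1]  # newest reference sample first
        e = mic[n] - w @ x                 # residual after echo estimate
        w += mu * e * x / (x @ x + eps)    # normalized LMS update
        out[n] = e
    return out

def log_band_energies(signal, n_fft=512, n_bands=16):
    """Illustrative feature extraction: log energies of equal-width
    frequency bands (a stand-in for the unspecified features)."""
    spec = np.abs(np.fft.rfft(signal, n_fft)) ** 2
    bands = np.array_split(spec, n_bands)
    return np.log(np.array([b.sum() for b in bands]) + 1e-10)
```

When the fifth signal does not contain the second signal, `log_band_energies` would be applied to the fifth signal directly, skipping the cancellation step.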
In some embodiments, the obtaining of the identification information corresponding to the target user includes:
detecting whether the target user is in a moving state;
and when the target user is in the moving state, acquiring identification information corresponding to the target user.
In some embodiments, the invoking an identity recognition model and processing the recognition information to obtain the identity information of the target user includes:
and inputting the identification information into the identity recognition model to obtain the identity information output by the identity recognition model.
In some embodiments, the training process of the identity recognition model comprises:
acquiring identification information and identity information corresponding to the plurality of sample users;
calling an identity recognition model to be trained, and processing recognition information corresponding to the sample user to obtain predicted identity information corresponding to the sample user;
and adjusting the model parameters of the identity recognition model to be trained based on the predicted identity information and the identity information corresponding to the sample user.
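The training loop described above can be illustrated with a minimal sketch, using a NumPy softmax classifier as a stand-in for the unspecified identity recognition model; the learning rate and epoch count are arbitrary illustrative values:

```python
import numpy as np

def train_identity_model(features, labels, n_classes, lr=0.1, epochs=200):
    """Train a softmax classifier on (identification info, identity) pairs.

    Step 1: features/labels are the sample users' identification and
    identity information. Step 2: the model predicts identity. Step 3:
    parameters are adjusted from the gap between predicted and true
    identity (cross-entropy gradient descent here)."""
    rng = np.random.default_rng(0)
    n, d = features.shape
    W = rng.normal(scale=0.01, size=(d, n_classes))
    bias = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ W + bias
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)    # predicted identity distribution
        grad = (probs - onehot) / n                  # prediction vs. true identity
        W -= lr * features.T @ grad                  # adjust model parameters
        bias -= lr * grad.sum(axis=0)
    return W, bias

def predict_identity(W, bias, features):
    """Return the identity (class index) with the highest score."""
    return np.argmax(features @ W + bias, axis=1)
```

In practice the identification information vectors would concatenate the ear canal, gait, and bone conduction parameters described above.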
In some embodiments, the method further comprises:
acquiring recommendation information matched with the identity information, wherein the recommendation information indicates audio data recommended to a user corresponding to the identity information;
and playing the audio data indicated by the recommendation information based on the recommendation information.
According to a second aspect of the embodiments of the present disclosure, there is provided an identification apparatus, the apparatus comprising:
the identification information acquisition module is configured to acquire identification information corresponding to a target user, wherein the identification information comprises at least one of an ear canal parameter, a gait parameter, and a bone conduction sound parameter corresponding to the target user, the ear canal parameter representing the ear canal characteristics of the target user, the gait parameter representing the gait characteristics of the target user, and the bone conduction sound parameter representing the bone conduction sound characteristics of the target user;
the identity recognition module is configured to call an identity recognition model, process the recognition information and obtain the identity information of the target user, wherein the identity information is an identifier which uniquely represents the identity of the target user;
the identity recognition model is obtained based on recognition information and identity information training corresponding to a plurality of sample users.
In some embodiments, the identification information includes ear canal parameters, and the identification information obtaining module includes:
a first collecting unit configured to collect a first sound signal and a second sound signal, wherein the first sound signal is a sound signal obtained after the second sound signal is reflected in the ear of the target user, and the second sound signal is a sound signal played in the ear of the target user;
a first parameter determination unit configured to determine the ear canal parameter based on the first sound signal and the second sound signal.
In some embodiments, the first parameter determination unit is configured to:
determining a cross power spectrum of the first sound signal and the second sound signal, and a self power spectrum of the first sound signal;
determining a ratio between the cross power spectrum and the self power spectrum as the ear canal parameter.
In some embodiments, the identification information comprises gait parameters, and the identification information acquisition module comprises:
a second acquisition unit configured to acquire a third sound signal within an ear of the target user;
a filtering unit configured to low-pass filter the third sound signal to obtain a fourth sound signal, where the fourth sound signal includes the stepping sound of the target user;
a second parameter determination unit configured to determine the gait parameter based on the fourth sound signal.
In some embodiments, the second parameter determination unit is configured to:
carrying out peak value detection on the fourth sound signal to obtain a sound peak value in the fourth sound signal;
and obtaining the stepping cycle duration of the target user based on the duration of the interval between every two adjacent sound peaks, and determining the stepping cycle duration as the gait parameter.
In some embodiments, the identification information includes the bone conduction sound parameter, and the identification information obtaining module includes:
a third acquisition unit configured to acquire a fifth sound signal within an ear of the target user;
an echo cancellation unit configured to, if the fifth sound signal includes a second sound signal, perform echo cancellation on the fifth sound signal based on the second sound signal to obtain a sixth sound signal, where the second sound signal is a sound signal played in the ear of the target user, and the sixth sound signal is a sound signal obtained by bone conduction;
a third parameter determining unit configured to perform feature extraction on the sixth sound signal to obtain the bone conduction sound parameter.
In some embodiments, the third parameter determination unit is further configured to, in a case that the fifth sound signal does not include the second sound signal, perform feature extraction on the fifth sound signal to obtain the bone conduction sound parameter.
In some embodiments, the identification information acquisition module is further configured to:
detecting whether the target user is in a moving state;
and when the target user is in the moving state, acquiring identification information corresponding to the target user.
In some embodiments, the identity recognition module is configured to input the identification information into the identity recognition model and obtain the identity information output by the identity recognition model.
In some embodiments, the training process of the identity recognition model comprises:
acquiring identification information and identity information corresponding to the plurality of sample users;
calling an identity recognition model to be trained, and processing recognition information corresponding to the sample user to obtain predicted identity information corresponding to the sample user;
and adjusting the model parameters of the identity recognition model to be trained based on the predicted identity information and the identity information corresponding to the sample user.
In some embodiments, the apparatus further comprises:
the recommending module is configured to acquire recommending information matched with the identity information, and the recommending information indicates audio data recommended to a user corresponding to the identity information;
the recommendation module is further configured to play audio data indicated by the recommendation information based on the recommendation information.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the identification method as described in any one of the first aspects of the embodiments of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the identity recognition method according to any one of the first aspect of the embodiments of the present disclosure.
The method of the present disclosure achieves the following beneficial effects:
the identity recognition method provided by the embodiment of the disclosure calls the identity recognition model, and recognizes the identity of the target user based on the recognition information corresponding to the target user, and since the recognition information is at least one of ear canal parameters, gait parameters and bone conduction sound parameters, which are parameters obtained in the ear of the target user, compared with the voice in the related art, the parameters are not easily affected by the external environment, so that the identity of the target user is recognized more accurately based on the parameters, and the robustness is higher.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart illustrating a method of identity recognition in accordance with an exemplary embodiment;
FIG. 2 is a flow chart illustrating a method of identity recognition in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating a wearable headset according to an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating a fourth sound signal in accordance with an exemplary embodiment;
FIG. 5 is a flow chart illustrating a recommendation method in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating an identity recognition apparatus in accordance with an exemplary embodiment;
FIG. 7 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The identity recognition method provided by the embodiment of the present disclosure is executed by an electronic device. Optionally, the electronic device is a smart phone, a tablet computer, a notebook computer, a desktop computer, a Bluetooth speaker, a vehicle-mounted terminal, a smart home device, or the like, and the electronic device may also be referred to as a user equipment, a portable terminal, a laptop terminal, a desktop terminal, and so on.
Fig. 1 is a flow chart illustrating a method of identification, performed by an electronic device, according to an exemplary embodiment, the method comprising the steps of, referring to fig. 1:
step S101, obtaining identification information corresponding to a target user, wherein the identification information at least comprises at least one of ear canal parameters, gait parameters and bone conduction sound parameters corresponding to the target user.
The target user is any user needing identity recognition. The identification information is used for describing the characteristic of the target user, so that the target user can be identified based on the identification information corresponding to the target user.
The ear canal parameter represents the ear canal characteristics of the target user. Because the ear canals of different users are not identical, the ear canal parameter corresponding to the target user can be distinguished from those of other users, so the target user can be identified through the corresponding ear canal parameter. The gait parameter represents the gait characteristics of the target user. Because the walking or running postures of different users are not exactly the same, a gait parameter determined from the sound generated when the target user walks or runs can represent that posture, so the target user can be identified through the corresponding gait parameter. The bone conduction sound parameter represents the bone conduction sound characteristics of the target user. Different users produce different sounds, and compared with directly collected speech, a parameter collected through bone conduction is less easily affected by the external environment; therefore, identifying the target user through the corresponding bone conduction sound parameter yields higher recognition accuracy.
In the embodiment of the present disclosure, during the subsequent identification, any one, or any two, or all three of the ear canal parameter, the gait parameter, and the bone conduction sound parameter may be used, which is not limited in the embodiment of the present disclosure.
And step S102, calling an identity recognition model, and processing the recognition information to obtain the identity information of the target user.
The identity recognition model is obtained by training based on identification information and identity information corresponding to a plurality of sample users, and the identification information corresponding to a sample user includes the same parameters as the identification information corresponding to the target user, that is, at least one of the ear canal parameter, gait parameter, and bone conduction sound parameter corresponding to that sample user. The identity information of the target user is an identifier that uniquely represents the identity of the target user, for example, a user identifier, a user ID, a user name, or a user registration number.
According to the identity recognition method provided by the embodiment of the disclosure, the identity recognition model is called, the identity of the target user is recognized based on the recognition information corresponding to the target user, the recognition information is at least one of the ear canal parameter, the gait parameter and the bone conduction sound parameter, the parameters are all parameters acquired in the ear of the target user, and compared with voice in the related technology, the parameters are not easily influenced by the external environment, so that the identity of the target user is recognized more accurately based on the parameters, and the robustness is higher.
Before the introduction of the identity recognition process, an application scenario of the identity recognition method provided in the embodiment of the present disclosure is briefly described:
after the target user wears the earphone, the earphone collects identification information, the earphone can be in communication connection with the electronic device, then the electronic device determines identity information of the target user based on the collected identification information, and then the electronic device can recommend audio data to the target user in a personalized mode based on the identified identity information. The earphone may be a wired earphone, a bluetooth earphone, a TWS (True Wireless Stereo) earphone, or the like.
The identification information includes at least one of the ear canal parameter, the gait parameter, and the bone conduction sound parameter corresponding to the target user. The identification process is described below with reference to the embodiment of Fig. 2, in which the identification information includes all three of these parameters.
Fig. 2 is a flow chart illustrating a method of identification, according to an exemplary embodiment, performed by an electronic device, see fig. 2, the method comprising the steps of:
step S201, a first sound signal and a second sound signal in the ear of the target user are collected.
The second sound signal is a sound signal played in the ear of the target user, and the first sound signal is a sound signal obtained after the second sound signal is reflected in the ear of the target user.
In the embodiment of the present disclosure, acquiring the second sound signal means directly taking the played sound signal as the second sound signal, and acquiring the first sound signal means collecting, as the first sound signal, the sound signal obtained after the second sound signal is reflected in the ear of the target user. That is, the first sound signal is the sound signal obtained after the second sound signal undergoes a series of changes in the ear of the target user. For example, referring to the schematic diagram of Fig. 3, a sound signal (the second sound signal) is played into the ear of the target user through a speaker on the earphone, the played sound signal is reflected in the ear, and the reflected sound signal (the first sound signal) is then collected through a feedback microphone on the earphone.
The second sound signal may be any played sound signal; for example, the second sound signal is played music, a spoken-word audio program, or the like.
In some embodiments, the first sound signal and the second sound signal are collected upon detecting that the target user is wearing the earphone and a sound signal starts to play. Moreover, each time the earphone is worn, the first sound signal and the second sound signal need to be collected again, to ensure that the subsequently determined identification information corresponds to the target user currently wearing the earphone.
Step S202, determining the ear canal parameter corresponding to the target user based on the first sound signal and the second sound signal.
Wherein the ear canal parameters represent ear canal characteristics of the target user. Since the first sound signal is obtained by reflecting the second sound signal in the ear of the target user, and the sound propagation path of the reflected second sound signal is related to the ear canal structure of the target user, the ear canal structure of each user is different, so that the sound signals of the same second sound signal reflected in different ears of the users are different, and the determined ear canal parameters can represent the ear canal characteristics of the target user based on the first sound signal and the second sound signal.
In some embodiments, a cross power spectrum of the first sound signal and the second sound signal, and a self power spectrum of the first sound signal, are determined; the ratio between the cross power spectrum and the self power spectrum is determined as the ear canal parameter. The cross power spectrum represents the degree of correlation between the first sound signal and the second sound signal in the frequency domain, and the self power spectrum represents the degree of waveform similarity of the first sound signal at different time instants. The ear canal parameter may also be referred to as a frequency response curve: it represents the characteristics of the sound propagation path along which the second sound signal is reflected through the ear of the target user to the feedback microphone, that is, the characteristics of the ear canal.
For example, the ear canal parameter is determined using the following formula:

H_AB = G_AB / G_AA

where H_AB represents the ear canal parameter, G_AB represents the cross power spectrum of the first sound signal and the second sound signal, and G_AA represents the self power spectrum of the first sound signal.
Optionally, a periodogram method, an average periodogram method, an autocorrelation method, or another method may be used to determine the cross-power spectrum and the self-power spectrum.
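As an illustrative sketch (not part of the patent itself), the ratio of the cross power spectrum to the self power spectrum can be estimated with SciPy's Welch-based estimators; the function name, sample rate and segment length below are assumptions chosen for the example:

```python
import numpy as np
from scipy import signal

def ear_canal_parameter(first_signal, second_signal, fs=16000, nperseg=512):
    """Estimate H_AB = G_AB / G_AA, the frequency response curve used as
    the ear canal parameter.

    G_AB: cross power spectrum of the first (reflected) and second (played) signals.
    G_AA: self power spectrum of the first (reflected) signal.
    """
    freqs, g_ab = signal.csd(first_signal, second_signal, fs=fs, nperseg=nperseg)
    _, g_aa = signal.welch(first_signal, fs=fs, nperseg=nperseg)
    return freqs, g_ab / g_aa
```

A quick sanity check: when the two signals are identical, G_AB equals G_AA, so the ratio is 1 at every frequency.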
Step S203, a third sound signal in the ear of the target user is collected.
Due to the occlusion effect of the earphone, the stepping sound generated when the user walks or runs is conducted into the ear through bone. Because users differ in bone structure, stepping posture, and so on, the path along which the stepping sound is conducted into the ear through bone also differs from user to user, so the stepping sound can be used to identify the user. In the embodiment of the disclosure, when the target user walks or runs, a third sound signal in the ear of the target user is collected. The third sound signal includes the stepping sound and, of course, may also include interfering sounds such as external wind noise, speech, and the like.
In some embodiments, whether the target user is in a moving state is detected, wherein the moving state is a walking or running state; the third sound signal is collected when the target user is in the moving state, and is not collected when the target user is not in the moving state.
Optionally, when the electronic device is worn on the body of the target user or the target user holds the electronic device, whether the target user is in a moving state is detected through a gyroscope and an acceleration sensor in the electronic device, and when the target user is detected to be in the moving state, the electronic device sends a signal acquisition instruction to a connected earphone and acquires a third sound signal through the earphone. Of course, other manners may also be used to detect whether the target user is in a moving state, and the embodiment of the present disclosure does not limit the implementation manner of detecting whether the target user is in a moving state.
In some embodiments, if the target user walks or runs while the first sound signal is being collected, the stepping sound generated by walking or running is also conducted into the ear of the target user by bone conduction, and thus the first sound signal includes not only the second sound signal reflected in the ear but also the stepping sound obtained by bone conduction, in which case, the first sound signal may be directly used as the third sound signal.
Step S204, performing low-pass filtering on the third sound signal to obtain a fourth sound signal, and determining the gait parameter corresponding to the target user based on the fourth sound signal.
In the embodiment of the present disclosure, the frequency of the stepping sound generated by the collision between the sole of the target user and the ground is low, usually below 50 Hz. Performing low-pass filtering on the third sound signal therefore removes the influence of high-frequency components in the third sound signal, for example external wind noise, speech, and the like; of course, in the case that the third sound signal is the first sound signal, the reflected second sound signal can also be removed by the low-pass filtering. The fourth sound signal obtained by the low-pass filtering includes the stepping sound of the target user.
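The low-pass filtering step might be sketched as follows; the Butterworth design, 50 Hz cutoff and sample rate are illustrative assumptions, not choices fixed by the patent:

```python
import numpy as np
from scipy import signal

def extract_stepping_band(third_signal, fs=16000, cutoff_hz=50.0, order=4):
    """Low-pass filter the in-ear recording so that only the low-frequency
    stepping sounds (typically below 50 Hz) remain in the fourth signal."""
    sos = signal.butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    # Zero-phase filtering avoids shifting the step peaks in time
    return signal.sosfiltfilt(sos, third_signal)
```

Components well above the cutoff (wind noise, speech, the played second signal) are strongly attenuated, while the sub-50 Hz step sounds pass through essentially unchanged.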
In some embodiments, determining the gait parameter based on the fourth sound signal comprises: performing peak detection on the fourth sound signal to obtain the sound peaks in the fourth sound signal; obtaining the stepping cycle duration of the target user based on the duration of the interval between every two adjacent sound peaks, and determining the stepping cycle duration as the gait parameter. A sound peak refers to a peak in the spectrum corresponding to the fourth sound signal.
Optionally, obtaining the stepping cycle duration of the target user based on the duration of the interval between every two adjacent sound peaks includes: determining the duration of the interval between every two adjacent sound peaks, averaging the determined durations, and taking the obtained average value as the stepping cycle duration.
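The peak-interval averaging described above can be sketched as follows; applying the detection to the time-domain filtered waveform, and the minimum step spacing and amplitude threshold, are illustrative assumptions:

```python
import numpy as np
from scipy import signal

def stepping_cycle_duration(fourth_signal, fs, min_gap_s=0.25, min_height=0.1):
    """Detect the sound peaks in the filtered signal and average the
    intervals between adjacent peaks to obtain the stepping cycle duration."""
    peaks, _ = signal.find_peaks(
        np.abs(fourth_signal),
        distance=int(min_gap_s * fs),  # steps cannot be closer than min_gap_s
        height=min_height,
    )
    if len(peaks) < 2:
        return None  # not enough steps to estimate a period
    intervals = np.diff(peaks) / fs  # seconds between adjacent peaks
    return float(np.mean(intervals))
```

For a recording with one step roughly every half second, the returned duration should be close to 0.5 s.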
In the embodiment of the disclosure, walking or running of the target user has obvious periodicity, so the stepping cycle duration detected in the fourth sound signal is used as the gait parameter to represent the gait characteristics of the target user. For example, referring to the fourth sound signal shown in fig. 4, it can be seen that the sound segments in the fourth sound signal exhibit a periodic regularity.
Step S205, a fifth sound signal in the ear of the target user is collected.
In the embodiment of the disclosure, the fifth sound signal is collected when the speaking sound of the target user is detected; the fifth sound signal is obtained through bone conduction rather than by directly recording the speaking sound of the target user.
In some embodiments, if the target user is walking or running, the collected fifth sound signal also includes the stepping sound; however, the stepping sound differs from the speaking sound of the target user and interferes with it only slightly.
Step S206, when the fifth sound signal includes the second sound signal, performing echo cancellation on the fifth sound signal based on the second sound signal to obtain a sixth sound signal, and performing feature extraction on the sixth sound signal to obtain a bone conduction sound parameter corresponding to the target user.
In the embodiment of the present disclosure, when the fifth sound signal is collected, if the second sound signal is being played in the ear, it indicates that the fifth sound signal includes the second sound signal, and at this time, the second sound signal in the fifth sound signal needs to be eliminated, so as to avoid interference of the second sound signal.
In some embodiments, an echo cancellation algorithm is used to perform echo cancellation on the fifth sound signal based on the second sound signal, resulting in a sixth sound signal. For example, the echo cancellation algorithm may be a least mean square algorithm, a normalized least mean square algorithm, or the like.
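The normalized least mean square variant mentioned above might be sketched like this; the filter length and step size are arbitrary illustrative choices, and a production canceller would be considerably more elaborate:

```python
import numpy as np

def nlms_echo_cancel(mic_signal, reference, filter_len=64, mu=0.5, eps=1e-8):
    """Normalized LMS echo canceller: adaptively model the echo path from
    the played reference (second signal) into the microphone (fifth signal)
    and subtract the modeled echo, leaving the sixth signal."""
    weights = np.zeros(filter_len)
    buffer = np.zeros(filter_len)   # most recent reference samples, newest first
    output = np.zeros(len(mic_signal))
    for n in range(len(mic_signal)):
        buffer = np.roll(buffer, 1)
        buffer[0] = reference[n]
        echo_estimate = weights @ buffer
        error = mic_signal[n] - echo_estimate  # echo-free residual
        weights += mu * error * buffer / (eps + buffer @ buffer)
        output[n] = error
    return output
```

When the microphone picks up only a filtered copy of the reference, the residual energy after adaptation should be far below the original echo energy.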
In some embodiments, the performing feature extraction on the sixth sound signal to obtain the bone conduction sound parameter corresponding to the target user includes: mel-Frequency Cepstral Coefficients (MFCCs) are extracted based on the sixth sound signal, and the Mel-Frequency Cepstral Coefficients are used as bone conduction sound parameters. The mel-frequency cepstrum coefficient is in a nonlinear correspondence with the hertz frequency in the sixth sound signal and can represent the hertz frequency spectrum feature, so that the mel-frequency cepstrum coefficient can represent the bone conduction sound feature. Of course, in some embodiments, a cepstrum coefficient, an LPCC (Linear Prediction Cepstral Coefficients), or other parameters capable of representing sound characteristics may also be extracted as the bone conduction sound parameter based on the sixth sound signal.
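As a compact illustration of the MFCC pipeline described above (windowed power spectrum, mel filterbank, log, DCT), the following single-frame sketch may help; the frame length, filter counts and coefficient count are arbitrary assumptions, and real systems would typically rely on a dedicated feature-extraction library:

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def simple_mfcc(frame, fs, n_mels=26, n_ceps=13):
    """Simplified MFCC for one audio frame: power spectrum of the windowed
    frame, triangular mel filterbank, log compression, then a DCT."""
    n_fft = len(frame)
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_mels + 2)
    hz_pts = mel_to_hz(mel_pts)
    fbank = np.zeros((n_mels, len(freqs)))
    for i in range(n_mels):
        lo, mid, hi = hz_pts[i], hz_pts[i + 1], hz_pts[i + 2]
        rising = (freqs - lo) / (mid - lo)
        falling = (hi - freqs) / (hi - mid)
        fbank[i] = np.clip(np.minimum(rising, falling), 0.0, None)
    mel_energy = np.log(fbank @ spectrum + 1e-10)
    return dct(mel_energy, type=2, norm="ortho")[:n_ceps]
```

The nonlinear mel mapping is what gives the coefficients their correspondence to the hertz spectrum mentioned in the text.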
In another embodiment, in a case that the fifth sound signal does not include the second sound signal, feature extraction is performed directly on the fifth sound signal to obtain the bone conduction sound parameter. The manner of extracting features from the fifth sound signal is the same as that for the sixth sound signal, and is not described herein again.
It should be noted that, in the embodiment of the present disclosure, the execution sequence of the steps S201 to S206 is merely taken as an example for description, in another embodiment, the steps S203 to S204 may be executed first, and then the steps S201 to S202 and the steps S205 to S206 are executed, or the steps S205 to S206 are executed first, and then the steps S201 to S204 are executed, and the embodiment of the present disclosure does not limit the order of determining the ear canal parameter, the gait parameter and the bone conduction sound parameter.
In addition, it should be noted that the foregoing embodiment describes that, in some embodiments, the third sound signal needs to be collected when the target user is in a moving state. In another embodiment, in order to ensure that the ear canal parameter, the gait parameter and the bone conduction sound parameter are obtained in real time, whether the target user is in a moving state is detected, and when the target user is in the moving state, the identification information corresponding to the target user is acquired, namely the ear canal parameter, the gait parameter and the bone conduction sound parameter. The implementation of detecting whether the target user is in a moving state is the same as in the above embodiment, and is not described herein again.
Step S207, calling an identity recognition model, and processing the ear canal parameters, the gait parameters and the bone conduction sound parameters to obtain the identity information of the target user.
The identity recognition model in the embodiment of the present disclosure is obtained by training based on the recognition information and identity information corresponding to a plurality of sample users. The recognition information corresponding to a sample user comprises the ear canal parameter, gait parameter and bone conduction sound parameter corresponding to that sample user. The training process of the identity recognition model comprises the following steps: acquiring recognition information and identity information corresponding to a plurality of sample users; calling the identity recognition model to be trained, and processing the recognition information corresponding to a sample user to obtain predicted identity information corresponding to the sample user; and adjusting model parameters of the identity recognition model to be trained based on the predicted identity information and the identity information corresponding to the sample user. The identity information may be an identity label.
In some embodiments, the identification information is input to the identity recognition model, that is, the ear canal parameter, the gait parameter and the bone conduction sound parameter are input to the identity recognition model, so as to obtain an identity output by the identity recognition model, where the identity is the identity information of the target user.
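The patent does not fix a particular model family for the identity recognition model; as a toy stand-in, the input step can be illustrated by concatenating the available parameters into one feature vector and classifying it with a nearest-centroid rule (all names and values below are hypothetical):

```python
import numpy as np

def build_feature(ear_canal_param, gait_param, bone_sound_param):
    """Concatenate the identification parameters into one feature vector."""
    return np.concatenate([
        np.ravel(ear_canal_param),
        np.atleast_1d(gait_param),
        np.ravel(bone_sound_param),
    ])

class NearestCentroidIdentifier:
    """Toy stand-in for the identity recognition model: each enrolled
    identity label maps to the mean feature vector of that user's samples."""
    def __init__(self):
        self.centroids = {}

    def fit(self, features, labels):
        for label in set(labels):
            rows = [f for f, l in zip(features, labels) if l == label]
            self.centroids[label] = np.mean(rows, axis=0)

    def predict(self, feature):
        # Output the identity whose centroid is closest to the input
        return min(self.centroids,
                   key=lambda label: np.linalg.norm(feature - self.centroids[label]))
```

A trained model of this shape takes the concatenated parameters as input and outputs an identity label, matching the input/output contract described in the text.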
It should be noted that the embodiment of the present disclosure only takes identifying the target user based on the ear canal parameter, the gait parameter and the bone conduction sound parameter together as the identification information as an example. In another embodiment, the ear canal parameter alone is used as the identification information, that is, the above steps S201 to S202 are executed, and then the identity recognition model is called to process the ear canal parameter to obtain the identity information of the target user; or, the gait parameter is used as the identification information, that is, steps S203 to S204 are executed, and then the identity recognition model is called to process the gait parameter to obtain the identity information of the target user; or, the bone conduction sound parameter is used as the identification information, that is, steps S205 to S206 are executed, and then the identity recognition model is called to process the bone conduction sound parameter to obtain the identity information of the target user; or, the ear canal parameter and the gait parameter are used as the identification information, that is, steps S201 to S204 are executed, and then the identity recognition model is called to process the ear canal parameter and the gait parameter to obtain the identity information of the target user; or, the gait parameter and the bone conduction sound parameter are used as the identification information, that is, steps S203 to S206 are executed, and then the identity recognition model is called to process the gait parameter and the bone conduction sound parameter to obtain the identity information of the target user; or, the ear canal parameter and the bone conduction sound parameter are used as the identification information, that is, steps S201 to S202 and steps S205 to S206 are executed, and then the identity recognition model is called to process the ear canal parameter and the bone conduction sound parameter to obtain the identity information of the target user.
In addition, in any of the above recognition methods, the parameters included in the identification information of the sample users used to train the identity recognition model are the same as the parameters included in the identification information used in that recognition method. For example, when the gait parameter is used as the identification information, the corresponding identity recognition model needs to be trained based on the gait parameters of the sample users.
The identity recognition method provided by the embodiment of the disclosure calls the identity recognition model and recognizes the identity of the target user based on the ear canal parameter, the gait parameter and the bone conduction sound parameter corresponding to the target user. These parameters are acquired in the ear of the target user and, compared with voice in the related art, are not easily affected by the external environment, so the identity of the target user is recognized more accurately and with higher robustness.
While the above-mentioned embodiment shown in fig. 2 describes a process of identifying identity information of a target user, in some embodiments, after identifying the identity information of the target user, the electronic device may also recommend to the target user based on the identified identity information.
Fig. 5 is a flow chart illustrating a recommendation method, performed by an electronic device, according to an exemplary embodiment, and referring to fig. 5, the method includes the steps of:
Step S501, acquiring identification information corresponding to a target user, where the identification information includes at least one of an ear canal parameter, a gait parameter and a bone conduction sound parameter corresponding to the target user.
Step S502, calling an identity recognition model, and processing the recognition information to obtain the identity information of the target user.
The implementation of steps S501 to S502 is the same as the implementation of steps S201 to S207, and is not described herein again.
Step S503, acquiring recommendation information matched with the identity information, wherein the recommendation information indicates audio data recommended to the user corresponding to the identity information.
The electronic equipment stores recommendation information corresponding to each identity information, and after the identity information is obtained, the recommendation information matched with the current identity information can be determined according to the corresponding relation between the identity information and the recommendation information. The recommended audio data may be music, commentary, etc. The recommendation information may be a user tag, a user characteristic, or other information indicating user preferences.
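The stored correspondence between identity information and recommendation information can be as simple as a lookup table; the identity labels, tags and track names below are purely hypothetical:

```python
# Hypothetical per-identity recommendation store kept on the device
RECOMMENDATIONS = {
    "user_a": {"tag": "jazz", "tracks": ["morning_mix", "evening_mix"]},
    "user_b": {"tag": "podcasts", "tracks": ["daily_news"]},
}

def recommendation_for(identity):
    # Unknown identities fall back to an empty recommendation
    return RECOMMENDATIONS.get(identity, {"tag": None, "tracks": []})
```

After step S502 yields an identity label, a lookup of this form returns the matched recommendation information used in step S503.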
Step S504, based on the recommendation information, playing the audio data indicated by the recommendation information.
The electronic equipment plays the audio data indicated by the recommendation information, thereby recommending personalized audio data to the target user. The played audio data is audio data that the target user likes, which improves the use experience of the target user.
According to the method provided by the embodiment of the disclosure, the identity recognition model is called and the identity of the target user is recognized based on the recognition information corresponding to the target user, where the recognition information is at least one of the ear canal parameter, the gait parameter and the bone conduction sound parameter. These parameters are acquired in the ear of the target user and, compared with voice in the related art, are not easily affected by the external environment, so the identity of the target user is recognized more accurately and with higher robustness. Recommendation information matched with the identified identity information is then determined, and audio data is played for the target user in a personalized manner based on the recommendation information, improving the use experience of the target user.
Fig. 6 is a block diagram of an identity recognition apparatus according to an exemplary embodiment. Referring to fig. 6, the apparatus includes:
the identification information acquisition module 601 is configured to acquire identification information corresponding to a target user, where the identification information includes at least one of an ear canal parameter, a gait parameter and a bone conduction sound parameter corresponding to the target user; the ear canal parameter represents ear canal characteristics of the target user, the gait parameter represents gait characteristics of the target user, and the bone conduction sound parameter represents bone conduction sound characteristics of the target user;
the identity recognition module 602 is configured to invoke an identity recognition model, and process the recognition information to obtain identity information of the target user, where the identity information is an identifier uniquely representing the identity of the target user;
the identity recognition model is obtained by training based on recognition information and identity information corresponding to a plurality of sample users.
The apparatus provided by the embodiment of the disclosure calls the identity recognition model and recognizes the identity of the target user based on the recognition information corresponding to the target user. Since the recognition information is at least one of the ear canal parameter, the gait parameter and the bone conduction sound parameter, which are parameters obtained through the ear of the target user and, compared with voice in the related art, are not easily affected by the external environment, the identity of the target user is recognized more accurately based on these parameters, with higher robustness.
In some embodiments, the identification information includes ear canal parameters, and the identification information obtaining module 601 includes:
the first acquisition unit is configured to acquire a first sound signal and a second sound signal, wherein the first sound signal is a sound signal obtained after the second sound signal is reflected in the ear of the target user, and the second sound signal is a sound signal played in the ear of the target user;
a first parameter determination unit configured to determine an ear canal parameter based on the first sound signal and the second sound signal.
In some embodiments, the first parameter determination unit is configured to:
determining cross power spectra of the first and second sound signals, and an auto power spectrum of the first sound signal;
the ratio between the cross power spectrum and the self power spectrum is determined as the ear canal parameter.
In some embodiments, the identification information includes gait parameters, and the identification information obtaining module 601 includes:
a second collecting unit configured to collect a third sound signal in an ear of the target user;
a filtering unit configured to perform low-pass filtering on the third sound signal to obtain a fourth sound signal, wherein the fourth sound signal comprises the stepping sound of the target user;
a second parameter determination unit configured to determine the gait parameter based on the fourth sound signal.
In some embodiments, the second parameter determination unit is configured to:
carrying out peak value detection on the fourth sound signal to obtain a sound peak value in the fourth sound signal;
and obtaining the stepping cycle duration of the target user based on the duration of the interval between every two adjacent sound peaks, and determining the stepping cycle duration as the gait parameter.
In some embodiments, the identification information includes bone conduction sound parameters, and the identification information obtaining module 601 includes:
a third collecting unit configured to collect a fifth sound signal in the ear of the target user;
the echo cancellation unit is configured to, when the fifth sound signal includes the second sound signal, perform echo cancellation on the fifth sound signal based on the second sound signal to obtain a sixth sound signal, where the second sound signal is a sound signal played in the ear of the target user, and the sixth sound signal is a sound signal obtained through bone conduction;
and the third parameter determining unit is configured to perform feature extraction on the sixth sound signal to obtain the bone conduction sound parameter.
In some embodiments, the third parameter determining unit is further configured to, in a case that the fifth sound signal does not include the second sound signal, perform feature extraction on the fifth sound signal to obtain the bone conduction sound parameter.
In some embodiments, the identification information acquisition module 601 is further configured to:
detecting whether a target user is in a moving state;
and when the target user is in a moving state, acquiring identification information corresponding to the target user.
In some embodiments, the identity recognition module 602 is configured to input the recognition information into the identity recognition model, and obtain the identity output by the identity recognition model.
In some embodiments, the training process of the identity recognition model comprises:
acquiring identification information and identity information corresponding to a plurality of sample users;
calling an identity recognition model to be trained, and processing recognition information corresponding to a sample user to obtain predicted identity information corresponding to the sample user;
and adjusting the model parameters of the identity recognition model to be trained based on the corresponding predicted identity information and the identity information of the sample user.
In some embodiments, the apparatus further comprises:
the recommendation module is configured to acquire recommendation information matched with the identity information, and the recommendation information indicates audio data recommended to a user corresponding to the identity information;
and the recommendation module is also configured to play the audio data indicated by the recommendation information based on the recommendation information.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to execute the identification method in the above embodiments.
Fig. 7 is a block diagram of an electronic device 700 shown in accordance with an example embodiment.
Referring to fig. 7, electronic device 700 may include one or more of the following components: processing component 702, memory 704, power component 706, multimedia component 708, audio component 710, input/output (I/O) interface 712, sensor component 714, and communications component 716.
The processing component 702 generally controls overall operation of the electronic device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 702 may include one or more processors 720 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 702 may include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operations at the electronic device 700. Examples of such data include instructions for any application or method operating on the electronic device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 706 provides power to the various components of the electronic device 700. The power components 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 700.
The multimedia component 708 includes a screen that provides an output interface between the electronic device 700 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 708 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 700 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 710 is configured to output and/or input audio signals. For example, the audio component 710 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 704 or transmitted via the communication component 716. In some embodiments, audio component 710 also includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 714 includes one or more sensors for providing various aspects of status assessment for the electronic device 700. For example, the sensor assembly 714 may detect an open/closed state of the electronic device 700, the relative positioning of components, such as a display and keypad of the electronic device 700, the sensor assembly 714 may also detect a change in the position of the electronic device 700 or a component of the electronic device 700, the presence or absence of user contact with the electronic device 700, orientation or acceleration/deceleration of the electronic device 700, and a change in the temperature of the electronic device 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the electronic device 700 and other devices. The electronic device 700 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 704 comprising instructions, executable by the processor 720 of the electronic device 700 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The disclosed embodiments also provide a non-transitory computer-readable storage medium, where instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the identity recognition method in the above embodiments.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (14)

1. An identity recognition method, the method comprising:
acquiring identification information corresponding to a target user, wherein the identification information comprises at least one of an ear canal parameter, a gait parameter and a bone conduction sound parameter corresponding to the target user, the ear canal parameter represents ear canal characteristics of the target user, the gait parameter represents gait characteristics of the target user, and the bone conduction sound parameter represents bone conduction sound characteristics of the target user;
calling an identity recognition model, and processing the recognition information to obtain identity information of the target user, wherein the identity information is an identifier which uniquely represents the identity of the target user;
the identity recognition model is obtained by training based on recognition information and identity information corresponding to a plurality of sample users.
2. The method according to claim 1, wherein the identification information includes the ear canal parameter, and the acquiring of the identification information corresponding to the target user includes:
collecting a first sound signal and a second sound signal, wherein the first sound signal is a sound signal obtained after the second sound signal is reflected in the ear of the target user, and the second sound signal is a sound signal played in the ear of the target user;
determining the ear canal parameter based on the first sound signal and the second sound signal.
3. The method of claim 2, wherein determining the ear canal parameter based on the first sound signal and the second sound signal comprises:
determining a cross power spectrum of the first sound signal and the second sound signal, and a self power spectrum of the first sound signal;
determining a ratio between the cross power spectrum and the self power spectrum as the ear canal parameter.
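The spectral ratio in claim 3 can be sketched numerically. The snippet below is an illustrative estimate only, not the patented implementation: the function name `ear_canal_parameter`, the sampling rate, and the segment length are all assumptions, and SciPy's Welch-based estimators stand in for whatever spectral estimation the patent contemplates. Following the claim wording, the ratio is taken between the cross power spectrum of the first (in-ear recorded) and second (played) sound signals and the self (auto) power spectrum of the first sound signal.

```python
import numpy as np
from scipy.signal import csd, welch

def ear_canal_parameter(first, second, fs=16000, nperseg=256):
    """Claim 3 sketch: ratio of the cross power spectrum of the first
    and second sound signals to the self power spectrum of the first
    sound signal, as a frequency-domain ear canal descriptor."""
    _, p_cross = csd(first, second, fs=fs, nperseg=nperseg)
    _, p_self = welch(first, fs=fs, nperseg=nperseg)
    return p_cross / p_self
```

With a toy "reflection" that is a pure attenuation of the played signal, the ratio reduces to the inverse of the attenuation at every frequency bin, which gives a quick sanity check of the estimator.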
4. The identity recognition method according to claim 1, wherein the identification information includes the gait parameter, and the acquiring of the identification information corresponding to the target user includes:
collecting a third sound signal in the ear of the target user;
performing low-pass filtering on the third sound signal to obtain a fourth sound signal, wherein the fourth sound signal comprises the stepping sound of the target user;
determining the gait parameter based on the fourth sound signal.
5. The method according to claim 4, wherein the determining of the gait parameter based on the fourth sound signal comprises:
performing peak detection on the fourth sound signal to obtain sound peaks in the fourth sound signal;
and obtaining the stepping cycle duration of the target user based on the duration of the interval between every two adjacent sound peaks, and determining the stepping cycle duration as the gait parameter.
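Claims 4 and 5 together describe a pipeline: low-pass filter the in-ear recording to retain the stepping sound, detect peaks, and average the intervals between adjacent peaks to get the stepping cycle duration. A minimal sketch follows; the cutoff frequency, the relative peak-height threshold, and the minimum peak spacing are assumed illustrative values, as the patent does not specify them.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def step_cycle_duration(in_ear_signal, fs=16000, cutoff=100.0):
    """Claims 4-5 sketch: isolate the low-frequency stepping sound,
    detect its peaks, and return the mean peak interval in seconds."""
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    stepping = filtfilt(b, a, in_ear_signal)  # "fourth sound signal"
    envelope = np.abs(stepping)
    peaks, _ = find_peaks(
        envelope,
        height=0.5 * np.max(envelope),  # assumed relative threshold
        distance=int(0.3 * fs),         # assumed minimum step spacing of 0.3 s
    )
    if len(peaks) < 2:
        return None  # not enough steps to estimate a cycle
    return float(np.mean(np.diff(peaks)) / fs)
```

On a synthetic recording with low-frequency bursts every half second buried in higher-frequency content, the function recovers a cycle duration of about 0.5 s.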
6. The identity recognition method according to claim 1, wherein the identification information includes the bone conduction sound parameter, and the obtaining of the identification information corresponding to the target user includes:
collecting a fifth sound signal in the ear of the target user;
under the condition that the fifth sound signal comprises a second sound signal, performing echo cancellation on the fifth sound signal based on the second sound signal to obtain a sixth sound signal, wherein the second sound signal is a sound signal played in the ear of the target user, and the sixth sound signal is a sound signal obtained through bone conduction;
and performing feature extraction on the sixth sound signal to obtain the bone conduction sound parameter.
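Claim 6 performs echo cancellation on the in-ear recording using the played (second) sound signal as a reference. The patent does not name an algorithm; one common choice for this kind of reference-based cancellation is a normalized LMS (NLMS) adaptive filter, sketched below with assumed function names and parameters. It is an illustration of the general technique, not the patent's implementation.

```python
import numpy as np

def nlms_echo_cancel(mic, playback, filter_len=128, mu=0.5, eps=1e-8):
    """NLMS sketch of claim 6: adaptively estimate the playback echo in
    the in-ear microphone signal and subtract it, leaving the
    bone-conducted component as the residual."""
    w = np.zeros(filter_len)          # adaptive echo-path estimate
    x_buf = np.zeros(filter_len)      # most recent playback samples
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = playback[n]
        echo_hat = w @ x_buf
        e = mic[n] - echo_hat         # residual = bone-conducted sound + error
        w += (mu / (eps + x_buf @ x_buf)) * e * x_buf
        out[n] = e
    return out
```

After convergence the residual should contain mostly the bone-conducted sound, on which feature extraction (the last step of claim 6) would then operate.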
7. The method of claim 6, further comprising:
and under the condition that the fifth sound signal does not comprise the second sound signal, performing feature extraction on the fifth sound signal to obtain the bone conduction sound parameter.
8. The method according to claim 1, wherein the obtaining of the identification information corresponding to the target user comprises:
detecting whether the target user is in a moving state;
and when the target user is in the moving state, acquiring identification information corresponding to the target user.
9. The identity recognition method of claim 1, wherein the invoking of the identity recognition model and the processing of the recognition information to obtain the identity information of the target user comprises:
and inputting the identification information into the identity recognition model to obtain an identity output by the identity recognition model.
10. The identity recognition method of claim 1, wherein the training process of the identity recognition model comprises:
acquiring identification information and identity information corresponding to the plurality of sample users;
calling an identity recognition model to be trained, and processing recognition information corresponding to the sample user to obtain predicted identity information corresponding to the sample user;
and adjusting the model parameters of the identity recognition model to be trained based on the predicted identity information and the identity information corresponding to the sample user.
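The training loop of claim 10 (predict identity from recognition information, then adjust model parameters based on the gap between predicted and true identity) can be sketched with a minimal classifier. The patent does not specify a model architecture; the linear softmax model, learning rate, and epoch count below are illustrative assumptions only.

```python
import numpy as np

def train_identity_model(features, labels, n_classes, lr=0.1, epochs=200):
    """Claim 10 sketch: fit a linear softmax classifier mapping
    recognition features to sample-user identities by gradient descent
    on the predicted-vs-true identity error."""
    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.01, (features.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - onehot) / len(labels)             # predicted vs. true identity
        W -= lr * features.T @ grad                   # adjust model parameters
        b -= lr * grad.sum(axis=0)
    return W, b

def predict_identity(W, b, features):
    """Claim 9 analogue: return the identity index with the highest score."""
    return np.argmax(features @ W + b, axis=1)
```

On two well-separated synthetic "users" the sketch reaches near-perfect accuracy, which is all it is meant to demonstrate.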
11. The method of claim 1, further comprising:
acquiring recommendation information matched with the identity information, wherein the recommendation information indicates audio data recommended to a user corresponding to the identity information;
and playing the audio data indicated by the recommendation information based on the recommendation information.
12. An identification device, the device comprising:
the identification information acquisition module is configured to acquire identification information corresponding to a target user, wherein the identification information comprises at least one of an ear canal parameter, a gait parameter, and a bone conduction sound parameter corresponding to the target user, the ear canal parameter represents ear canal characteristics of the target user, the gait parameter represents gait characteristics of the target user, and the bone conduction sound parameter represents bone conduction sound characteristics of the target user;
the identity recognition module is configured to call an identity recognition model, process the recognition information and obtain the identity information of the target user, wherein the identity information is an identifier which uniquely represents the identity of the target user;
the identity recognition model is obtained by training based on recognition information and identity information corresponding to a plurality of sample users.
13. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the identification method of any of claims 1-11.
14. A non-transitory computer readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the identification method of any of claims 1-11.
CN202211349025.3A 2022-10-31 2022-10-31 Identity recognition method and device, electronic equipment and storage medium Pending CN115579011A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211349025.3A CN115579011A (en) 2022-10-31 2022-10-31 Identity recognition method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211349025.3A CN115579011A (en) 2022-10-31 2022-10-31 Identity recognition method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115579011A true CN115579011A (en) 2023-01-06

Family

ID=84589458

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211349025.3A Pending CN115579011A (en) 2022-10-31 2022-10-31 Identity recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115579011A (en)

Similar Documents

Publication Publication Date Title
CN109446876B (en) Sign language information processing method and device, electronic equipment and readable storage medium
CN105282345B (en) The adjusting method and device of In Call
CN108762494B (en) Method, device and storage medium for displaying information
CN110890083B (en) Audio data processing method and device, electronic equipment and storage medium
CN107945806B (en) User identification method and device based on sound characteristics
CN109360197B (en) Image processing method and device, electronic equipment and storage medium
CN106600530B (en) Picture synthesis method and device
CN106409317B (en) Method and device for extracting dream speech
CN110648656A (en) Voice endpoint detection method and device, electronic equipment and storage medium
CN112188091B (en) Face information identification method and device, electronic equipment and storage medium
CN113113044B (en) Audio processing method and device, terminal and storage medium
CN114363770A (en) Filtering method and device in pass-through mode, earphone and readable storage medium
CN112820300A (en) Audio processing method and device, terminal and storage medium
CN115039169A (en) Voice instruction recognition method, electronic device and non-transitory computer readable storage medium
CN112201267A (en) Audio processing method and device, electronic equipment and storage medium
CN114040309B (en) Wind noise detection method and device, electronic equipment and storage medium
CN109102813B (en) Voiceprint recognition method and device, electronic equipment and storage medium
CN114095817B (en) Noise reduction method and device for earphone, earphone and storage medium
CN115579011A (en) Identity recognition method and device, electronic equipment and storage medium
JP2024510779A (en) Voice control method and device
CN117642817A (en) Method, device and storage medium for identifying audio data category
CN112882394A (en) Device control method, control apparatus, and readable storage medium
CN109102810B (en) Voiceprint recognition method and device
CN116530944B (en) Sound processing method and electronic equipment
CN115035886B (en) Voiceprint recognition method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination