CN108776795A - Method for identifying ID, device and terminal device - Google Patents
- Publication number
- CN108776795A (application CN201810639690.3A)
- Authority
- CN
- China
- Prior art keywords
- feature
- coefficient
- matching degree
- user
- voice
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1347—Preprocessing; Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
- G10L17/02—Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
- G10L17/06—Decision making techniques; Pattern matching strategies
Abstract
The present invention relates to the field of information processing, and provides a user identity recognition method and a terminal device. The method includes: collecting a facial image, an iris image, a fingerprint image, and a voice sample of the user; extracting a user facial feature, a user iris feature, a user fingerprint feature, and a user voice feature, which correspond respectively to a first coefficient, a second coefficient, a third coefficient, and a fourth coefficient; matching each extracted feature against the corresponding preset feature to obtain a first, second, third, and fourth matching degree; determining the user's final matching degree from the matching degrees and their coefficients; and identifying the user's identity according to the relationship between the final matching degree and a preset matching degree. The method can improve recognition accuracy.
Description
Technical field
The present invention relates to the field of information processing, and in particular to a user identity recognition method, device, and terminal device.
Background technology
With the wide adoption of biometric identification technology, application fields with high authentication requirements, such as security and financial institutions, have created demand for multi-biometric authentication.
Existing multi-factor identity recognition techniques commonly combine facial features with fingerprints, iris features with facial features, or fingerprints with signatures to authenticate a user. However, although traditional authentication methods combine multiple biometric features, they generally suffer from high error rates and low recognition accuracy.
Summary of the invention
In view of this, embodiments of the present invention provide a user identity recognition method, device, and terminal device, to solve the problem that current authentication methods generally suffer from high error rates and low recognition accuracy.
A first aspect of the embodiments of the present invention provides a user identity recognition method, including:
obtaining a username and password input by the user and verifying them; after the username and password are verified, collecting a facial image, an iris image, a fingerprint image, and a voice sample of the user;
extracting, from the facial image, iris image, fingerprint image, and voice respectively, a user facial feature, a user iris feature, a user fingerprint feature, and a user voice feature that characterize the user's identity, where the four features correspond respectively to a first coefficient, a second coefficient, a third coefficient, and a fourth coefficient;
matching the user facial feature against a preset facial feature to obtain a first matching degree, the user iris feature against a preset iris feature to obtain a second matching degree, the user fingerprint feature against a preset fingerprint feature to obtain a third matching degree, and the user voice feature against a preset voice feature to obtain a fourth matching degree; and
determining the user's final matching degree from the first through fourth matching degrees and the first through fourth coefficients, and identifying the user's identity according to the relationship between the final matching degree and a preset matching degree.
A second aspect of the embodiments of the present invention provides a user identity recognition device, including:
an acquisition and verification module, configured to obtain a username and password input by the user and verify them, and, after the username and password are verified, to collect a facial image, an iris image, a fingerprint image, and a voice sample of the user;
an extraction module, configured to extract, from the facial image, iris image, fingerprint image, and voice respectively, a user facial feature, a user iris feature, a user fingerprint feature, and a user voice feature that characterize the user's identity, where the four features correspond respectively to a first, second, third, and fourth coefficient;
a matching module, configured to match the user facial feature against a preset facial feature to obtain a first matching degree, the user iris feature against a preset iris feature to obtain a second matching degree, the user fingerprint feature against a preset fingerprint feature to obtain a third matching degree, and the user voice feature against a preset voice feature to obtain a fourth matching degree; and
an identification module, configured to determine the user's final matching degree from the first through fourth matching degrees and the first through fourth coefficients, and to identify the user's identity according to the relationship between the final matching degree and a preset matching degree.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the computer program, implements the user identity recognition method of the first aspect.
Compared with the prior art, the embodiments of the present invention have the following advantageous effects: after the username and password input by the user are verified, a facial image, an iris image, a fingerprint image, and a voice sample of the user are collected; a user facial feature, user iris feature, user fingerprint feature, and user voice feature characterizing the user's identity are then extracted from them; each feature is matched against its corresponding preset feature to obtain a first, second, third, and fourth matching degree; and the user's final matching degree is determined from these matching degrees and the first through fourth coefficients, with the user's identity identified according to the relationship between the final matching degree and a preset matching degree. Because the username and password are combined with multiple biometric features, the accuracy of user identity recognition can be improved.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required by the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a user identity recognition method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of extracting, from the facial image, a user facial feature characterizing the user's identity, according to an embodiment of the present invention;
Fig. 3 is a flowchart of matching the user facial feature against a preset facial feature to obtain a first matching degree, according to an embodiment of the present invention;
Fig. 4 is a flowchart of matching the user voice feature against a preset voice feature to obtain a fourth matching degree, according to an embodiment of the present invention;
Fig. 5 is a flowchart of step S401 in Fig. 4;
Fig. 6 is a schematic diagram of a user identity recognition device provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of a terminal device provided by an embodiment of the present invention.
Detailed description of the embodiments
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present invention. However, it will be clear to those skilled in the art that the present invention may also be practiced in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the invention.
The technical solutions of the present invention are illustrated below by way of specific embodiments.
Fig. 1 is a flowchart of a user identity recognition method provided by an embodiment of the present invention, described in detail as follows:
Step S101: obtain the username and password input by the user and verify them; after the username and password are verified, collect a facial image, an iris image, a fingerprint image, and a voice sample of the user.
The facial image, iris image, and fingerprint image may be collected by an image acquisition device, and the user's voice by an audio acquisition device.
Step S102: extract, from the facial image, iris image, fingerprint image, and voice respectively, a user facial feature, user iris feature, user fingerprint feature, and user voice feature characterizing the user's identity, where the four features correspond respectively to a first, second, third, and fourth coefficient.
In one embodiment, referring to Fig. 2, extracting the user facial feature characterizing the user's identity from the facial image includes the following steps:
Step S201: construct scale-space images characterizing the user's facial features, and detect feature points in the scale-space images.
The user's facial feature information may be acquired with the SIFT (Scale-Invariant Feature Transform) algorithm.
After the scale-space images are constructed, the feature points in them are detected. In this embodiment, the feature points may be extreme points, but are not limited thereto.
In this step, the scale-space images may be generated by convolving the user's facial image with variable-scale Gaussian functions. A difference-of-Gaussian image sequence is then produced by convolving the facial image with difference-of-Gaussian functions. In the difference-of-Gaussian sequence, each pixel is compared with its neighbors at the current scale and at the adjacent scales, and the local maxima and minima are taken as extreme points.
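As an illustration only (not code from the patent), the difference-of-Gaussian extremum search described above can be sketched as follows. The `sigmas` values and the pure-NumPy separable blur are assumptions made for the sketch:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with edge padding."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    conv = lambda v: np.convolve(np.pad(v, radius, mode="edge"), kernel, mode="valid")
    return np.apply_along_axis(conv, 0, np.apply_along_axis(conv, 1, img.astype(float)))

def dog_extrema(img, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Return (scale, row, col) triples where a difference-of-Gaussian
    value is strictly larger or smaller than all 26 neighbours."""
    stack = np.stack([gaussian_blur(img, s) for s in sigmas])
    dog = stack[1:] - stack[:-1]              # difference-of-Gaussian sequence
    points = []
    for s in range(1, dog.shape[0] - 1):      # need a scale above and below
        for r in range(1, dog.shape[1] - 1):
            for c in range(1, dog.shape[2] - 1):
                cube = dog[s-1:s+2, r-1:r+2, c-1:c+2].ravel()
                v, neighbours = cube[13], np.delete(cube, 13)
                if v > neighbours.max() or v < neighbours.min():
                    points.append((s, r, c))
    return points
```

On a synthetic image containing a single Gaussian blob, the extremum found by this sketch lands on the blob center at the scale closest to the blob's size.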
Step S202: filter and localize each feature point in the scale space, and obtain the stable feature points that satisfy a preset condition.
A filter condition may be set to filter and localize each feature point in the scale space, removing the feature points detected in step S201 that do not satisfy the preset condition and keeping the stable feature points. Specifically, each feature point may be localized to detect whether it is an edge point; if it is an edge point it is filtered out, otherwise it is retained.
Step S203: assign a direction to each stable feature point, and generate the feature descriptors of the user facial feature.
Specifically, the gradient direction distribution of the pixels in each stable feature point's neighborhood may be used to assign the point a direction, giving it rotational invariance.
Assigning a direction to each stable feature point and generating the feature descriptors is specifically: take a neighborhood of preset size centered on each stable feature point as a sampling window, apply Gaussian weighting to the relative directions of the sampled points with respect to the feature point, and accumulate them into a direction histogram to obtain the feature descriptor.
For example, with a 16*16 neighborhood centered on each stable feature point as the sampling window, the Gaussian-weighted relative directions of the sampled points are accumulated into 8-bin direction histograms, yielding a 4*4*8 = 128-dimensional feature descriptor.
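A minimal sketch of this 4*4*8 histogram construction (a hypothetical helper, not from the patent; the Gaussian weighting width and the L2 normalization are assumptions):

```python
import numpy as np

def sift_like_descriptor(img, r, c):
    """128-d descriptor: gradients in the 16x16 window around (r, c) are
    Gaussian-weighted and binned into a 4x4 grid of 8-bin orientation
    histograms, then L2-normalised."""
    win = img[r-8:r+8, c-8:c+8].astype(float)
    gy, gx = np.gradient(win)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)          # orientation in [0, 2*pi)
    yy, xx = np.mgrid[-8:8, -8:8] + 0.5
    weight = np.exp(-(xx**2 + yy**2) / (2 * 8.0**2))  # centred Gaussian weighting
    hist = np.zeros((4, 4, 8))
    for i in range(16):
        for j in range(16):
            b = int(ang[i, j] / (2 * np.pi) * 8) % 8  # 8 direction bins
            hist[i // 4, j // 4, b] += mag[i, j] * weight[i, j]
    d = hist.ravel()                                  # 4*4*8 = 128 dimensions
    n = np.linalg.norm(d)
    return d / n if n > 0 else d
```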
In addition, the method described in Fig. 2 may also be used for feature extraction from the iris image and the fingerprint image.
Step S103: match the user facial feature against a preset facial feature to obtain a first matching degree, the user iris feature against a preset iris feature to obtain a second matching degree, the user fingerprint feature against a preset fingerprint feature to obtain a third matching degree, and the user voice feature against a preset voice feature to obtain a fourth matching degree.
In one embodiment, referring to Fig. 3, matching the user facial feature against the preset facial feature to obtain the first matching degree includes the following steps:
Step S301: obtain the second feature descriptors of the preset facial feature.
In this embodiment, the method described in Fig. 2 may be applied to a pre-stored user facial image to obtain three or more second feature descriptors corresponding to the preset facial feature.
For example, the process of obtaining the feature descriptors of the preset facial feature from the pre-stored facial image is as follows:
construct the scale-space images of the user facial feature in the pre-stored facial image, and detect the feature points in the scale-space images;
filter and localize each feature point in the scale space, and obtain the stable feature points that satisfy the preset condition;
assign a direction to each stable feature point, and generate the feature descriptors.
Step S302: obtain the three or more first feature descriptors nearest in Euclidean distance to each second feature descriptor. The first feature descriptors are the descriptors of the user facial feature extracted from the facial image in steps S201 to S203.
For each second feature descriptor, the Euclidean distance to each first feature descriptor may be computed from the directions and positions of the descriptors. By computing these distances, the three or more first feature descriptors nearest to each second feature descriptor are obtained.
Step S303: determine the first matching degree between the user facial feature and the preset facial feature according to the Euclidean distance relationship between the nearest three or more first feature descriptors and each second feature descriptor.
Specifically, determining the first matching degree according to this Euclidean distance relationship includes:
denoting the distances between each second feature descriptor and its corresponding first feature descriptors as the first Euclidean distance, the second Euclidean distance, ..., and the Nth Euclidean distance, with N >= 3;
if the differences between the first, second, ..., and Nth Euclidean distances lie within a preset range, computing the first matching degree Pmatch1 as

Pmatch1 = DEuclid1 + DEuclid2 + ... + DEuclidN

where DEuclid1 is the first Euclidean distance, DEuclid2 the second Euclidean distance, and DEuclidN the Nth Euclidean distance.
The second and third matching degrees may also be computed in the same way, which is not repeated here.
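The accumulation Pmatch1 = DEuclid1 + ... + DEuclidN can be sketched as follows; the nearest-neighbour search per stored descriptor and the `spread` check on the preset range are interpretations of the text, not code from the patent:

```python
import numpy as np

def first_matching_degree(first_descs, second_descs, n=3, spread=0.5):
    """For each stored (second) descriptor, take the n smallest Euclidean
    distances to the extracted (first) descriptors; if those n distances
    differ by no more than `spread`, add them into Pmatch1, following the
    formula Pmatch1 = D1 + D2 + ... + DN. Assumes len(first_descs) >= n."""
    p = 0.0
    for sd in second_descs:
        dists = np.sort(np.linalg.norm(first_descs - sd, axis=1))[:n]
        if dists[-1] - dists[0] <= spread:       # differences within preset range
            p += dists.sum()
        # otherwise this stored descriptor contributes nothing
    return p
```

Note that with this formula a smaller Pmatch1 corresponds to closer descriptors; how the sum is mapped onto a normalized matching degree is not specified in the text.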
In one embodiment, referring to Fig. 4, matching the user voice feature against the preset voice feature to obtain the fourth matching degree includes:
Step S401: preprocess the input voice to obtain the effective speech within it, where the voice includes training voice and voice to be identified.
In this embodiment, different speaker models are built by inputting a sufficient quantity of training voice. The training voice consists of labeled speech samples of known speaker identity, used to adjust the parameters of the speaker models through supervised learning until the required recognition performance is reached in practical applications.
When it must be judged which of several speakers produced a segment of speech, or whether a segment of speech was produced by a specified speaker, that segment is the voice to be identified. Training voice and voice to be identified serve different purposes and may or may not be the same data; when they are the same, the voice to be identified can be used to test the final speaker model, checking whether it accurately identifies the speaker.
The voice is preprocessed to reduce the background noise in each segment of continuous speech and to output effective speech with real analytical value. This provides a high signal-to-noise-ratio training set for subsequent speaker model training, speeding up training and yielding a more accurate model.
Referring to Fig. 5, step S401 may be realized by the following procedure:
Step S501: apply emphasis to the high-frequency components of each of the K input voice segments through a high-pass filter.
In this embodiment, to reduce the influence of lip radiation and make the high-frequency formants more prominent, each voice signal is passed through a high-pass filter that emphasizes its high-frequency components, making the spectrum of the signal smoother.
Step S502: select a preset number of sampling points to split each emphasized voice segment into frames, and multiply each frame by a preset window function to obtain short-time stationary signals. In this embodiment, the window function may be a Hanning window.
Step S503: in the short-time power spectrum envelope of the short-time stationary signals, choose a short-time energy decision threshold greater than a first threshold and make a first coarse decision: the endpoints of the effective speech signal lie outside the time interval bounded by the intersections of this threshold with the short-time energy envelope.
Step S504: choose a short-time energy decision threshold below a second threshold according to the average energy of the background noise, take the two intersections of the short-time energy envelope with this threshold as the start and end points of the effective speech signal, and extract and output the effective speech.
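Steps S501–S504 can be sketched as one small pipeline. The frame length, hop size, pre-emphasis factor, and the single relative energy threshold are illustrative assumptions (the patent describes two thresholds and a coarse-then-fine decision):

```python
import numpy as np

def endpoint_detect(signal, frame_len=256, hop=128, pre=0.97, thresh_ratio=0.1):
    """Pre-emphasise (first-order high-pass), split into Hanning-windowed
    frames, and return the (start, end) sample span whose short-time
    energy exceeds a threshold derived from the peak frame energy."""
    s = np.append(signal[0], signal[1:] - pre * signal[:-1])  # pre-emphasis
    win = np.hanning(frame_len)
    n = 1 + (len(s) - frame_len) // hop
    energy = np.array([np.sum((s[i*hop:i*hop + frame_len] * win) ** 2)
                       for i in range(n)])                    # short-time energy
    t = thresh_ratio * energy.max()
    active = np.where(energy > t)[0]
    if active.size == 0:
        return None                                           # no effective speech
    start = active[0] * hop
    end = active[-1] * hop + frame_len
    return start, end
```

On a signal that is silent except for a tone burst, the detected span brackets the burst to within about one frame on each side.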
Step S402: extract the mel-frequency cepstral coefficient (MFCC) acoustic features of the effective speech in the training voice, and output a first feature matrix containing the MFCC dimensions and the number of frames of each training voice segment.
The mel frequency scale, proposed on the basis of human auditory characteristics, has a nonlinear correspondence with the Hz frequency scale; this nonlinear relationship is used when computing the Hz spectral features.
The conversion between Hz frequency and mel frequency is: f_mel = 2595 * lg(1 + f_Hz / 700)
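The Hz-to-mel conversion above, together with its inverse, can be written directly; the filter-centre helper that places mel-filterbank centres evenly on the mel scale is an illustrative extension, not something stated in the patent:

```python
import math

def hz_to_mel(f_hz):
    """Mel value for a frequency in Hz, per f_mel = 2595 * lg(1 + f_Hz / 700)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(mel):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

def mel_filter_centres(n_filters, f_low, f_high):
    """Centre frequencies (Hz) of n_filters triangular filters whose
    centres are equally spaced on the mel scale between f_low and f_high."""
    lo, hi = hz_to_mel(f_low), hz_to_mel(f_high)
    mels = [lo + i * (hi - lo) / (n_filters + 1) for i in range(1, n_filters + 1)]
    return [mel_to_hz(m) for m in mels]
```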
Step S403: construct a long short-term memory (LSTM) recurrent neural network model, and input the first feature matrix into the neural network model to obtain its output parameters.
Step S404: using the output parameters of the neural network model and the speaker features corresponding to the training voice, train N feature extraction matrices for the N training voice segments, where each feature extraction matrix corresponds to the speaker model of one training voice segment.
Step S405: extract the MFCC acoustic features of the effective speech in the voice to be identified, and output a second feature matrix containing the MFCC dimensions and the number of frames of the voice to be identified.
Step S406: among the N speaker models, select the speaker model that matches the second feature matrix according to a preset similarity measurement algorithm, and determine the fourth matching degree according to the selected speaker model. Here K and N are integers greater than zero, and K is greater than N.
Similarity measurement algorithms, including but not limited to distance measures, similarity measures, and match measures, are used to quantify how close the second feature matrix is to a speaker model in terms of their formal feature characterization.
As another embodiment of the present invention, the speaker model matching the second feature matrix may also be obtained by the cosine measure within the class of similarity measures.
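A sketch of this cosine-measure selection follows; reducing the second feature matrix to a single vector by averaging over frames is an assumption, as the patent does not say how the matrix is compared to a model:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two non-zero vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_speaker(feature_matrix, speaker_models):
    """Average the per-frame features into one vector, then return the
    index and similarity of the closest speaker model by cosine measure."""
    vec = np.asarray(feature_matrix, float).mean(axis=0)
    sims = [cosine_similarity(vec, m) for m in speaker_models]
    best = int(np.argmax(sims))
    return best, sims[best]
```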
Step S104: determine the user's final matching degree from the first, second, third, and fourth matching degrees and the first, second, third, and fourth coefficients, and identify the user's identity according to the relationship between the final matching degree and the preset matching degree.
The preset matching degree may be configured according to actual conditions and is not limited here. For example, when the final matching degree is greater than the preset matching degree, identity recognition is considered to have passed; when the final matching degree is less than or equal to the preset matching degree, identity recognition fails.
In one embodiment, determining the user's final matching degree from the first through fourth matching degrees and the first through fourth coefficients includes:
the final matching degree is

Pmatch = Pmatch1*coefficient1 + Pmatch2*coefficient2 + Pmatch3*coefficient3 + Pmatch4*coefficient4

where coefficient1 is the first coefficient, coefficient2 the second coefficient, coefficient3 the third coefficient, and coefficient4 the fourth coefficient; Pmatch1 is the first matching degree, Pmatch2 the second matching degree, Pmatch3 the third matching degree, and Pmatch4 the fourth matching degree; and

coefficient1 + coefficient2 + coefficient3 + coefficient4 = 1.
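The weighted-sum fusion and the threshold decision can be written directly (the threshold value in the example below is illustrative):

```python
def final_matching_degree(matches, coefficients):
    """Pmatch = P1*c1 + P2*c2 + P3*c3 + P4*c4, with coefficients summing to 1."""
    assert abs(sum(coefficients) - 1.0) < 1e-9, "coefficients must sum to 1"
    return sum(m * c for m, c in zip(matches, coefficients))

def identify(matches, coefficients, preset_matching_degree):
    """Identity recognition passes only when the final matching degree
    exceeds the preset matching degree."""
    return final_matching_degree(matches, coefficients) > preset_matching_degree
```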
In another embodiment, determining the user's final matching degree from the first through fourth matching degrees and the first through fourth coefficients includes:
the final matching degree is:
where coefficient1 is the first coefficient, coefficient2 the second coefficient, coefficient3 the third coefficient, and coefficient4 the fourth coefficient; Pmatch1 is the first matching degree, Pmatch2 the second matching degree, Pmatch3 the third matching degree, and Pmatch4 the fourth matching degree; and
coefficient1 + coefficient2 + coefficient3 + coefficient4 = 1.
In addition, the above method may also include: determining, according to each user's historical matching degrees, the magnitude relationship between the first, second, third, and fourth coefficients, where the initial values of the four coefficients are identical.
Determining this magnitude relationship according to each user's historical matching degrees includes: if, in a user's historical matching degrees, the first matching degree is greater than any of the second through fourth matching degrees, increasing the first coefficient and decreasing the second through fourth coefficients.
Here, "the first matching degree is greater than any of the second through fourth matching degrees" specifically means that the average of the first matching degree is greater than each of the averages of the second through fourth matching degrees.
That is, the initial values of the first through fourth coefficients are set to be identical, and the coefficients are later adjusted according to the magnitude relationship between the matching degrees, increasing the coefficient of the feature with the higher matching degree, which can further improve recognition accuracy.
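The history-based adjustment can be sketched as follows; the step size and the rule of nudging only the single best modality (while keeping the coefficients summing to 1) are assumptions consistent with the text:

```python
def adjust_coefficients(coefficients, history, step=0.02):
    """history[i] is the list of past matching degrees for modality i
    (face, iris, fingerprint, voice). Raise the coefficient of the
    modality with the highest average matching degree and lower the
    others equally, keeping the total at 1."""
    averages = [sum(h) / len(h) for h in history]
    best = averages.index(max(averages))
    n = len(coefficients)
    adjusted = [c - step / (n - 1) for c in coefficients]  # lower the rest
    adjusted[best] = coefficients[best] + step             # raise the best
    return adjusted
```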
In an embodiment of the present invention, after the username and password input by the user are verified, the user's facial image, iris image, fingerprint image, and voice are collected; the user facial, iris, fingerprint, and voice features characterizing the user's identity are extracted from them; the user facial feature is matched against a preset facial feature to obtain the first matching degree, the user iris feature against a preset iris feature to obtain the second matching degree, the user fingerprint feature against a preset fingerprint feature to obtain the third matching degree, and the user voice feature against a preset voice feature to obtain the fourth matching degree; finally, the user's final matching degree is determined from the first through fourth matching degrees and the first through fourth coefficients, and the user's identity is identified according to the relationship between the final matching degree and the preset matching degree. The username and password can thus be combined with multiple biometric features to verify the user's identity, improving the accuracy of user identity verification.
It should be understood that the step numbers in the above embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
Corresponding to the user identity recognition method described in the foregoing embodiments, Fig. 6 shows a schematic diagram of a user identity recognition apparatus provided by an embodiment of the present invention; the apparatus is applied to a client. For ease of description, only the parts relevant to this embodiment are shown.
With reference to Fig. 6, the apparatus includes an acquisition and authentication module 601, an extraction module 602, a matching module 603 and an identification module 604.
The acquisition and authentication module 601 is configured to obtain the username and password input by the user and verify them, and, after the username and password input by the user are verified, to collect the facial picture, iris picture, fingerprint picture and voice of the user.
The extraction module 602 is configured to extract, from the facial picture, iris picture, fingerprint picture and voice respectively, a user facial feature, user iris feature, user fingerprint feature and user voice feature characterizing the user's identity; the user facial feature, user iris feature, user fingerprint feature and user voice feature correspond respectively to a first coefficient, a second coefficient, a third coefficient and a fourth coefficient.
The matching module 603 is configured to match the user facial feature against a preset facial feature to obtain a first matching degree, match the user iris feature against a preset iris feature to obtain a second matching degree, match the user fingerprint feature against a preset fingerprint feature to obtain a third matching degree, and match the user voice feature against a preset voice feature to obtain a fourth matching degree.
The identification module 604 is configured to determine the final matching degree of the user according to the first, second, third and fourth matching degrees together with the first, second, third and fourth coefficients, and to recognize the user's identity according to the relationship between the final matching degree and a preset matching degree.
Optionally, extracting the user facial feature characterizing the user's identity from the facial picture includes:
obtaining the user facial feature, constructing scale-space images characterizing the user facial feature, and detecting feature points in the scale-space images;
filtering and locating each feature point in the scale space to obtain stable feature points that satisfy a preset condition;
assigning a direction to each stable feature point and generating feature descriptors of the user facial feature.
Matching the user facial feature against the preset facial feature includes:
obtaining second feature descriptors of the preset facial feature;
obtaining, from the user facial feature, the three or more first feature descriptors nearest in Euclidean distance to the second feature descriptors;
determining the first matching degree between the user facial feature and the preset facial feature according to the Euclidean-distance relationship between each of the three or more nearest first feature descriptors and each second feature descriptor.
Optionally, determining the first matching degree between the user facial feature and the preset facial feature according to the Euclidean-distance relationship between each first feature descriptor and each second feature descriptor includes:
taking the distances between each first feature descriptor and the corresponding second feature descriptors as the first Euclidean distance, the second Euclidean distance, ..., the N-th Euclidean distance, with N >= 3;
if the differences between the first, second, ..., N-th Euclidean distances lie within a preset range, calculating the first matching degree Pmatch1 by
Pmatch1 = DEuclid1 + DEuclid2 + ... + DEuclidN
where DEuclid1 is the first Euclidean distance, DEuclid2 is the second Euclidean distance, and DEuclidN is the N-th Euclidean distance.
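The descriptor-matching step above can be sketched as follows; this is a minimal illustration, not the patented implementation, and the descriptor dimensionality, the preset range value and the choice of n = 3 nearest descriptors are assumptions:

```python
import math

def euclidean(a, b):
    # Euclidean distance between two feature descriptors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def first_matching_degree(first_descriptors, second_descriptor,
                          n=3, preset_range=1.0):
    # take the n first-feature descriptors nearest in Euclidean
    # distance to the preset (second) feature descriptor, n >= 3
    dists = sorted(euclidean(d, second_descriptor)
                   for d in first_descriptors)[:n]
    # accept only when the distances lie within the preset range
    if max(dists) - min(dists) > preset_range:
        return None
    # Pmatch1 = DEuclid1 + DEuclid2 + ... + DEuclidN
    return sum(dists)
```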
Optionally, matching the user voice feature against the preset voice feature to obtain the fourth matching degree includes:
preprocessing the input voice to obtain the effective speech in the voice, the voice including training voices and a voice to be recognized;
extracting Mel-frequency cepstral coefficient (MFCC) acoustic features of the effective speech in the training voices, and outputting a first feature matrix comprising the MFCC dimension and the number of frames of each training voice;
constructing a long short-term memory recurrent neural network model, and inputting the first feature matrix into the neural network model to obtain output parameters of the neural network model;
using the output parameters of the neural network model and the speaker features corresponding to the training voices, separately training N feature-extraction matrices for the N training voices, each feature-extraction matrix corresponding to the speaker model of one training voice;
extracting the MFCC acoustic features of the effective speech in the voice to be recognized, and outputting a second feature matrix comprising the MFCC dimension and the number of frames of the voice to be recognized;
among the N speaker models, selecting the speaker model that matches the second feature matrix according to a preset similarity-measurement algorithm, and determining the fourth matching degree from the selected speaker model;
where K and N are integers greater than zero and K is greater than N.
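The final selection step, scoring the second feature matrix against the N speaker models, might look like the sketch below. The patent only names a "preset similarity-measurement algorithm", so cosine similarity here is an assumption, as are the matrix shapes (the test matrix and each speaker matrix are taken to have identical shape):

```python
import numpy as np

def cosine_similarity(a, b):
    # flatten the feature matrices and compare their directions
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_speaker(second_feature_matrix, speaker_models):
    # speaker_models: N feature-extraction matrices, one per training voice
    scores = [cosine_similarity(second_feature_matrix, m)
              for m in speaker_models]
    best = int(np.argmax(scores))
    # the matching degree is taken from the best-scoring speaker model
    return best, scores[best]
```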
Optionally, preprocessing the input voice to obtain the effective speech in the voice includes:
applying pre-emphasis to the high-frequency components of each of the K input voices through a high-pass filter;
selecting a preset number of sampling points, dividing each pre-emphasized voice into frames, and multiplying each frame of the voice signal by a preset window function to obtain short-time stationary signals;
choosing, in the short-time power spectrum envelope corresponding to the short-time stationary signals, a short-time energy decision threshold greater than a first threshold, and performing a first coarse decision: the endpoints of the effective speech signal lie outside the time interval corresponding to the intersections of this short-time energy decision threshold with the short-time energy envelope;
choosing, according to the average energy of the background noise, a short-time energy decision threshold smaller than a second threshold, taking the two points where the short-time energy envelope of the voice intersects this threshold as the endpoints of the effective speech signal, and extracting and outputting the effective speech as the effective speech in the voice.
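The pre-emphasis, framing/windowing and energy-based endpoint steps above can be sketched as follows. This is a simplified single-threshold illustration: the frame length, hop size, pre-emphasis factor and threshold ratio are assumed values, and the two-threshold coarse/fine decision is collapsed into one pass:

```python
import numpy as np

def extract_effective_speech(signal, frame_len=256, hop=128,
                             alpha=0.97, threshold_ratio=0.1):
    # pre-emphasis: boost high-frequency components (high-pass filtering)
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # framing with a preset number of sampling points per frame,
    # each frame multiplied by a window function (Hamming here)
    n_frames = 1 + max(0, (len(emphasized) - frame_len) // hop)
    window = np.hamming(frame_len)
    frames = np.stack([emphasized[i * hop: i * hop + frame_len] * window
                       for i in range(n_frames)])
    # short-time energy envelope and an energy decision threshold
    energy = (frames ** 2).sum(axis=1)
    threshold = threshold_ratio * energy.max()
    active = np.flatnonzero(energy > threshold)
    # the crossings of the envelope with the threshold mark the
    # endpoints of the effective speech signal
    return frames[active[0]: active[-1] + 1] if active.size else frames[:0]
```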
As one embodiment, determining the final matching degree of the user according to the first, second, third and fourth matching degrees together with the first, second, third and fourth coefficients includes:
the final matching degree is
Pmatch = Pmatch1*coefficient1 + Pmatch2*coefficient2 + Pmatch3*coefficient3 + Pmatch4*coefficient4
where coefficient1 is the first coefficient, coefficient2 is the second coefficient, coefficient3 is the third coefficient, coefficient4 is the fourth coefficient, and
coefficient1 + coefficient2 + coefficient3 + coefficient4 = 1.
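The weighted combination above reads, in code form, as the direct transcription below; the example coefficient and matching-degree values in the usage note are illustrative only:

```python
def final_matching_degree(matching_degrees, coefficients):
    # matching_degrees: (Pmatch1, Pmatch2, Pmatch3, Pmatch4)
    # coefficients: (coefficient1, ..., coefficient4), summing to 1
    assert abs(sum(coefficients) - 1.0) < 1e-9
    return sum(p * c for p, c in zip(matching_degrees, coefficients))
```

With equal initial coefficients of 0.25 each, four matching degrees of 0.9, 0.8, 0.7 and 0.6 combine to a final matching degree of 0.75.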
As another embodiment, determining the final matching degree of the user according to the first, second, third and fourth matching degrees together with the first, second, third and fourth coefficients includes:
the final matching degree is:
where coefficient1 is the first coefficient, coefficient2 is the second coefficient, coefficient3 is the third coefficient, coefficient4 is the fourth coefficient, and
coefficient1 + coefficient2 + coefficient3 + coefficient4 = 1.
Optionally, the above user identity recognition apparatus may further include:
a coefficient adjustment module, configured to determine the magnitude relationship among the first, second, third and fourth coefficients according to the historical matching degrees of each user; the initial values of the first, second, third and fourth coefficients are identical.
Determining the magnitude relationship among the first, second, third and fourth coefficients according to the historical matching degrees of each user includes:
if, in the historical matching degrees of a user, the first matching degree is greater than any of the second to fourth matching degrees, increasing the first coefficient and decreasing the second to fourth coefficients.
Here, "the first matching degree is greater than any of the second to fourth matching degrees" specifically means that the average of the first matching degree is greater than any of the averages of the second to fourth matching degrees.
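The history-based adjustment might be sketched as below. The adjustment step size and the equal redistribution over the other three coefficients are assumptions; the patent only states that the first coefficient is increased and the others decreased, with all four starting equal and summing to 1:

```python
def adjust_coefficients(history, coefficients, step=0.05):
    # history: list of (P1, P2, P3, P4) tuples from past recognitions
    means = [sum(col) / len(col) for col in zip(*history)]
    # if the average first matching degree exceeds every other average,
    # increase the first coefficient and decrease the other three,
    # keeping the four coefficients summing to 1
    if means[0] > max(means[1:]):
        c1, c2, c3, c4 = coefficients
        return (c1 + step, c2 - step / 3, c3 - step / 3, c4 - step / 3)
    return tuple(coefficients)
```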
Fig. 7 is a schematic diagram of a terminal device provided by an embodiment of the present invention. As shown in Fig. 7, the terminal device 700 of this embodiment includes a processor 710, a memory 720, and a computer program 721 stored in the memory 720 and executable on the processor 710. When executing the computer program 721, the processor 710 implements the steps of each of the above method embodiments, for example steps 101 to 104 shown in Fig. 1; alternatively, when executing the computer program 721, the processor 710 implements the functions of each module in the above apparatus embodiments, for example the functions of modules 601 to 604 shown in Fig. 6.
Illustratively, the computer program 721 may be divided into one or more modules/units, which are stored in the memory 720 and executed by the processor 710 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 721 in the terminal device 700. For example, the computer program 721 may be divided into an acquisition and authentication module, an extraction module, a matching module and an identification module, whose specific functions are as follows:
the acquisition and authentication module, configured to obtain the username and password input by the user and verify them, and, after the username and password input by the user are verified, to collect the facial picture, iris picture, fingerprint picture and voice of the user;
the extraction module, configured to extract, from the facial picture, iris picture, fingerprint picture and voice respectively, a user facial feature, user iris feature, user fingerprint feature and user voice feature characterizing the user's identity, the user facial feature, user iris feature, user fingerprint feature and user voice feature corresponding respectively to a first coefficient, a second coefficient, a third coefficient and a fourth coefficient;
the matching module, configured to match the user facial feature against a preset facial feature to obtain a first matching degree, match the user iris feature against a preset iris feature to obtain a second matching degree, match the user fingerprint feature against a preset fingerprint feature to obtain a third matching degree, and match the user voice feature against a preset voice feature to obtain a fourth matching degree;
the identification module, configured to determine the final matching degree of the user according to the first, second, third and fourth matching degrees together with the first, second, third and fourth coefficients, and to recognize the user's identity according to the relationship between the final matching degree and a preset matching degree.
The terminal device 700 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a mobile phone. The terminal device may include, but is not limited to, the processor 710 and the memory 720. Those skilled in the art will understand that Fig. 7 is only an example of the terminal device 700 and does not constitute a limitation on it; the terminal device may include more or fewer components than illustrated, combine certain components, or have different components; for example, it may also include input/output devices, network access devices, buses, a display, and the like.
The processor 710 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 720 may be an internal storage unit of the terminal device 700, for example a hard disk or memory of the terminal device 700. The memory 720 may also be an external storage device fitted to the terminal device 700, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card. Further, the memory 720 may include both an internal storage unit of the terminal device 700 and an external storage device. The memory 720 is used to store the computer program and the other programs and data required by the terminal device; it may also be used to temporarily store data that has been output or is about to be output.
It is apparent to those skilled in the art that, for convenience and conciseness of description, the division into the above functional units and modules is given only as an example; in practical applications, the above functions may be allocated to different functional units and modules as needed, i.e., the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which will not be repeated here.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed or described in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are only illustrative; the division of the modules or units is only a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, and indirect couplings or communication connections between apparatuses or units may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the above embodiments of the present invention may also be completed by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, it can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and should all be included within the protection scope of the present invention.
Claims (10)
1. A user identity recognition method, characterized by comprising:
obtaining a username and password input by a user and verifying them, and, after the username and password input by the user are verified, collecting a facial picture, an iris picture, a fingerprint picture and a voice of the user;
extracting, from the facial picture, iris picture, fingerprint picture and voice respectively, a user facial feature, a user iris feature, a user fingerprint feature and a user voice feature characterizing the user's identity, wherein the user facial feature, user iris feature, user fingerprint feature and user voice feature correspond respectively to a first coefficient, a second coefficient, a third coefficient and a fourth coefficient;
matching the user facial feature against a preset facial feature to obtain a first matching degree, matching the user iris feature against a preset iris feature to obtain a second matching degree, matching the user fingerprint feature against a preset fingerprint feature to obtain a third matching degree, and matching the user voice feature against a preset voice feature to obtain a fourth matching degree;
determining a final matching degree of the user according to the first, second, third and fourth matching degrees together with the first, second, third and fourth coefficients, and recognizing the user's identity according to the relationship between the final matching degree and a preset matching degree.
2. The user identity recognition method according to claim 1, characterized in that extracting the user facial feature characterizing the user's identity from the facial picture comprises:
obtaining the user facial feature, constructing scale-space images characterizing the user facial feature, and detecting feature points in the scale-space images;
filtering and locating each feature point in the scale space to obtain stable feature points that satisfy a preset condition;
assigning a direction to each stable feature point and generating feature descriptors of the user facial feature;
and that matching the user facial feature against the preset facial feature to obtain the first matching degree comprises:
obtaining second feature descriptors of the preset facial feature;
obtaining, from the user facial feature, the three or more first feature descriptors nearest in Euclidean distance to the second feature descriptors;
determining the first matching degree between the user facial feature and the preset facial feature according to the Euclidean-distance relationship between each of the three or more nearest first feature descriptors and each second feature descriptor.
3. The user identity recognition method according to claim 2, characterized in that determining the first matching degree between the user facial feature and the preset facial feature according to the Euclidean-distance relationship between each of the three or more nearest first feature descriptors and each second feature descriptor comprises:
taking the distances between each first feature descriptor and the corresponding second feature descriptors as the first Euclidean distance, the second Euclidean distance, ..., the N-th Euclidean distance, with N >= 3;
if the differences between the first, second, ..., N-th Euclidean distances lie within a preset range, calculating the first matching degree Pmatch1 by
Pmatch1 = DEuclid1 + DEuclid2 + ... + DEuclidN
where DEuclid1 is the first Euclidean distance, DEuclid2 is the second Euclidean distance, and DEuclidN is the N-th Euclidean distance.
4. The user identity recognition method according to claim 1, characterized in that matching the user voice feature against the preset voice feature to obtain the fourth matching degree comprises:
preprocessing the input voice to obtain the effective speech in the voice, the voice comprising training voices and a voice to be recognized;
extracting Mel-frequency cepstral coefficient (MFCC) acoustic features of the effective speech in the training voices, and outputting a first feature matrix comprising the MFCC dimension and the number of frames of each training voice;
constructing a long short-term memory recurrent neural network model, and inputting the first feature matrix into the neural network model to obtain output parameters of the neural network model;
using the output parameters of the neural network model and the speaker features corresponding to the training voices, separately training N feature-extraction matrices for the N training voices, each feature-extraction matrix corresponding to the speaker model of one training voice;
extracting the MFCC acoustic features of the effective speech in the voice to be recognized, and outputting a second feature matrix comprising the MFCC dimension and the number of frames of the voice to be recognized;
among the N speaker models, selecting the speaker model matching the second feature matrix according to a preset similarity-measurement algorithm, and determining the fourth matching degree from the selected speaker model;
wherein K and N are integers greater than zero and K is greater than N.
5. The user identity recognition method according to claim 4, characterized in that preprocessing the input voice to obtain the effective speech in the voice comprises:
applying pre-emphasis to the high-frequency components of each of the K input voices through a high-pass filter;
selecting a preset number of sampling points, dividing each pre-emphasized voice into frames, and multiplying each frame of the voice signal by a preset window function to obtain short-time stationary signals;
choosing, in the short-time power spectrum envelope corresponding to the short-time stationary signals, a short-time energy decision threshold greater than a first threshold and performing a first coarse decision, the endpoints of the effective speech signal lying outside the time interval corresponding to the intersections of the short-time energy decision threshold with the short-time energy envelope;
choosing, according to the average energy of the background noise, a short-time energy decision threshold smaller than a second threshold, taking the two points where the short-time energy envelope of the voice intersects the short-time energy decision threshold as the endpoints of the effective speech signal, and extracting and outputting the effective speech as the effective speech in the voice.
6. The user identity recognition method according to any one of claims 1 to 5, characterized in that determining the final matching degree of the user according to the first, second, third and fourth matching degrees together with the first, second, third and fourth coefficients comprises:
the final matching degree is
Pmatch = Pmatch1*coefficient1 + Pmatch2*coefficient2 + Pmatch3*coefficient3 + Pmatch4*coefficient4
where coefficient1 is the first coefficient, coefficient2 is the second coefficient, coefficient3 is the third coefficient, coefficient4 is the fourth coefficient, Pmatch1 is the first matching degree, Pmatch2 is the second matching degree, Pmatch3 is the third matching degree, Pmatch4 is the fourth matching degree, and
coefficient1 + coefficient2 + coefficient3 + coefficient4 = 1.
7. The user identity recognition method according to any one of claims 1 to 5, characterized in that determining the final matching degree of the user according to the first, second, third and fourth matching degrees together with the first, second, third and fourth coefficients comprises:
the final matching degree is:
where coefficient1 is the first coefficient, coefficient2 is the second coefficient, coefficient3 is the third coefficient, coefficient4 is the fourth coefficient, Pmatch1 is the first matching degree, Pmatch2 is the second matching degree, Pmatch3 is the third matching degree, Pmatch4 is the fourth matching degree, and
coefficient1 + coefficient2 + coefficient3 + coefficient4 = 1.
8. The user identity recognition method according to claim 1, characterized by further comprising:
determining the magnitude relationship among the first, second, third and fourth coefficients according to the historical matching degrees of each user, wherein the initial values of the first, second, third and fourth coefficients are identical;
wherein determining the magnitude relationship among the first, second, third and fourth coefficients according to the historical matching degrees of each user comprises:
if, in the historical matching degrees of a user, the first matching degree is greater than any of the second to fourth matching degrees, increasing the first coefficient and decreasing the second to fourth coefficients;
wherein "the first matching degree is greater than any of the second to fourth matching degrees" specifically means that the average of the first matching degree is greater than any of the averages of the second to fourth matching degrees.
9. A user identity recognition apparatus, characterized by comprising:
an acquisition and authentication module, configured to obtain the username and password input by the user and verify them, and, after the username and password input by the user are verified, to collect the facial picture, iris picture, fingerprint picture and voice of the user;
a feature extraction module, configured to extract, from the facial picture, iris picture, fingerprint picture and voice respectively, a user facial feature, user iris feature, user fingerprint feature and user voice feature characterizing the user's identity, the user facial feature, user iris feature, user fingerprint feature and user voice feature corresponding respectively to a first coefficient, a second coefficient, a third coefficient and a fourth coefficient;
a matching module, configured to match the user facial feature against a preset facial feature to obtain a first matching degree, match the user iris feature against a preset iris feature to obtain a second matching degree, match the user fingerprint feature against a preset fingerprint feature to obtain a third matching degree, and match the user voice feature against a preset voice feature to obtain a fourth matching degree;
an identification module, configured to determine the final matching degree of the user according to the first, second, third and fourth matching degrees together with the first, second, third and fourth coefficients, and to recognize the user's identity according to the relationship between the final matching degree and a preset matching degree.
10. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 8.
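Claim 9 above describes fusing four biometric matching degrees (face, iris, fingerprint, voice) with four corresponding coefficients into a final matching degree, which is then compared against a preset matching degree. The claim does not fix the exact fusion formula; the sketch below assumes the common choice of a weighted sum, and all function names, weights and the threshold value are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of the fusion step from claim 9, assuming the "final
# matching degree" is a weighted sum of the per-modality matching degrees.
# Weights (coefficients) and the preset threshold are illustrative only.

def final_matching_degree(matching_degrees, coefficients):
    """Combine per-modality matching degrees with their coefficients."""
    if len(matching_degrees) != len(coefficients):
        raise ValueError("one coefficient is required per matching degree")
    return sum(m * c for m, c in zip(matching_degrees, coefficients))

def identify(matching_degrees, coefficients, preset_threshold=0.8):
    """Accept the user if the fused score reaches the preset matching degree."""
    return final_matching_degree(matching_degrees, coefficients) >= preset_threshold

# Example: first..fourth matching degrees (face, iris, fingerprint, voice)
# with coefficients chosen to sum to 1.
degrees = [0.92, 0.88, 0.95, 0.70]
weights = [0.30, 0.30, 0.25, 0.15]
print(identify(degrees, weights))  # fused score 0.8825 >= 0.8, prints True
```

In this sketch the coefficients play the role of per-modality reliability weights, so a strong fingerprint match can compensate for a weaker voice match while still clearing the preset matching degree.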
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810639690.3A CN108776795A (en) | 2018-06-20 | 2018-06-20 | Method for identifying ID, device and terminal device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810639690.3A CN108776795A (en) | 2018-06-20 | 2018-06-20 | Method for identifying ID, device and terminal device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108776795A true CN108776795A (en) | 2018-11-09 |
Family
ID=64025223
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810639690.3A Pending CN108776795A (en) | 2018-06-20 | 2018-06-20 | Method for identifying ID, device and terminal device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108776795A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104598795A (en) * | 2015-01-30 | 2015-05-06 | 科大讯飞股份有限公司 | Authentication method and system |
CN107610707A (en) * | 2016-12-15 | 2018-01-19 | 平安科技(深圳)有限公司 | A kind of method for recognizing sound-groove and device |
CN107680597A (en) * | 2017-10-23 | 2018-02-09 | 平安科技(深圳)有限公司 | Audio recognition method, device, equipment and computer-readable recording medium |
CN107688824A (en) * | 2017-07-27 | 2018-02-13 | 平安科技(深圳)有限公司 | Picture match method and terminal device |
- 2018-06-20: CN201810639690.3A filed in China (CN), published as CN108776795A, status Pending
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109598223A (en) * | 2018-11-26 | 2019-04-09 | 北京洛必达科技有限公司 | Method and apparatus based on video acquisition target person |
CN109682676A (en) * | 2018-12-29 | 2019-04-26 | 上海工程技术大学 | A kind of feature extracting method of the acoustic emission signal of fiber tension failure |
CN110349312A (en) * | 2019-07-09 | 2019-10-18 | 江苏万贝科技有限公司 | A kind of intelligent peephole voice reminder identifying system and its method based on household |
CN110349312B (en) * | 2019-07-09 | 2021-09-17 | 江苏万贝科技有限公司 | Household-based intelligent cat eye voice reminding and recognition system and method |
CN110600041A (en) * | 2019-07-29 | 2019-12-20 | 华为技术有限公司 | Voiceprint recognition method and device |
CN110600041B (en) * | 2019-07-29 | 2022-04-29 | 华为技术有限公司 | Voiceprint recognition method and device |
CN110619201A (en) * | 2019-08-01 | 2019-12-27 | 努比亚技术有限公司 | Terminal control method, terminal and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108776795A (en) | Method for identifying ID, device and terminal device | |
CN109166586B (en) | Speaker identification method and terminal | |
TWI641965B (en) | Method and system of authentication based on voiceprint recognition | |
CN107610707B (en) | A kind of method for recognizing sound-groove and device | |
CN101894548B (en) | Modeling method and modeling device for language identification | |
CN108281146B (en) | Short voice speaker identification method and device | |
CN110956966B (en) | Voiceprint authentication method, voiceprint authentication device, voiceprint authentication medium and electronic equipment | |
CN113223536B (en) | Voiceprint recognition method and device and terminal equipment | |
WO2019136912A1 (en) | Electronic device, identity authentication method and system, and storage medium | |
WO2019237519A1 (en) | General vector training method, voice clustering method, apparatus, device and medium | |
CN110265035B (en) | Speaker recognition method based on deep learning | |
CN110929836B (en) | Neural network training and image processing method and device, electronic equipment and medium | |
WO2019237518A1 (en) | Model library establishment method, voice recognition method and apparatus, and device and medium | |
WO2019232826A1 (en) | I-vector extraction method, speaker recognition method and apparatus, device, and medium | |
WO2022141868A1 (en) | Method and apparatus for extracting speech features, terminal, and storage medium | |
CN106971724A (en) | A kind of anti-tampering method for recognizing sound-groove and system | |
CN110136726A (en) | A kind of estimation method, device, system and the storage medium of voice gender | |
CN111667839A (en) | Registration method and apparatus, speaker recognition method and apparatus | |
CN108880815A (en) | Auth method, device and system | |
JP6750048B2 (en) | Method and apparatus for speech recognition | |
Cuzzocrea et al. | Dempster-Shafer-based fusion of multi-modal biometrics for supporting identity verification effectively and efficiently | |
CN110226201A (en) | The voice recognition indicated using the period | |
CN112309404B (en) | Machine voice authentication method, device, equipment and storage medium | |
CN115472179A (en) | Automatic detection method and system for digital audio deletion and insertion tampering operation | |
CN111524524B (en) | Voiceprint recognition method, voiceprint recognition device, voiceprint recognition equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181109 |