CN109726538B - Mobile intelligent terminal for voiceprint recognition unlocking and method thereof - Google Patents

Mobile intelligent terminal for voiceprint recognition unlocking and method thereof

Info

Publication number
CN109726538B
CN109726538B (application CN201910026378.1A)
Authority
CN
China
Prior art keywords
voice
user
data
model
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910026378.1A
Other languages
Chinese (zh)
Other versions
CN109726538A (en)
Inventor
李庆湧 (Li Qingyong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tongchuang Technology Co., Ltd.
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201910026378.1A
Publication of CN109726538A
Application granted
Publication of CN109726538B


Abstract

A mobile intelligent terminal unlocked by voiceprint recognition. The terminal comprises a main processor, a slave processor, a computing module, a primary bus, a secondary bus, a voice module, a touch screen, a FLASH memory and an SDRAM memory. The voice module, the touch screen, the FLASH memory and the SDRAM memory exchange data with the main processor and the computing module over the secondary bus; the slave processor exchanges data over the primary bus; and a buffer connects the primary bus and the secondary bus.

Description

Mobile intelligent terminal for voiceprint recognition unlocking and method thereof
Technical Field
The invention belongs to the field of mobile intelligent terminals, and particularly relates to a mobile intelligent terminal for voiceprint recognition unlocking and a method thereof.
Background
The lock screen is the first line of defense for device security. Unlocking with a numeric or pattern password is easy to snoop on or brute-force, so its security is low. To improve device security, biometric identification has also gradually been applied to lock-screen authentication on mobile devices, for example fingerprint recognition and face recognition. Fingerprint recognition has a high success rate and fast recognition speed, but it requires a dedicated fingerprint-capture hardware module, so its cost is high. Face recognition extracts facial features with the camera at low cost, but it is strongly affected by external factors, its success rate is low, and its recognition algorithm is very complex.
Voiceprint recognition is a biometric identification technology, like the widely known fingerprint and iris recognition. Compared with other biometric technologies it offers easy data acquisition, low cost and remote operability, which gives it advantages the others lack in many special environments.
However, current voiceprint recognition suffers from slow computation, long recognition time and a low recognition rate, so it cannot unlock an intelligent terminal quickly.
Disclosure of Invention
The invention provides a mobile intelligent terminal unlocked by voiceprint recognition, and a method thereof, to solve the technical problem of unlocking an intelligent terminal accurately and quickly through a voiceprint.
The technical scheme of the invention is as follows. A mobile intelligent terminal unlocked by voiceprint recognition comprises a main processor, a slave processor, a computing module, a primary bus, a secondary bus, a voice module, a touch screen, a FLASH memory and an SDRAM memory. The voice module, the touch screen, the FLASH memory and the SDRAM memory exchange data with the main processor and the computing module over the secondary bus; the slave processor exchanges data over the primary bus; and a buffer connects the primary bus and the secondary bus.
The voice module acquires and converts voice. It comprises a microphone, an A/D converter, an encoder and a clock generator; the clock generator produces the clock that sets the sampling rate of the A/D converter, and the encoder supplies the acquired audio data to the main processor and the slave processor for processing.
The slave processor performs data segmentation and feature extraction: after endpoint detection it removes the silent sections between voice sections to obtain effective voice sections, continuing until enough voice data has been collected, and then derives information representing the user's features through a feature extraction algorithm.
The main processor generates the voiceprint model.
The computing module performs voiceprint recognition: after voice feature extraction is complete it carries out pattern matching and judges, according to preset parameters, whether the tester is a legitimate user.
A method for unlocking a mobile intelligent terminal by voiceprint recognition comprises the following steps:
step 1, initializing an intelligent terminal system;
step 2, the touch screen displays the selection of the working mode, and the working mode comprises the following steps: a training mode, an identification mode and a management mode, if the training mode is selected, entering the step A, if the identification mode is selected, entering the step B, if the management mode is selected, entering the step C, and if the training mode, the identification mode and the management mode are not selected, entering the step 3 and ending;
step A, the training mode comprises the following steps:
step A1, displaying and prompting a user to input a password to enter a training mode, if the password is correct, entering step A2, otherwise entering step 3 and ending;
step A2, prompting the user to speak a specific voice (such as a user name) through a voice module;
step A3, a voice module is started to collect user voice;
step A4, the slave processor performs data segmentation and extracts effective voice segments; if extraction fails, go to step A3, otherwise go to step A5;
step A5, the slave processor extracts features through the feature extraction algorithm;
step A6, the main processor generates a comparison model and stores the comparison model in a FLASH memory;
step A7, prompting the user whether to continue training, if yes, entering step A2, otherwise entering step 2;
step B, the identification mode comprises the following steps:
step B1, displaying and prompting the user to speak a specific voice (such as a user name) through the voice module by the touch screen;
step B2, starting a voice module and collecting the voice of the user;
step B3, the slave processor performs data segmentation and extracts voice segments; if extraction fails, go to step B2, otherwise go to step B4;
step B4, the slave processor extracts features through the feature extraction algorithm;
step B5, the main processor generates a user model;
step B6, the computing module compares the user model with the comparison model; if the comparison result meets the threshold, the touch screen displays that recognition succeeded and the method goes to step B7; otherwise the touch screen displays that recognition failed and asks the user whether to input voice again, going to step B1 if so, otherwise to step 3 and ending;
step B7, unlock the screen and enter the system applications;
step C, the management mode comprises the following steps:
step C1, the touch screen prompts the user to input the password to enter the management mode or to register a new user; if the input password is correct, the user clicks confirm and the method goes to step C2, otherwise it goes to step 3; if registering a new user is selected, the user can add the user information;
step C2, the touch screen displays the current user information and the comparison model, and the user can modify the user information and delete the comparison model;
wherein there may be more than one comparison model.
Step C3, after modification is finished, the user confirms and the method returns to step 2;
Step 3, end.
The invention has the beneficial effects that:
(1) the dual-processor, two-level-bus structure lets voice data be processed in parallel, which raises processing speed;
(2) several working modes are provided for the user to choose, user information and voiceprint models can be managed, and there is a complete security password-verification procedure;
(3) detailed algorithms are given for data segmentation, feature extraction, model generation and model comparison, ensuring fast and accurate voiceprint recognition.
Drawings
FIG. 1 is a block diagram showing the configuration of a voiceprint recognition portion of an intelligent terminal according to the present invention;
FIG. 2 is a flowchart of voiceprint recognition in accordance with the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings.
A mobile intelligent terminal unlocked by voiceprint recognition comprises a main processor, a slave processor, a computing module, a primary bus, a secondary bus, a voice module, a touch screen, a FLASH memory and an SDRAM memory. The voice module, the touch screen, the FLASH memory and the SDRAM memory exchange data with the main processor and the computing module over the secondary bus; the slave processor exchanges data over the primary bus; and a buffer connects the primary bus and the secondary bus.
The voice module acquires and converts voice. It comprises a microphone, an A/D converter, an encoder and a clock generator; the clock generator produces the clock that sets the sampling rate of the A/D converter, and the encoder supplies the acquired audio data to the main processor and the slave processor for processing.
the FLASH memory is used for storing user information, a voiceprint model and a software program;
The buffer realizes alternate reading and writing of data. It comprises two RAMs, RAM1 and RAM2, with storage areas of equal size that serve as storage for the data obtained after serial-to-parallel conversion, and it lets the slave processor read the two voice-data areas in a time-shared manner. The first batch of acquired voice data is stored into RAM1; while the slave processor reads the data in RAM1, the second acquisition starts and is stored into RAM2. By the time the slave processor has finished reading RAM1, the voice data in RAM2 has also been stored, so the slave processor reads RAM2 while the third acquisition starts and is stored into RAM1. This alternating process repeats until voice acquisition is finished.
This is a two-stage pipelined data-processing mode, which clearly benefits efficient execution of the system's algorithms.
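The alternating RAM1/RAM2 scheme above can be sketched in a few lines of Python. This is a minimal single-threaded illustration only; the names (`PingPongBuffer`, `process_stream`) are invented for the example and do not appear in the patent, which realizes the scheme in hardware with the slave processor reading one RAM while acquisition fills the other.

```python
class PingPongBuffer:
    """Two equal-size buffers: one is written while the other is read."""
    def __init__(self):
        self.rams = [[], []]   # stand-ins for RAM1 and RAM2
        self.write_idx = 0     # buffer currently being filled

    def acquire(self, samples):
        """Store one acquisition burst into the current write buffer."""
        self.rams[self.write_idx] = list(samples)

    def swap(self):
        """Hand the filled buffer to the reader; fill the other one next."""
        read_idx = self.write_idx
        self.write_idx ^= 1
        return self.rams[read_idx]

def process_stream(bursts):
    """Alternate acquisition and reading, as in the RAM1/RAM2 pipeline."""
    buf = PingPongBuffer()
    processed = []
    for burst in bursts:              # acquisition N fills one RAM while ...
        buf.acquire(burst)
        processed.extend(buf.swap())  # ... the reader drains the other
    return processed

print(process_stream([[1, 2], [3, 4], [5, 6]]))  # [1, 2, 3, 4, 5, 6]
```

In the real device the two halves run concurrently; here the swap simply models the hand-over point between the two pipeline stages.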
The slave processor performs data segmentation and feature extraction: after endpoint detection it removes the silent sections between voice sections to obtain effective voice sections, continuing until enough voice data has been collected, and then derives information representing the user's features through a feature extraction algorithm.
The main processor generates the voiceprint model. This is the most time-consuming step in the system flow, and the performance of the voiceprint model determines the performance of the recognition system, so model generation is the most important part of the whole system.
The computing module performs voiceprint recognition: after voice feature extraction is complete it carries out pattern matching and judges, according to preset parameters, whether the tester is a legitimate user.
The touch screen handles user-information operations (registration, deletion and modification), voiceprint-model operations (writing and deletion) and display of the recognition result, i.e. presenting to the user, as images and text, the conclusion on whether he or she is a legitimate user.
A method for unlocking a mobile intelligent terminal by voiceprint recognition comprises the following steps:
step 1, initializing an intelligent terminal system;
step 2, the touch screen displays the selection of the working mode, and the working mode comprises the following steps: a training mode, an identification mode and a management mode, if the training mode is selected, entering the step A, if the identification mode is selected, entering the step B, if the management mode is selected, entering the step C, and if the training mode, the identification mode and the management mode are not selected, entering the step 3 and ending;
step A, the training mode comprises the following steps:
step A1, displaying and prompting a user to input a password to enter a training mode, if the password is correct, entering step A2, otherwise entering step 3 and ending;
step A2, prompting the user to speak a specific voice (such as a user name) through a voice module;
step A3, a voice module is started to collect user voice;
step A4, the slave processor performs data segmentation and extracts effective voice segments; if extraction fails, go to step A3, otherwise go to step A5;
step A5, the slave processor extracts features through the feature extraction algorithm;
step A6, the main processor generates a comparison model and stores the comparison model in a FLASH memory;
step A7, prompting the user whether to continue training, if yes, entering step A2, otherwise entering step 2;
step B, the identification mode comprises the following steps:
step B1, displaying and prompting the user to speak a specific voice (such as a user name) through the voice module by the touch screen;
step B2, starting a voice module and collecting the voice of the user;
step B3, the slave processor performs data segmentation and extracts voice segments; if extraction fails, go to step B2, otherwise go to step B4;
step B4, the slave processor extracts features through the feature extraction algorithm;
step B5, the main processor generates a user model;
step B6, the computing module compares the user model with the comparison model; if the comparison result meets the threshold, the touch screen displays that recognition succeeded and the method goes to step B7; otherwise the touch screen displays that recognition failed and asks the user whether to input voice again, going to step B1 if so, otherwise to step 3 and ending;
step B7, unlock the screen and enter the system applications;
step C, the management mode comprises the following steps:
step C1, the touch screen prompts the user to input the password to enter the management mode or to register a new user; if the input password is correct, the user clicks confirm and the method goes to step C2, otherwise it goes to step 3; if registering a new user is selected, the user can add the user information;
step C2, the touch screen displays the current user information and the comparison model, and the user can modify the user information and delete the comparison model;
wherein there may be more than one comparison model.
Step C3, after modification is finished, the user confirms and the method returns to step 2;
Step 3, end.
The data segmentation comprises the following steps:
step D1, divide the voice data into sections according to the sampling period, and compute the short-time energy and zero-crossing rate of each section;
step D2, classify the voice data into silence, transition, voice and end sections according to the short-time energy, the short-time zero-crossing rate, a low threshold and a high threshold: when the short-time energy or zero-crossing rate of a section rises above the low threshold, the silence section ends and the transition section begins; when it exceeds the high threshold, the transition section ends and the voice section begins; and when it falls back below the low threshold, the voice section ends and the end section begins;
step D3, extract the voice sections.
The short-time energy E_n is: E_n = x^2(n) * h(n), where x(n) is the voice signal, h(n) is the unit impulse response of the filter, * denotes convolution, and n is discrete time.
The zero-crossing rate Z_n is:
Z_n = (1/2) * Σ_m | sgn[x(m)] − sgn[x(m−1)] | * w(n − m),
where sgn[·] is the sign function:
sgn[x] = 1 for x ≥ 0, and sgn[x] = −1 for x < 0,
and w(·) is a Hamming window function and m is discrete time.
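The endpoint-detection quantities above can be illustrated with a short sketch. Assumptions: a rectangular window is used instead of the Hamming-weighted forms of the formulas, the thresholds are illustrative, and the helper names (`extract_voice_frames`, etc.) are invented for this example rather than taken from the patent.

```python
def sgn(x):
    """Sign function as defined above: 1 for x >= 0, -1 for x < 0."""
    return 1 if x >= 0 else -1

def short_time_energy(frame):
    """Sum of squared samples (rectangular-window simplification)."""
    return sum(s * s for s in frame)

def zero_crossing_rate(frame):
    """Z = (1/2) * sum |sgn[x(m)] - sgn[x(m-1)]| over the frame."""
    return 0.5 * sum(abs(sgn(frame[m]) - sgn(frame[m - 1]))
                     for m in range(1, len(frame)))

def extract_voice_frames(frames, low, high):
    """Keep frames from the first one exceeding the high threshold
    (voice section) until energy and ZCR fall below the low threshold
    (end section), as in steps D2-D3."""
    voiced, in_voice = [], False
    for f in frames:
        e, z = short_time_energy(f), zero_crossing_rate(f)
        if not in_voice and (e > high or z > high):
            in_voice = True                      # voice section begins
        elif in_voice and e < low and z < low:
            break                                # end section reached
        if in_voice:
            voiced.append(f)
    return voiced
```

A full implementation would also track the transition section between the low and high thresholds; this sketch collapses it into the voice-section test.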
The feature extraction algorithm comprises the following steps:
step E1, frame the voice data;
step E2, window the voice data frame by frame;
step E3, apply a fast Fourier transform to obtain the spectrum of the voice data;
step E4, take the modulus to obtain the magnitude spectrum;
step E5, apply Mel filtering to the squared spectrum, then take the logarithm;
step E6, apply a discrete cosine transform to obtain the Mel cepstrum coefficients, which serve as the characteristic parameters of the voice data.
The period used is 20.
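Steps E1-E6 can be sketched for a single frame as follows. This is an illustrative toy, not the patent's algorithm: the DFT is a naive O(N^2) standard-library implementation, and the filterbank simply splits the spectrum into equal bins rather than true Mel-spaced triangular filters (a real system would use roughly 26 Mel bands); the function names are invented for the example.

```python
import math, cmath

def dft_magnitude(x):
    """Magnitude of the one-sided DFT (stand-in for the FFT of step E3)."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2 + 1)]

def mfcc_frame(frame, n_bands=8, n_coeffs=4):
    N = len(frame)
    # E2: Hamming window
    windowed = [frame[n] * (0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1)))
                for n in range(N)]
    # E3 + E4: spectrum and modulus; square it for E5
    power = [m * m for m in dft_magnitude(windowed)]
    # E5 (toy): average the power spectrum into n_bands equal bins, then log
    size = len(power) // n_bands
    energies = [sum(power[b * size:(b + 1) * size]) / size
                for b in range(n_bands)]
    log_e = [math.log(e + 1e-10) for e in energies]
    # E6: DCT-II of the log band energies -> cepstral coefficients
    return [sum(log_e[k] * math.cos(math.pi * i * (2 * k + 1) / (2 * n_bands))
                for k in range(n_bands))
            for i in range(n_coeffs)]
```

The returned coefficients play the role of the frame's characteristic parameters; stacking them over all frames gives the feature sequence used for model generation and matching.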
The process of generating the comparison model or the user model is as follows:
step F1, generate the initial population: set the generation count to 0 and generate the initial population from the characteristic parameters of the voice data;
step F2, selection operator: the fittest 10% of individuals are preserved directly, without crossover or mutation; the remaining 90% of individuals are retained according to their respective selection probabilities;
step F3, crossover operation: after the distance between two individuals is determined, the first crossover point is obtained arbitrarily according to the crossover probability, and the second point is obtained at the fixed distance;
step F4, mutation operation: mutate all genes contained in an individual according to that individual's mutation probability;
step F5, compute and evaluate the fitness of the individuals in the population, including the fitness of the new individuals obtained after encoding;
step F6, check whether an exit condition is met; the exit conditions include the generation count reaching saturation, i.e. the preset maximum; if no exit condition is satisfied, increment the generation count by 1 and go to step F7;
step F7, if K-means clustering has not yet been executed, go to step F8, otherwise go to step F2;
step F8, perform K-means clustering: run one clustering pass over all individuals in the population using the K-means algorithm; once the cluster centers are determined, classify individuals by proximity, then encode them with the corresponding coding technique to update the chromosomes, and go to step F2.
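A toy version of the F1-F8 loop (genetic algorithm with 10% elitism, crossover, mutation, and periodic K-means refinement) might look like the following. All names and parameters are invented for illustration; individuals here are 1-D codebooks and fitness is the negative quantization error, since the patent does not specify the encoding.

```python
import random

def fitness(ind, data):
    """Negative quantization error of codebook `ind` against the data."""
    return -sum(min((x - c) ** 2 for c in ind) for x in data)

def kmeans_step(ind, data):
    """F8: one K-means pass, moving each center to its cluster mean."""
    clusters = [[] for _ in ind]
    for x in data:
        i = min(range(len(ind)), key=lambda j: (x - ind[j]) ** 2)
        clusters[i].append(x)
    return [sum(c) / len(c) if c else ind[i] for i, c in enumerate(clusters)]

def evolve(data, pop_size=20, k=2, max_gen=30, seed=0):
    rng = random.Random(seed)
    # F1: initial population seeded from the feature data
    pop = [[rng.choice(data) + rng.gauss(0, 0.1) for _ in range(k)]
           for _ in range(pop_size)]
    for gen in range(max_gen):               # F6 exit: preset maximum
        pop.sort(key=lambda ind: fitness(ind, data), reverse=True)
        elite = pop[:max(1, pop_size // 10)]  # F2: keep top 10% unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)
            cut = rng.randrange(1, k) if k > 1 else 0   # F3: crossover
            child = a[:cut] + b[cut:]
            child = [c + rng.gauss(0, 0.05) for c in child]  # F4: mutation
            children.append(child)
        pop = elite + children
        if gen % 5 == 4:                     # F7/F8: periodic K-means update
            pop = [kmeans_step(ind, data) for ind in pop]
    return max(pop, key=lambda ind: fitness(ind, data))   # F5: best model
```

Usage: `evolve([0.1, 0.2, -0.1, 9.9, 10.1, 10.0])` converges to two centers near 0 and 10, i.e. a 2-entry codebook for the toy data.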
The process of comparing the user model with the comparison model is as follows:
step G1, compute the Euclidean distance between each frame of the comparison model and each frame of the user model to obtain a frame matching-distance matrix, and build a grid: the vertical axis carries the comparison-model frame data R(Y), Y = 1..M, the horizontal axis carries the user-model frame data T(X), X = 1..N, and each grid intersection (T(X), R(Y)) represents a pairing of the comparison model and the user model, where M and N are the total frame counts of the comparison model and the user model;
step G2, find the best path in the matching-distance matrix, where the points (T(X-1), R(Y)), (T(X-1), R(Y-1)) and (T(X), R(Y-1)) are the predecessor grid points of the point (T(X), R(Y)), and the accumulated path distance at (T(X), R(Y)) is:
D(T(X),R(Y))=d(T(X),R(Y))+min{D(T(X-1),R(Y)),D(T(X-1),R(Y-1)),D(T(X),R(Y-1))}。
where D(T(X), R(Y)) is the accumulated distance from the point (1,1), with D(1,1) = 0; the search starts from the point (1,1) and the recursion is applied up to the point (N, M), whose value D(N, M) is the minimum matching distance D_MIN(N, M). The comparison result meeting the threshold means that the minimum matching distance D_MIN(N, M) is less than the threshold.
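The G1-G2 matching can be sketched directly from the recursion above. The function names (`dtw_distance`, `is_match`) are illustrative; the accumulated distance follows D(T(X),R(Y)) = d(T(X),R(Y)) + min{D(T(X-1),R(Y)), D(T(X-1),R(Y-1)), D(T(X),R(Y-1))} with D(1,1) = 0.

```python
import math

def frame_dist(a, b):
    """G1: Euclidean distance between two feature frames."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def dtw_distance(template, sample):
    """G2: accumulated-distance recursion over the matching grid."""
    M, N = len(template), len(sample)
    INF = float("inf")
    D = [[INF] * (M + 1) for _ in range(N + 1)]
    D[0][0] = 0.0                     # D(1,1) = 0 in the patent's indexing
    for x in range(1, N + 1):
        for y in range(1, M + 1):
            d = frame_dist(sample[x - 1], template[y - 1])
            D[x][y] = d + min(D[x - 1][y],      # (T(X-1), R(Y))
                              D[x - 1][y - 1],  # (T(X-1), R(Y-1))
                              D[x][y - 1])      # (T(X),   R(Y-1))
    return D[N][M]                    # minimum matching distance D_MIN(N, M)

def is_match(template, sample, threshold):
    """Step B6: recognition succeeds when D_MIN(N, M) < threshold."""
    return dtw_distance(template, sample) < threshold
```

Identical frame sequences give a distance of 0, and the distance grows with the mismatch, so the threshold directly trades false accepts against false rejects.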
The embodiment described above represents only one embodiment of the invention and is not to be construed as limiting its scope. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention.

Claims (9)

1. A mobile intelligent terminal unlocked by voiceprint recognition, characterized in that: the terminal comprises a main processor, a slave processor, a computing module, a primary bus, a secondary bus, a voice module, a touch screen, a FLASH memory and an SDRAM memory; the voice module, the touch screen, the FLASH memory and the SDRAM memory exchange data with the main processor and the computing module over the secondary bus; the slave processor exchanges data over the primary bus; and a buffer connects the primary bus and the secondary bus;
the voice module is used for acquiring and converting voice and comprises a microphone, an A/D converter, an encoder and a clock generator, wherein the clock generator generates clock frequency so as to control the sampling rate of the A/D converter, and the encoder provides acquired audio data to the master processor and the slave processor for processing;
the buffer realizes alternate reading and writing of data; it comprises two RAMs, RAM1 and RAM2, with storage areas of equal size that serve as storage for the data obtained after serial-to-parallel conversion, and it lets the slave processor read the two voice-data areas in a time-shared manner: the first batch of acquired voice data is stored into RAM1; while the slave processor reads the data in RAM1, the second acquisition starts and is stored into RAM2; by the time the slave processor has finished reading RAM1, the voice data in RAM2 has also been stored, so the slave processor reads RAM2 while the third acquisition starts and is stored into RAM1; this alternating process repeats until voice acquisition is finished;
the slave processor performs data segmentation and feature extraction: after endpoint detection it removes the silent sections between voice sections to obtain effective voice sections, continuing until enough voice data has been collected, and then derives information representing the user's features through a feature extraction algorithm;
the main processor is used for generating a voiceprint model;
and the calculation module is used for comparing the generated voiceprint model with the comparison model, if the comparison result meets a threshold value, the touch screen displays that the identification is successful, otherwise, the touch screen displays that the identification is failed.
2. The voiceprint-recognition-unlocked mobile intelligent terminal according to claim 1, characterized in that: the FLASH memory stores the user information, the voiceprint model and the software program; the touch screen handles user-information operations (registration, deletion and modification), voiceprint-model operations (writing and deletion) and display of the recognition result, i.e. presenting to the user, as images and text, the conclusion on whether he or she is a legitimate user.
3. A method for unlocking a mobile intelligent terminal according to any one of claims 1-2 by voiceprint recognition, which is characterized by comprising the following steps:
step 1, initializing an intelligent terminal system;
step 2, the touch screen displays the selection of the working mode, and the working mode comprises the following steps: a training mode, an identification mode and a management mode, if the training mode is selected, entering the step A, if the identification mode is selected, entering the step B, if the management mode is selected, entering the step C, and if the training mode, the identification mode and the management mode are not selected, entering the step 3 and ending;
step A, the training mode comprises the following steps:
step A1, displaying and prompting a user to input a password to enter a training mode, if the password is correct, entering step A2, otherwise entering step 3 and ending;
step A2, prompting a user to speak a specific voice through a voice module;
step A3, a voice module is started to collect user voice;
step A4, the slave processor performs data segmentation and extracts effective voice segments; if extraction fails, go to step A3, otherwise go to step A5;
step A5, the slave processor extracts features through the feature extraction algorithm;
step A6, the main processor generates a comparison model and stores the comparison model in a FLASH memory;
step A7, prompting the user whether to continue training, if yes, entering step A2, otherwise entering step 2;
step B, the identification mode comprises the following steps:
step B1, displaying and prompting the user to speak out a specific voice through the voice module by the touch screen;
step B2, starting a voice module and collecting the voice of the user;
step B3, the slave processor performs data segmentation and extracts voice segments; if extraction fails, go to step B2, otherwise go to step B4;
step B4, the slave processor extracts features through the feature extraction algorithm;
step B5, the main processor generates a user model;
step B6, the computing module compares the user model with the comparison model; if the comparison result meets the threshold, the touch screen displays that recognition succeeded and the method goes to step B7; otherwise the touch screen displays that recognition failed and asks the user whether to input voice again, going to step B1 if so, otherwise to step 3 and ending;
step B7, unlock the screen and enter the system applications;
step C, the management mode comprises the following steps:
step C1, the touch screen prompts the user to input the password to enter the management mode or to register a new user; if the input password is correct, the user clicks confirm and the method goes to step C2, otherwise it goes to step 3; if registering a new user is selected, the user can add the user information;
step C2, the touch screen displays the current user information and the comparison model, and the user can modify the user information and delete the comparison model;
step C3, after the modification is finished, confirming by the user, and entering the step 2;
step 3, end.
4. The method according to claim 3, wherein said data segmentation comprises the steps of:
step D1, divide the voice data into sections according to the sampling period, and compute the short-time energy and zero-crossing rate of each section;
step D2, classify the voice data into silence, transition, voice and end sections according to the short-time energy, the short-time zero-crossing rate, a low threshold and a high threshold: when the short-time energy or zero-crossing rate of a section rises above the low threshold, the silence section ends and the transition section begins; when it exceeds the high threshold, the transition section ends and the voice section begins; and when it falls back below the low threshold, the voice section ends and the end section begins;
step D3, extract the voice sections.
5. The method of claim 4, characterized in that: the short-time energy E_n is: E_n = x^2(n) * h(n), where x(n) is the voice signal, h(n) is the unit impulse response of the filter, * denotes convolution, and n is discrete time;
the zero-crossing rate Z_n is:
Z_n = (1/2) * Σ_m | sgn[x(m)] − sgn[x(m−1)] | * w(n − m),
where sgn[·] is the sign function:
sgn[x] = 1 for x ≥ 0, and sgn[x] = −1 for x < 0,
and w(·) is a Hamming window function and m is discrete time.
6. The method according to claim 3, characterized in that said extracting features by algorithm comprises the steps of:
step E1, frame the voice data;
step E2, window the voice data frame by frame;
step E3, apply a fast Fourier transform to obtain the spectrum of the voice data;
step E4, take the modulus to obtain the magnitude spectrum;
step E5, apply Mel filtering to the squared spectrum, then take the logarithm;
step E6, apply a discrete cosine transform to obtain the Mel cepstrum coefficients, which serve as the characteristic parameters of the voice data.
7. The method of claim 3, wherein the generating of the comparison model or the user model is as follows:
f1, generating an initial population: the generation count is set to 0, and an initial population is generated from the characteristic parameters of the voice data;
f2, selection operator: the top 10% of individuals with the best fitness are retained directly, without crossover or mutation; the remaining 90% of individuals are selected according to their respective selection probabilities;
f3, crossover operation: after the distance between two individuals is determined, a first crossover point is obtained arbitrarily according to the computed crossover probability, and a second point is obtained at the fixed distance from it;
f4, mutation operation: all genes contained in an individual undergo mutation according to that individual's mutation probability;
f5, calculating and evaluating the fitness of the individuals in the population, including the fitness of new individuals obtained after encoding;
f6, checking whether an exit condition is met, the exit conditions comprising: (a) the generation count reaches saturation, i.e. the preset maximum; (b) if neither of the two exit conditions is satisfied, the generation count is incremented by 1 and the method proceeds to step f7;
f7, if K-means clustering has not yet been performed, proceeding to step f8; otherwise returning to step f2;
f8, performing K-means clustering: one clustering pass is performed over all individuals in the population using the K-means algorithm; after the cluster centers are determined, the individuals are classified by the nearest-neighbor principle, then re-encoded with the corresponding coding technique to update the chromosomes, and the method returns to step f2.
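A minimal sketch of a hybrid genetic/K-means search in the spirit of steps f1–f8. The fitness function (negated mean distance to the feature vectors), mutation rate, and clustering schedule below are illustrative assumptions, not the patent's specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans_step(pop, k):
    """One K-means pass (step f8): pick k centers, assign by nearest
    center, update centers, and snap each individual to its center."""
    centers = pop[rng.choice(len(pop), k, replace=False)]
    labels = np.argmin(((pop[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    for j in range(k):
        if np.any(labels == j):
            centers[j] = pop[labels == j].mean(axis=0)
    return centers[labels]

def evolve(features, pop_size=40, k=4, max_gen=30):
    """Hybrid GA/K-means model search (steps f1-f8) over feature vectors."""
    dim = features.shape[1]
    pop = features[rng.choice(len(features), pop_size)]            # f1: init
    for gen in range(max_gen):                                     # f6(a): max generations
        # f5: fitness = negated mean distance to the training features
        fit = -np.linalg.norm(features[:, None] - pop[None], axis=-1).mean(0)
        order = np.argsort(fit)[::-1]
        elite = pop[order[: pop_size // 10]]                       # f2: keep top 10%
        probs = fit - fit.min() + 1e-9
        probs /= probs.sum()                                       # f2: selection probs
        parents = pop[rng.choice(pop_size, pop_size - len(elite), p=probs)]
        pt = rng.integers(1, dim, size=len(parents))               # f3: crossover points
        mates = parents[rng.permutation(len(parents))]
        children = np.where(np.arange(dim)[None] < pt[:, None], parents, mates)
        mut = rng.random(children.shape) < 0.05                    # f4: mutation
        children = children + mut * rng.normal(0, 0.1, children.shape)
        pop = np.vstack([elite, children])
        if gen % 5 == 4:                                           # f7/f8: periodic clustering
            pop = kmeans_step(pop, k)
    return pop
```

The returned population plays the role of the user model (or comparison model): a set of representative vectors in feature space.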
8. The method of claim 3, wherein the comparing the user model and the comparison model comprises:
g1, calculating the Euclidean distance between each frame of the comparison model and each frame of the user model to obtain a frame matching distance matrix, and establishing a grid whose vertical axis is the frame data of the comparison model, Y(1..M), and whose horizontal axis is the frame data of the user model, X(1..N); a grid intersection (T(X), R(Y)) represents a pairing of a user-model frame and a comparison-model frame, and M, N are the total numbers of frames of the comparison model and the user model respectively;
g2, finding the best path through the matching distance matrix, wherein the points (T(X−1), R(Y)), (T(X−1), R(Y−1)) and (T(X), R(Y−1)) are the possible predecessors of the point (T(X), R(Y)), and the accumulated path distance at (T(X), R(Y)) is:
D(T(X), R(Y)) = d(T(X), R(Y)) + min{D(T(X−1), R(Y)), D(T(X−1), R(Y−1)), D(T(X), R(Y−1))};
in the formula, D(T(X), R(Y)) is the accumulated distance from the point (1,1), with D(1,1) = 0; the search starts from the point (1,1) and recurses to the point (N, M); the corresponding matching distance is D(N, M), and the minimum matching distance D_MIN(N, M) is obtained.
9. The method of claim 8, wherein: the comparison result meeting the threshold means that the minimum matching distance D_MIN(N, M) is less than the threshold.
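The accumulated-distance recursion of claim 8 is the classic dynamic time warping (DTW) recurrence. A direct Python sketch, together with the claim-9 threshold test, follows; the frame vectors and the threshold value are illustrative:

```python
import numpy as np

def dtw_distance(template, user):
    """Claim-8 DTW: local distance d = Euclidean distance between frames;
    D(x, y) = d(x, y) + min(D(x-1, y), D(x-1, y-1), D(x, y-1)), D(1,1) = 0."""
    M, N = len(template), len(user)
    d = np.linalg.norm(template[:, None] - user[None], axis=-1)  # (M, N) frame distances
    D = np.full((M, N), np.inf)
    D[0, 0] = 0.0
    for y in range(M):
        for x in range(N):
            if x == 0 and y == 0:
                continue
            prev = min(D[y - 1, x] if y > 0 else np.inf,
                       D[y - 1, x - 1] if x > 0 and y > 0 else np.inf,
                       D[y, x - 1] if x > 0 else np.inf)
            D[y, x] = d[y, x] + prev
    return D[-1, -1]   # minimum matching distance D_MIN(N, M)

def unlock(template, user, threshold):
    """Claim 9: accept when the minimum matching distance is below the threshold."""
    return dtw_distance(template, user) < threshold
```

For identical frame sequences the best path runs along the diagonal with zero local distance, so the minimum matching distance is exactly 0.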
CN201910026378.1A 2019-01-11 2019-01-11 Mobile intelligent terminal for voiceprint recognition unlocking and method thereof Active CN109726538B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910026378.1A CN109726538B (en) 2019-01-11 2019-01-11 Mobile intelligent terminal for voiceprint recognition unlocking and method thereof

Publications (2)

Publication Number Publication Date
CN109726538A CN109726538A (en) 2019-05-07
CN109726538B true CN109726538B (en) 2020-12-29

Family

ID=66298973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910026378.1A Active CN109726538B (en) 2019-01-11 2019-01-11 Mobile intelligent terminal for voiceprint recognition unlocking and method thereof

Country Status (1)

Country Link
CN (1) CN109726538B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6044427A (en) * 1998-01-29 2000-03-28 Micron Electronics, Inc. Upgradable mobile processor module and method for implementing same
CN102841865A (en) * 2011-06-24 2012-12-26 上海芯豪微电子有限公司 High-performance caching system and method
CN105874530A (en) * 2013-10-30 2016-08-17 格林伊登美国控股有限责任公司 Predicting recognition quality of a phrase in automatic speech recognition systems

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100417858B1 (en) * 2001-07-27 2004-02-05 주식회사 하이닉스반도체 Low power type rambus dram
CN2613818Y (en) * 2003-04-11 2004-04-28 清华大学 Main controller for superconductive energy storage device
CN101241699B (en) * 2008-03-14 2012-07-18 北京交通大学 A speaker identification method for remote Chinese teaching
CN103685185B (en) * 2012-09-14 2018-04-27 上海果壳电子有限公司 Mobile equipment voiceprint registration, the method and system of certification
CN103198605A (en) * 2013-03-11 2013-07-10 成都百威讯科技有限责任公司 Indoor emergent abnormal event alarm system
US9817813B2 (en) * 2014-01-08 2017-11-14 Genesys Telecommunications Laboratories, Inc. Generalized phrases in automatic speech recognition systems



Similar Documents

Publication Publication Date Title
EP1704668B1 (en) System and method for providing claimant authentication
Soltane et al. Face and speech based multi-modal biometric authentication
WO2019153404A1 (en) Smart classroom voice control system
CN110570869B (en) Voiceprint recognition method, device, equipment and storage medium
CN1202687A (en) Speaker recognition over large population with fast and detailed matches
WO2007147042A2 (en) Voice-based multimodal speaker authentication using adaptive training and applications thereof
JPS6217240B2 (en)
CN109410956B (en) Object identification method, device, equipment and storage medium of audio data
CN101540170B (en) Voiceprint recognition method based on biomimetic pattern recognition
CN111104852B (en) Face recognition technology based on heuristic Gaussian cloud transformation
CN107481736A (en) A kind of vocal print identification authentication system and its certification and optimization method and system
CN112507311A (en) High-security identity verification method based on multi-mode feature fusion
CN113886792A (en) Application method and system of print control instrument combining voiceprint recognition and face recognition
CN110570870A (en) Text-independent voiceprint recognition method, device and equipment
Dimaunahan et al. MFCC and VQ voice recognition based ATM security for the visually disabled
CN109545226B (en) Voice recognition method, device and computer readable storage medium
CN109726538B (en) Mobile intelligent terminal for voiceprint recognition unlocking and method thereof
Soltane et al. Soft decision level fusion approach to a combined behavioral speech-signature biometrics verification
JP7173379B2 (en) Speaker recognition system and method of use
CN113241081A (en) Far-field speaker authentication method and system based on gradient inversion layer
KR100560425B1 (en) Apparatus for registrating and identifying voice and method thereof
Gupta et al. Speech Recognition Using Correlation Technique
JP2015055835A (en) Speaker recognition device, speaker recognition method, and speaker recognition program
CN116417000A (en) Power grid dispatching identity authentication method and system based on voiceprint recognition
CN114333840A (en) Voice identification method and related device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220617

Address after: 518000 1609, Changhong science and technology building, No. 18, Keji South 12th Road, high tech Zone, Yuehai street, Nanshan District, Shenzhen, Guangdong

Patentee after: Shenzhen Tongchuang Technology Co.,Ltd.

Address before: 518052 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Patentee before: Li Qingyong
