CN114511914A - Face recognition method and device and terminal equipment - Google Patents


Info

Publication number: CN114511914A
Application number: CN202210407077.5A
Granted publication: CN114511914B
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: face, feature data, identification, ultrasonic, preset
Inventor: 谢俊
Applicant and current assignee: Yihuiyun Intelligent Technology Shenzhen Co ltd
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion)

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods


Abstract

The invention discloses a face recognition method, a face recognition device and terminal equipment. The method comprises: acquiring face image information of a person to be identified; performing image feature extraction on the face image information to obtain face image feature data of the person to be identified; inputting the face image feature data into a recognition degree model to calculate the face recognition degree and obtain a recognition degree coefficient; if the recognition degree coefficient is greater than a preset threshold, acquiring an ultrasonic signal reflected by the face of the person to be identified; extracting face ultrasonic feature data of the person to be identified from the acquired ultrasonic signal; and comparing the face image feature data and the face ultrasonic feature data respectively with each feature data template in a preset feature database to obtain identity information of the person to be identified. The invention avoids invalid computation, reduces computational load, and improves the accuracy of face recognition.

Description

Face recognition method and device and terminal equipment
Technical Field
The invention relates to the technical field of face recognition, and in particular to a face recognition method, a face recognition device and terminal equipment.
Background
As face recognition technology has matured, more and more scenarios, such as access control and security gates, use it to verify the identity of people passing through. Face recognition systems generally collect a person's face information in the form of face images. However, in certain settings, such as factories or hospitals, people often wear masks that occlude part of the face; in crowded scenes, faces may likewise be partially blocked by other people.
In these cases, if complete face information cannot be acquired, the face recognition device may report an error or its accuracy may drop, and computing power is occupied and wasted. Outdoor conditions cause similar problems: when a face is lit by strong sunlight or artificial light, the captured face information may be overexposed, which also reduces recognition accuracy.
Disclosure of Invention
Embodiments of the invention aim to provide a face recognition method, a face recognition device and terminal equipment that improve the accuracy of face recognition.
In order to solve the above technical problem, in a first aspect, an embodiment of the present invention provides a face recognition method, including:
acquiring face image information of a person to be identified;
performing image feature extraction on the face image information to obtain face image feature data of the person to be identified;
inputting the face image feature data into a recognition degree model to calculate the face recognition degree and obtain a recognition degree coefficient;
if the recognition degree coefficient is greater than a preset threshold, acquiring an ultrasonic signal reflected by the face of the person to be identified;
extracting face ultrasonic feature data of the person to be identified from the acquired ultrasonic signal;
and comparing the face image feature data and the face ultrasonic feature data respectively with each feature data template in a preset feature database to obtain identity information of the person to be identified, wherein each feature data template comprises the identity information of a designated user together with image feature data and ultrasonic feature data corresponding to that user's face.
As a preferred scheme, inputting the face image feature data into the recognition degree model to calculate the face recognition degree and obtain the recognition degree coefficient specifically comprises:
dividing the face image feature data by face region through a classifier to obtain a feature data set corresponding to each preset face region, wherein the preset face regions comprise the eyes, forehead, nose, mouth, cheeks and chin;
and inputting the feature data sets corresponding to the preset face regions into a neural network for weighting, and then calculating the recognition degree coefficient from the weighted feature data sets.
As a preferred scheme, the neural network comprises a weight value corresponding to each preset face region;
inputting the feature data sets corresponding to the preset face regions into the neural network for weighting, and then calculating the recognition degree coefficient from the weighted feature data sets, specifically comprises:
convolving the feature data set corresponding to each preset face region with the weight value corresponding to that region through the neural network to obtain weighted feature data for each preset face region;
and summing all the weighted feature data, then performing normalization to generate the recognition degree coefficient.
As a preferred scheme, before inputting the feature data sets corresponding to the preset face regions into the neural network for weighting and calculating the recognition degree coefficient, the method further comprises:
acquiring collected historical face image information according to a received recognition instruction, wherein the recognition instruction comprises a target face part;
extracting feature data of the target face part from the historical face image information to obtain sample data;
and inputting the sample data into the neural network for iterative training, so as to increase the weight of the target face part in the neural network and obtain an updated neural network.
As a preferred scheme, after inputting the face image feature data into the recognition degree model and obtaining the recognition degree coefficient, the method further comprises:
if the recognition degree coefficient is smaller than the preset threshold, re-acquiring the face image information of the person to be identified.
As a preferred scheme, the face image information is visible light image information or infrared image information.
As a preferred scheme, comparing the face image feature data and the face ultrasonic feature data respectively with each feature data template in the preset feature database to obtain the identity information of the person to be identified specifically comprises:
clustering the face image feature data and the face ultrasonic feature data with each feature data template in the preset feature database by a clustering method, and taking the designated user corresponding to the cluster centers of the face image feature data and the face ultrasonic feature data as the target user;
and outputting the identity information of the target user.
As a preferred scheme, after the clustering, the method further comprises:
if no designated user corresponds to the cluster centers of the face image feature data and the face ultrasonic feature data, outputting a recognition-failure result.
In a second aspect, an embodiment of the present invention further provides a face recognition apparatus, including:
a face image acquisition module, configured to acquire face image information of a person to be identified;
a first feature extraction module, configured to perform image feature extraction on the face image information to obtain face image feature data of the person to be identified;
a coefficient calculation module, configured to input the face image feature data into a recognition degree model to calculate the face recognition degree and obtain a recognition degree coefficient;
an ultrasonic signal acquisition module, configured to acquire an ultrasonic signal reflected by the face of the person to be identified if the recognition degree coefficient is greater than a preset threshold;
a second feature extraction module, configured to extract face ultrasonic feature data of the person to be identified from the acquired ultrasonic signal;
and an identification module, configured to compare the face image feature data and the face ultrasonic feature data respectively with each feature data template in a preset feature database to obtain identity information of the person to be identified, wherein each feature data template comprises the identity information of a designated user together with image feature data and ultrasonic feature data corresponding to that user's face.
Yet another embodiment of the present invention provides a terminal device, which includes a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor implements the method according to the first aspect when executing the computer program.
Yet another embodiment of the present invention provides a computer-readable storage medium comprising a stored computer program, wherein the computer program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the method according to the first aspect.
Compared with the prior art, embodiments of the invention achieve at least one of the following beneficial effects:
The face recognition method first acquires face image information of the person to be identified, extracts image features from it, and inputs the resulting face image feature data into a recognition degree model to obtain a recognition degree coefficient. This coefficient indicates whether the degree of facial occlusion of the person to be identified would interfere with recognition. Only if the coefficient exceeds a preset threshold, meaning the occlusion does not prevent recognition, does the method continue, which avoids invalid computation and reduces computational load. The method then acquires the ultrasonic signal reflected by the person's face, extracts face ultrasonic feature data from it, and compares the face image feature data and the face ultrasonic feature data with each feature data template in a preset feature database to obtain the person's identity information; each feature data template comprises the identity information of a designated user together with image feature data and ultrasonic feature data corresponding to that user's face. Combining the face image information with the acquired ultrasonic signal supplements face information lost to overexposure or occlusion, so that recognition proceeds smoothly and accurately, improving the accuracy of face recognition.
Correspondingly, the invention also provides a face recognition device and equipment.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of an embodiment of a face recognition method of the present invention;
FIG. 2 is a schematic structural diagram of an embodiment of a face recognition apparatus according to the present invention;
fig. 3 is a schematic structural diagram of an embodiment of the terminal device of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, the terms "first", "second", "third", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, features defined as "first," "second," "third," etc. may explicitly or implicitly include one or more of the features. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly: for example, as a fixed connection, a removable connection, or an integral connection; as a mechanical or an electrical connection; as a direct connection or an indirect connection through an intermediate medium; or as internal communication between two elements. Those skilled in the art can understand the specific meanings of these terms in the present invention on a case-by-case basis.
In the description of the present invention, it is to be noted that, unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terms used in the specification of the present invention are for the purpose of describing specific embodiments only, and are not intended to limit the present invention, and those skilled in the art can understand the specific meanings of the above terms in the present invention in a specific case.
Example one
An embodiment of the present invention provides a face recognition method; please refer to fig. 1, which is a flowchart of an embodiment of the face recognition method of the present invention. The embodiment can be applied in scenarios such as access control or security checkpoints where identity must be verified. The method may be executed by a face recognition apparatus, which may be a processor, an intelligent terminal, a tablet, a PC, or the like. In this embodiment, the face recognition method may include steps S110 to S160, as follows:
S110: acquiring face image information of a person to be identified;
because the face recognition method needs to recognize the face information of the person to be recognized, the face information of the person to be recognized needs to be collected. Specifically, the acquisition of the face information is realized by acquiring a face image of a person to be identified, and the acquired face image is the face image information. Optionally, the face of the person to be recognized may be photographed by a camera of the face recognition device or an external camera connected to the face recognition device, so as to complete the acquisition of the face image information of the person to be recognized.
As one preferable scheme, the face image information is visible light image information or infrared image information. Because the external environment sometimes can not provide sufficient illumination to satisfy the condition that the camera shot clear face image, can adopt infrared camera to realize that illumination is not enough or clear face image under the night condition shoots. Therefore, the acquired face image information may include visible light image information or infrared image information.
S120: performing image feature extraction on the face image information to obtain face image feature data of the person to be identified;
Each pixel of the collected face image carries facial feature information of the person to be identified, and extracting image features from the face image information yields face image feature data containing that information. Specifically, the extraction can be implemented with a statistical feature method, a geometric feature method, or a connectionist (neural-network-based) recognition method.
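As an illustrative sketch of this step (not the patent's actual extractor), a statistical feature method can be approximated by a normalized intensity histogram plus simple moment statistics of the pixel values; the 32-bin size, the function name, and the synthetic test image are all assumptions:

```python
import numpy as np

def extract_image_features(face_image: np.ndarray, bins: int = 32) -> np.ndarray:
    """Extract a simple statistical feature vector from a grayscale face image:
    a normalized intensity histogram plus global brightness and contrast."""
    hist, _ = np.histogram(face_image, bins=bins, range=(0, 256))
    hist = hist / max(hist.sum(), 1)               # normalize counts to a distribution
    stats = np.array([face_image.mean() / 255.0,   # global brightness
                      face_image.std() / 255.0])   # global contrast
    return np.concatenate([hist, stats])

# Example: a synthetic 64x64 "face image" with random intensities
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64))
features = extract_image_features(image)
```

A real system would of course run this on a detected, cropped face region rather than a random array; the point is only the shape of the pipeline from image to feature vector.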
S130: inputting the face image feature data into a recognition degree model to calculate the face recognition degree and obtain a recognition degree coefficient;
In certain settings, such as factories or hospitals, people often wear masks that occlude part of the face, and in crowded scenes faces may be blocked by other people. If complete face information cannot be acquired, the face recognition apparatus may err or recognize inaccurately, and by the time an incorrect result is produced it has already run a full recognition pass, wasting computing power. To avoid such invalid computation, the face image feature data is first input into a pre-trained recognition degree model, which calculates how recognizable the face is; the resulting recognition degree coefficient reflects the degree to which the face of the person to be identified is occluded. If the coefficient is greater than the preset threshold, the occlusion does not prevent recognition and the process may continue, avoiding invalid computation and reducing computational load. Optionally, the recognition degree model may consist of several neural network layers.
As a preferred scheme, inputting the face image feature data into the recognition degree model to calculate the face recognition degree and obtain the recognition degree coefficient specifically comprises steps S210 and S220:
S210: dividing the face image feature data by face region through a classifier to obtain a feature data set corresponding to each preset face region, wherein the preset face regions comprise the eyes, forehead, nose, mouth, cheeks and chin;
in detail, a pre-trained classifier partitions the face image feature data by face region, yielding one feature data set per preset face region. The preset face regions can be user-defined, or the face can simply be divided into the conventional regions listed above: eyes, forehead, nose, mouth, cheeks and chin.
S220: inputting the feature data sets corresponding to the preset face regions into a neural network for weighting, and then calculating the recognition degree coefficient from the weighted feature data sets.
In detail, face recognition generally compares and judges a number of distinctive key face parts and their surrounding areas, so whether the collected face image feature data contains these key parts, and contains enough of their features, is a precondition for smooth and accurate recognition. The distinctive key face parts may be the facial features, cheeks, chin and similar regions.
To judge whether the collected face image feature data of the person to be identified satisfies this precondition, a neural network with weight values can be trained from samples of historical face image feature data from successful recognitions. The network assigns each face region a weight value according to how important that region is during recognition: the larger the weight, the greater the region's contribution to face recognition.
The feature data sets corresponding to the preset face regions are then input into the neural network for weighting, which reveals whether the occlusion of the current person's face interferes with recognition. Specifically, when part of the face is occluded, the collected face image feature data lacks the feature data of the occluded part. Because the trained network assigns weights according to each region's importance for recognition, feature data missing from an unimportant occluded part receives a small weight and barely affects the recognition degree coefficient computed from the weighted feature data sets, whereas feature data missing from an important occluded part receives a large weight and lowers the coefficient accordingly. The magnitude of the computed coefficient therefore indicates whether the occlusion of the current person's face interferes with recognition.
As a preferred scheme, the neural network comprises a weight value corresponding to each preset face region; in detail, the greater a preset face region's role in the recognition process, the larger its corresponding weight value.
Inputting the feature data sets corresponding to the preset face regions into the neural network for weighting, and then calculating the recognition degree coefficient from the weighted feature data sets, specifically comprises:
convolving the feature data set corresponding to each preset face region with the weight value corresponding to that region through the neural network to obtain weighted feature data for each preset face region;
in detail, the weight values corresponding to the preset face regions may be convolution kernels set in several neural network layers. After the feature data sets are input into the network, each region's feature data set is convolved with that region's weight value, yielding weighted feature data for each preset face region.
Summing all the weighted feature data and then performing normalization generates the recognition degree coefficient.
To facilitate comparison and judgment, the weighted feature data are summed and the result is normalized to produce the recognition degree coefficient.
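The weight-sum-normalize step can be sketched without the neural network machinery: each region's feature set is reduced to a completeness score, multiplied by that region's weight, and the weighted scores are summed. The specific weight values, the expected feature count, and the function name are assumptions, not values from the patent:

```python
# Illustrative region weights (assumed); they sum to 1, so the weighted
# sum is already normalized to [0, 1].
REGION_WEIGHTS = {"eyes": 0.30, "forehead": 0.10, "nose": 0.20,
                  "mouth": 0.20, "cheeks": 0.10, "chin": 0.10}

EXPECTED_FEATURES = 10  # assumed feature count for a fully visible region

def recognition_degree_coefficient(region_features: dict) -> float:
    """Weighted, normalized measure of how much identifying face data is present."""
    total = 0.0
    for region, weight in REGION_WEIGHTS.items():
        n = len(region_features.get(region, []))
        completeness = min(n / EXPECTED_FEATURES, 1.0)  # occluded region -> low score
        total += weight * completeness
    return total

# Fully visible face vs. a face whose lower half is occluded by a mask
full = {r: list(range(10)) for r in REGION_WEIGHTS}
masked = {"eyes": list(range(10)), "forehead": list(range(10)),
          "nose": [], "mouth": [], "cheeks": [], "chin": []}
```

With these assumed weights, the fully visible face scores 1.0 while the masked face scores 0.4, matching the text's claim that missing data from heavily weighted regions lowers the coefficient.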
In certain settings, such as factories or hospitals, people often wear masks and cannot easily remove them for face recognition. The recognition emphasis of the face recognition apparatus must then be adjusted to the use scenario, emphasizing the upper half of the face of the person to be identified. Specifically, this can be implemented by adjusting the weight value corresponding to the upper-half face region in the neural network of the recognition degree model.
As a preferred scheme, before inputting the feature data sets corresponding to the preset face regions into the neural network for weighting and calculating the recognition degree coefficient, the method further comprises:
acquiring collected historical face image information according to a received recognition instruction, wherein the recognition instruction comprises a target face part;
in detail, a recognition instruction is generated for the face part whose weight value is to be adjusted, and the collected historical face image information is obtained according to this instruction. For example, if the recognition emphasis should shift to the upper half of the face, the upper-half face region is set as the target face part.
extracting feature data of the target face part from the historical face image information to obtain sample data;
and inputting the sample data into the neural network for iterative training, so as to increase the weight of the target face part in the neural network and obtain an updated neural network.
Iteratively training the network on sample data for the target face part increases that part's weight, and the updated network shifts the recognition emphasis of the apparatus toward the target face region, letting the apparatus adapt flexibly to different face recognition scenarios.
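The net effect of this retraining, larger weights for the target face part, can be sketched directly, without the iterative training itself: boost the target regions' weights and renormalize so they still sum to 1. The boost factor and starting weights are assumptions for illustration:

```python
# Simplified, non-neural sketch of the weight-update outcome: an actual
# implementation would reach a similar weight shift implicitly, by
# iteratively training the network on target-part sample data.
def boost_target_weights(weights: dict, target_parts, factor: float = 2.0) -> dict:
    """Multiply target-region weights by `factor`, then renormalize to sum to 1."""
    boosted = {r: (w * factor if r in target_parts else w) for r, w in weights.items()}
    total = sum(boosted.values())
    return {r: w / total for r, w in boosted.items()}

weights = {"eyes": 0.30, "forehead": 0.10, "nose": 0.20,
           "mouth": 0.20, "cheeks": 0.10, "chin": 0.10}

# Mask scenario: emphasize the upper half of the face
updated = boost_target_weights(weights, target_parts={"eyes", "forehead"})
```

After the update the upper-face regions carry relatively more weight and the (usually masked) lower-face regions relatively less, which is exactly the shift in recognition emphasis the text describes.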
S140: if the identification degree coefficient is larger than a preset threshold value, acquiring an ultrasonic signal reflected by the face of the person to be identified;
In detail, the preset threshold is obtained by inputting marginally recognizable face image samples into the trained recognition degree model. If the recognition degree coefficient is greater than the preset threshold, the face image feature data contains enough valid face information for recognition, so recognition can continue, avoiding invalid computation and reducing computational load.
Even so, the acquired face image information may still suffer from overexposure, occlusion and similar problems that cause face information to be lost. When the recognition degree coefficient is judged to be greater than the preset threshold, the ultrasonic signal reflected by the face of the person to be identified is therefore acquired to supplement the missing information, so that recognition proceeds smoothly and accurately, improving accuracy. Specifically, an ultrasonic transmitter connected to the face recognition apparatus can be controlled to emit an ultrasonic signal toward the face of the person to be identified, and the signal reflected by the face is then received by an ultrasonic sensor connected to the apparatus.
As one preferred scheme, after the face image feature data is input into the recognition degree model to perform face recognizability calculation and the recognition degree coefficient is obtained, the method further includes:
if the recognition degree coefficient is smaller than the preset threshold, re-acquiring the face image information of the person to be recognized. Specifically, a recognition degree coefficient below the preset threshold indicates that the face image feature data contains too little face information for effective recognition; the face image information of the person to be recognized is therefore collected again until the recognition degree coefficient exceeds the preset threshold.
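The overall control flow of steps S130 through S160, including the re-acquisition branch taken when the recognition degree coefficient falls below the preset threshold, can be sketched as follows; every helper callable (`capture_image`, `recognizability`, `capture_echo`, `match`) and the threshold value are hypothetical stand-ins, not interfaces defined by this embodiment:

```python
# Hypothetical control flow for steps S130-S160; all helpers are stand-ins.

def face_recognition_pipeline(capture_image, recognizability, capture_echo,
                              match, threshold=0.6, max_attempts=3):
    for _ in range(max_attempts):
        image_features = capture_image()               # S110-S120
        coefficient = recognizability(image_features)  # S130
        if coefficient > threshold:                    # S140
            ultrasonic_features = capture_echo()       # S150
            return match(image_features, ultrasonic_features)  # S160
        # Coefficient too low: re-acquire the face image and try again.
    return None   # face never became recognizable within the attempt budget

# Toy run: the coefficient clears the threshold, so matching happens.
result = face_recognition_pipeline(
    capture_image=lambda: "image-features",
    recognizability=lambda features: 0.9,
    capture_echo=lambda: "echo-features",
    match=lambda img, echo: "user-001",
)
assert result == "user-001"
```

The retry loop realizes the preferred scheme above: images are collected again until the coefficient exceeds the threshold (capped here by `max_attempts`, an added assumption).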
S150: extracting the face ultrasonic feature data of the person to be recognized from the acquired ultrasonic signals;
each signal in the acquired ultrasonic signals carries face feature information of the person to be recognized; by extracting signal features from these ultrasonic signals, face ultrasonic feature data containing the face feature information of the person to be recognized is obtained.
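One plausible way to extract a fixed-length feature vector from a reflected echo (the embodiment does not prescribe a transform, so this is purely an assumption) is to take the normalized magnitude spectrum of the signal:

```python
import numpy as np

# Hypothetical ultrasonic feature extraction: normalized magnitude spectrum
# of the echo, truncated to a fixed number of frequency bins.

def ultrasonic_features(echo, n_bins=32):
    spectrum = np.abs(np.fft.rfft(echo))
    spectrum /= np.linalg.norm(spectrum) + 1e-9   # scale-invariant features
    return spectrum[:n_bins]

# Synthetic 40 kHz echo sampled at 1 MHz, standing in for a real reflection.
t = np.arange(1024) / 1e6
echo = np.sin(2 * np.pi * 40e3 * t)
features = ultrasonic_features(echo)
assert features.shape == (32,)
```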
S160: respectively comparing the face image feature data and the face ultrasonic feature data with each feature data template in a preset feature database to obtain the identity identification information of the person to be recognized; each feature data template comprises the identity information of a designated user, together with the image feature data and ultrasonic feature data corresponding to that user's face.
To recognize specific users (for example, registered members), users to be recognized can be enrolled in advance as designated users, and the identity information of each designated user is collected together with the image feature data and ultrasonic feature data corresponding to that user's face. The collected information for each designated user is stored in the database as a feature data template. During data comparison, each feature data template is retrieved from the database and compared respectively with the acquired face image feature data and face ultrasonic feature data, thereby identifying the person to be recognized.
As one preferred scheme, the comparing the face image feature data and the face ultrasonic feature data with each feature data template in a preset feature database respectively to obtain the identification information of the person to be identified specifically includes:
clustering the face image feature data and the face ultrasonic feature data respectively with each feature data template in a preset feature database by a clustering method, and determining the designated user corresponding to the cluster center of the face image feature data and the face ultrasonic feature data as the target user;
and outputting the identity information of the target user.
As one preferred scheme, after clustering the face image feature data and the face ultrasonic feature data with each feature data template in a preset feature database by a clustering method, the method further comprises:
and if the clustering centers of the face image characteristic data and the face ultrasonic characteristic data do not have corresponding designated users, outputting a result of failed recognition.
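A minimal sketch of this comparison, assuming Euclidean distance to template centers and a rejection radius for the recognition-failure case (both are assumptions; the embodiment fixes neither the metric nor the threshold):

```python
import numpy as np

# Sketch of the comparison step: concatenate image and ultrasonic features,
# find the nearest feature data template, and report recognition failure when
# no template lies within a rejection radius.

def identify(image_features, ultrasonic_features, templates, reject_radius=1.0):
    query = np.concatenate([image_features, ultrasonic_features])
    best_user, best_distance = None, float("inf")
    for user_id, template in templates.items():
        distance = np.linalg.norm(query - template)
        if distance < best_distance:
            best_user, best_distance = user_id, distance
    if best_distance > reject_radius:
        return None   # no corresponding designated user: recognition failed
    return best_user

# Toy templates: each designated user maps to one stored feature vector.
templates = {"alice": np.ones(4), "bob": np.zeros(4)}
assert identify(np.ones(2), np.ones(2), templates) == "alice"
assert identify(np.full(2, 10.0), np.full(2, 10.0), templates) is None
```

The `None` branch corresponds to the failure output above: the query's cluster center has no corresponding designated user.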
In summary, the face recognition method provided by this embodiment acquires the face image information of the person to be recognized; performs image feature extraction on the face image information to obtain the face image feature data of the person to be recognized; inputs the face image feature data into a recognition degree model to perform face recognizability calculation and obtain a recognition degree coefficient; and, if the recognition degree coefficient is greater than a preset threshold, acquires the ultrasonic signal reflected by the face of the person to be recognized. Because the face image feature data is first input into the recognition degree model to calculate the face recognizability, whether the degree of occlusion of the face affects recognition is determined in advance; a recognition degree coefficient above the preset threshold indicates that occlusion does not prevent recognition, so recognition can continue, invalid operations are avoided and the computational load is reduced. The method then extracts the face ultrasonic feature data of the person to be recognized from the acquired ultrasonic signal, and compares the face image feature data and the face ultrasonic feature data respectively with each feature data template in a preset feature database to obtain the identity identification information of the person to be recognized, each feature data template comprising the identity information of a designated user and the image feature data and ultrasonic feature data corresponding to that user's face. By combining the face image information with the acquired face ultrasonic signal, face information lost to overexposure, occlusion and the like is supplemented, so that face recognition proceeds smoothly and its accuracy is improved.
Correspondingly, the invention also provides a face recognition device and equipment.
Embodiment Two
On the basis of the first embodiment, as shown in fig. 2, an embodiment of the present invention further provides a face recognition apparatus 2, including:
the face image information acquisition module 201 is used for acquiring face image information of a person to be identified;
a first feature extraction module 202, configured to perform image feature extraction on the face image information to obtain face image feature data of the person to be identified;
the coefficient calculation module 203 is configured to input the facial image feature data into an identification degree model to perform facial identifiability calculation, so as to obtain an identification degree coefficient;
an ultrasonic signal acquisition module 204, configured to acquire an ultrasonic signal reflected by the face of the person to be recognized if the recognition coefficient is greater than a preset threshold;
the second feature extraction module 205 is configured to extract human face ultrasonic feature data of the person to be recognized from the acquired ultrasonic signal;
the identification module 206 is configured to compare the facial image feature data and the facial ultrasonic feature data with each feature data template in a preset feature database, respectively, to obtain identity identification information of the person to be identified; the feature data template comprises identity information of a designated user, and image feature data and ultrasonic feature data corresponding to the face of the designated user.
As one of the preferable schemes, the coefficient calculating module 203 further includes:
the classification unit is used for performing face region division on the face image feature data through a classifier to obtain a plurality of feature data sets corresponding to the preset face regions; the preset face regions comprise the eyes, forehead, nose, mouth, cheeks and chin;
and the coefficient calculation unit is used for inputting the corresponding feature data sets in the preset human face region into a neural network for weighting, and then calculating according to the weighted feature data sets to obtain an identification coefficient.
As one of the preferable schemes, the neural network comprises weight values corresponding to each preset face part area;
a coefficient calculation unit, specifically configured to:
convolving, through the neural network, the feature data set corresponding to each preset face region with the weight value corresponding to that region to obtain the weighted feature data corresponding to each preset face region;
and summing all the weighted feature data, and then carrying out normalization processing to generate an identification coefficient.
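A minimal numeric sketch of the two units above, assuming a sigmoid for the unspecified "normalization processing" (the region scores, weights, and the sigmoid choice are all assumptions added for illustration):

```python
import math

# Numeric sketch of the coefficient calculation unit: weight each preset face
# region's feature response, sum, then normalize into (0, 1).

def recognition_coefficient(region_scores, region_weights):
    total = sum(region_weights[r] * region_scores[r] for r in region_scores)
    return 1.0 / (1.0 + math.exp(-total))   # assumed normalization: sigmoid

# Made-up values: a clearly visible upper face, mouth occluded by a mask.
scores = {"eyes": 0.9, "nose": 0.8, "mouth": 0.1}
weights = {"eyes": 0.5, "nose": 0.3, "mouth": 0.2}
coefficient = recognition_coefficient(scores, weights)
assert 0.0 < coefficient < 1.0
```

Comparing `coefficient` against the preset threshold then decides whether ultrasonic acquisition (S140) is triggered.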
As one preferable scheme, before inputting the feature data sets corresponding to the preset face region into a neural network for weighting, and then calculating an identification coefficient according to the weighted feature data sets, the method further includes:
acquiring collected historical face image information according to the received identification instruction; the identification instruction comprises a target face part;
extracting feature data of the target face part in the historical face image information to obtain sample data;
inputting the sample data into the neural network for iterative training to increase the weight of the target face part in the neural network and obtain an updated neural network.
As one of preferable schemes, the apparatus further comprises:
and the information re-acquisition module is used for re-acquiring the face image information of the person to be identified if the identification degree coefficient is smaller than a preset threshold value.
As one preferable scheme, the face image information is visible light image information or infrared image information.
As one preferred scheme, the identification module 206 specifically includes:
the clustering unit is used for clustering the facial image characteristic data and the facial ultrasonic characteristic data with each characteristic data template in a preset characteristic database respectively through a clustering method, and determining a designated user corresponding to the clustering centers of the facial image characteristic data and the facial ultrasonic characteristic data as a target user;
and the identity information output unit is used for outputting the identity information of the target user.
As one of the preferable schemes, the identification module 206 further includes:
and the recognition failure result output unit is used for outputting a recognition failure result if the clustering centers of the face image characteristic data and the face ultrasonic characteristic data do not have corresponding designated users.
In summary, the face recognition device provided by this embodiment collects the face image information of the person to be recognized; performs image feature extraction on the face image information to obtain the face image feature data of the person to be recognized; inputs the face image feature data into a recognition degree model to perform face recognizability calculation and obtain a recognition degree coefficient; and, if the recognition degree coefficient is greater than a preset threshold, acquires the ultrasonic signal reflected by the face of the person to be recognized. Because the face image feature data is first input into the recognition degree model to calculate the face recognizability, whether the degree of occlusion of the face affects recognition is determined in advance; a recognition degree coefficient above the preset threshold indicates that occlusion does not prevent recognition, so recognition can continue, invalid operations are avoided and the computational load is reduced. The device then extracts the face ultrasonic feature data of the person to be recognized from the acquired ultrasonic signal, and compares the face image feature data and the face ultrasonic feature data respectively with each feature data template in a preset feature database to obtain the identity identification information of the person to be recognized, each feature data template comprising the identity information of a designated user and the image feature data and ultrasonic feature data corresponding to that user's face. By combining the face image information with the acquired face ultrasonic signal, face information lost to overexposure, occlusion and the like is supplemented, so that face recognition proceeds smoothly and its accuracy is improved.
Correspondingly, the invention also provides a face recognition device and equipment.
Embodiment Three
Referring to fig. 3, an embodiment of the present invention further provides a terminal device, that is, a computer terminal device, comprising one or more processors and a memory. The memory is coupled to the processor and is used to store one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the face recognition method of any one of the embodiments described above.
The processor is used for controlling the overall operation of the computer terminal device so as to complete all or part of the steps of the terminal device control device 100 based on the human face recognition. The memory is used to store various types of data to support the operation at the computer terminal device, which data may include, for example, instructions for any application or method operating on the computer terminal device, as well as application-related data. The Memory may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk, or optical disk.
In an exemplary embodiment, the computer terminal Device may be implemented by one or more Application Specific Integrated Circuits (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor or other electronic components, and is configured to perform the above-mentioned face recognition-based terminal device control method and achieve technical effects consistent with the above-mentioned methods.
In another exemplary embodiment, a computer readable storage medium including program instructions is further provided, which when executed by a processor, implement the steps of the terminal device control method based on face recognition according to any one of the above embodiments. For example, the computer readable storage medium may be the above-mentioned memory including program instructions, which are executable by the processor of the computer terminal device to implement the above-mentioned terminal device control method based on face recognition, and achieve the technical effects consistent with the above-mentioned method.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (10)

1. A face recognition method, comprising:
acquiring face image information of a person to be identified;
carrying out image feature extraction on the face image information to obtain face image feature data of the person to be recognized;
inputting the facial image feature data into an identification degree model to calculate the facial identification degree, and obtaining an identification degree coefficient;
if the identification degree coefficient is larger than a preset threshold value, acquiring an ultrasonic signal reflected by the face of the person to be identified;
extracting the face ultrasonic feature data of the person to be recognized from the acquired ultrasonic signals;
respectively comparing the face image feature data and the face ultrasonic feature data with each feature data template in a preset feature database to obtain the identity identification information of the person to be recognized; the feature data template comprises identity information of a designated user, and image feature data and ultrasonic feature data corresponding to the face of the designated user.
2. The method according to claim 1, wherein the inputting the facial image feature data into a recognition degree model for face recognition degree calculation to obtain a recognition degree coefficient specifically comprises:
carrying out face region division on the face image feature data through a classifier to obtain a plurality of feature data sets corresponding to the preset face region; the preset human face parts comprise eyes, a forehead, a nose, a mouth, cheeks and a chin;
and inputting the corresponding feature data sets in the preset human face region into a neural network for weighting, and then calculating according to the weighted feature data sets to obtain an identification coefficient.
3. The face recognition method of claim 2, wherein the neural network comprises weight values corresponding to each of the predetermined face regions;
inputting the feature data sets corresponding to the preset human face region into a neural network for weighting, and then calculating according to the weighted feature data sets to obtain an identification coefficient, wherein the identification coefficient specifically comprises the following steps:
convolving, through the neural network, the feature data set corresponding to each preset face region with the weight value corresponding to the preset face region to obtain the weighted feature data corresponding to each preset face region;
and summing all the weighted feature data, and then carrying out normalization processing to generate an identification coefficient.
4. The face recognition method according to claim 3, wherein before inputting the feature data sets corresponding to the preset face regions into a neural network for weighting, and then calculating a recognition coefficient according to the weighted feature data sets, the method further comprises:
acquiring collected historical face image information according to the received identification instruction; the identification instruction comprises a target face part;
extracting feature data of the target face part in the historical face image information to obtain sample data;
inputting the sample data into the neural network for iterative training to increase the weight of the target face part in the neural network and obtain an updated neural network.
5. The method of claim 1, wherein after inputting the facial image feature data into a recognition degree model for face recognition degree calculation to obtain a recognition degree coefficient, the method further comprises:
and if the identification degree coefficient is smaller than a preset threshold value, acquiring the face image information of the person to be identified again.
6. The face recognition method according to claim 1, wherein the face image information is visible light image information or infrared image information.
7. The face recognition method according to claim 1, wherein the comparing the face image feature data and the face ultrasonic feature data with each feature data template in a preset feature database respectively to obtain the identification information of the person to be recognized specifically comprises:
clustering the facial image feature data and the facial ultrasonic feature data with each feature data template in a preset feature database respectively through a clustering method, and determining designated users corresponding to clustering centers of the facial image feature data and the facial ultrasonic feature data as target users;
and outputting the identity information of the target user.
8. The face recognition method according to claim 7, wherein after clustering the face image feature data and the face ultrasonic feature data with each feature data template in a preset feature database by a clustering method, the method further comprises:
and if the clustering centers of the face image characteristic data and the face ultrasonic characteristic data do not have corresponding designated users, outputting a result of failed recognition.
9. A face recognition apparatus, comprising:
the face image information acquisition module is used for acquiring face image information of a person to be identified;
the first feature extraction module is used for extracting image features of the face image information to obtain face image feature data of the person to be identified;
the coefficient calculation module is used for inputting the facial image feature data into an identification degree model to calculate the facial identifiability so as to obtain an identification degree coefficient;
the ultrasonic signal acquisition module is used for acquiring an ultrasonic signal reflected by the face of the person to be identified if the identification degree coefficient is greater than a preset threshold value;
the second feature extraction module is used for extracting the face ultrasonic feature data of the person to be recognized from the acquired ultrasonic signals;
the identification module is used for respectively comparing the face image characteristic data and the face ultrasonic characteristic data with each characteristic data template in a preset characteristic database to obtain the identity identification information of the person to be identified; the feature data template comprises identity information of a designated user, and image feature data and ultrasonic feature data corresponding to the face of the designated user.
10. A terminal device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the face recognition method according to any one of claims 1 to 8 when executing the computer program.
CN202210407077.5A 2022-04-19 2022-04-19 Face recognition method and device and terminal equipment Active CN114511914B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210407077.5A CN114511914B (en) 2022-04-19 2022-04-19 Face recognition method and device and terminal equipment


Publications (2)

Publication Number Publication Date
CN114511914A true CN114511914A (en) 2022-05-17
CN114511914B CN114511914B (en) 2022-08-02

Family

ID=81555409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210407077.5A Active CN114511914B (en) 2022-04-19 2022-04-19 Face recognition method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN114511914B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100150450A1 (en) * 2008-12-11 2010-06-17 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and image capturing apparatus
CN106980836A (en) * 2017-03-28 2017-07-25 北京小米移动软件有限公司 Auth method and device
CN110135362A (en) * 2019-05-19 2019-08-16 北京深醒科技有限公司 A kind of fast face recognition method based under infrared camera
CN110688973A (en) * 2019-09-30 2020-01-14 Oppo广东移动通信有限公司 Equipment control method and related product
CN111738078A (en) * 2020-05-19 2020-10-02 云知声智能科技股份有限公司 Face recognition method and device
CN112183219A (en) * 2020-09-03 2021-01-05 广州市标准化研究院 Public safety video monitoring method and system based on face recognition
CN112613432A (en) * 2020-12-28 2021-04-06 杭州海关技术中心 Customs inspection system for 'water visitor' judgment based on face-human eye detection


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YAN QINGWEI: "Research on Face Detection Methods", Computer Security *

Also Published As

Publication number Publication date
CN114511914B (en) 2022-08-02

Similar Documents

Publication Publication Date Title
US8295556B2 (en) Apparatus and method for determining line-of-sight direction in a face image and controlling camera operations therefrom
EP2009577B1 (en) Image-processing apparatus and method, program, and storage medium
CN110245561B (en) Face recognition method and device
KR101426952B1 (en) Information processing apparatus, information processing method, person identification apparatus, and method of producing/updating dictionary data in person identification apparatus
US5835616A (en) Face detection using templates
WO2019137131A1 (en) Image processing method, apparatus, storage medium, and electronic device
US10529103B2 (en) Image processing apparatus and method for collating a plurality of images
CN108229369A (en) Image capturing method, device, storage medium and electronic equipment
CN112639871B (en) Biometric authentication system, biometric authentication method, and recording medium
CN112364827B (en) Face recognition method, device, computer equipment and storage medium
KR102215522B1 (en) System and method for authenticating user
JP5648452B2 (en) Image processing program and image processing apparatus
CN110929555B (en) Face recognition method and electronic device using same
CN116128814A (en) Standardized acquisition method and related device for tongue diagnosis image
KR102369152B1 (en) Realtime Pose recognition system using artificial intelligence and recognition method
CN114511914B (en) Face recognition method and device and terminal equipment
CN113239774A (en) Video face recognition system and method
CN113837006A (en) Face recognition method and device, storage medium and electronic equipment
CN117690583A (en) Internet of things-based rehabilitation and nursing interactive management system and method
CN114529962A (en) Image feature processing method and device, electronic equipment and storage medium
CN205541026U (en) Double - circuit entrance guard device
CN110148234A (en) Campus brush face picks exchange method, storage medium and system
Amjed et al. Noncircular iris segmentation based on weighted adaptive hough transform using smartphone database
CN113038257B (en) Volume adjusting method and device, smart television and computer readable storage medium
CN114550266A (en) Face recognition method and device, intelligent door lock and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant