CN113420663B - Child face recognition method and system
- Publication number
- CN113420663B (application CN202110698225.9A)
- Authority
- CN
- China
- Prior art keywords: lip, model, feature, face, characteristic
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The application relates to a method and a system for recognizing a child's face. The method comprises: in response to an acquired image, analyzing the image and determining that the image contains a portrait; extracting and analyzing the portrait to obtain a plurality of feature regions contained in the portrait; analyzing each feature region to obtain a corresponding feature model; comparing the feature models individually with the corresponding standard comparison models to obtain a plurality of comparison results; and, when all comparison results indicate a match, sending a notification instruction confirming that recognition has passed. The method and system are used for child face recognition and help to improve recognition accuracy for children's faces.
Description
Technical Field
The application relates to the technical field of biometric recognition, and in particular to a child face recognition method and system.
Background
With the continuous progress of science and technology and society's pressing need for rapid and effective automatic identity verification, biometric identification technology has developed rapidly and found wide application in recent decades. Face recognition is a particularly popular research subject, but current face recognition technology still has problems, one of which is that the recognition rate is greatly affected by age. In face recognition, the difference between the faces of different individuals is often smaller than the difference between images of the same individual captured under different conditions, which is especially common in recognition across ages. Therefore, a method that can accurately recognize children's faces is needed.
Disclosure of Invention
The application provides a method and a system for recognizing a child's face, which help to improve the accuracy of child face recognition.
In a first aspect, the present application provides a method for recognizing a child face, including:
in response to an acquired image, analyzing the image and determining that the image contains a portrait;
extracting and analyzing the portrait to obtain a plurality of feature regions contained in the portrait;
analyzing each feature region to obtain a corresponding feature model;
comparing the feature models individually with the corresponding standard comparison models to obtain a plurality of comparison results; and
when all comparison results indicate a match, sending a notification instruction confirming that recognition has passed.
With this scheme, whether the portrait in the image is consistent with the standard comparison models is determined by comparing the feature models separately. Compared with whole-face comparison, this approach reduces the influence of facial change on the comparison result and helps to improve recognition accuracy.
In a possible implementation of the first aspect, when the comparison result between a feature model and its corresponding standard comparison model is inconsistent, the method further includes:
performing primary deformation processing on the feature model, where the primary deformation processing includes proportional enlargement, proportional reduction, and rotation;
acquiring the generation time of the standard comparison model corresponding to the feature model, recorded as a first generation time;
acquiring the generation time of the image containing the portrait, recorded as a second generation time;
calculating the time length between the first generation time and the second generation time;
selecting a suitable deformation model according to the time length and performing secondary deformation processing on the feature model with the deformation model; and
comparing the feature model after the secondary deformation processing with the corresponding standard comparison model.
With this scheme, applying appropriate deformation processing to the feature model reduces the influence of elapsed time on the comparison result and improves recognition accuracy.
In a possible implementation of the first aspect, the process of acquiring the feature regions further includes:
acquiring a plurality of preset recognition feature models; and
identifying the feature regions on the portrait according to the preset recognition feature models.
With this scheme, obtaining the feature regions through preset recognition feature models improves the accuracy of feature region identification.
In a second aspect, the present application provides a method for recognizing a child face, including:
in response to an acquired image, analyzing the image and determining that the image contains a portrait;
extracting and analyzing the portrait to obtain a plurality of feature regions contained in the portrait;
analyzing each feature region to obtain a corresponding feature model;
comparing the feature models individually with the corresponding standard comparison models to obtain a plurality of similarity values; and
when every similarity value meets or exceeds a first set threshold, or when the weighted combination of all similarity values meets or exceeds a second set threshold, sending a notification instruction confirming that recognition has passed.
With this scheme, whether the portrait in the image is consistent with the standard comparison models is determined by comparing the feature models separately. Compared with whole-face comparison, this approach reduces the influence of facial change on the comparison result and helps to improve recognition accuracy.
In a possible implementation of the second aspect, the feature regions obtained in the step of extracting and analyzing the portrait include a lip region, and the step specifically includes:
generating a transverse middle axis and a longitudinal middle axis based on the relative positions of the upper lip and the lower lip, where the upper lip and the lower lip are symmetric about the transverse middle axis, and the left and right lobes of the upper lip and of the lower lip are symmetric about the longitudinal middle axis;
delimiting, in the lip region image, a plurality of perilip image regions that together surround the lip region to generate perilip image information, and redefining the lip region based on the size of the delimited perilip regions; the perilip image regions are feature regions and include a perilip image region at the lower lip edge, a perilip image region at the upper lip edge, a perilip image region at the left lip edge, and a perilip image region at the right lip edge; and
acquiring overall lip surface information and lip print information from the lip surface region enclosed by the perilip image regions, where the overall lip surface information includes the color information and gloss information of the lips.
With this scheme, because most people's lips are symmetric both vertically and horizontally, feature points are picked up, as preset, based on the relative positions of the upper and lower lips to generate the transverse and longitudinal middle axes. For the perilip image regions obtained by image recognition, the recognition method may delimit the lip boundary by the contrast between the lip color and the surrounding skin. The perilip image regions respectively contain the upper-edge vertex, lower-edge vertex, left-edge vertex, and right-edge vertex of the lips, which facilitates later distance judgments on features of the lip surface. Lip shape and lip gloss are specific to each person, and lip prints, like fingerprints, differ clearly between individuals. Compared with fingerprints and facial lines, lip prints are longer, more widely spaced, deeper, and wider, with obvious contrast, so a vision system can capture them more accurately.
In a possible implementation of the second aspect, when the feature regions include the lip region, the step of analyzing each feature region to obtain the corresponding feature model specifically includes:
obtaining the lip transverse width, the lip longitudinal width, and the lip contour based on the perilip image regions;
dividing the lip prints into longitudinal prints and transverse prints based on the lip print information;
dividing the longitudinal prints into four feature groups based on the transverse middle axis and the longitudinal middle axis, and ordering the longitudinal prints by their distance from the longitudinal middle axis; and
calculating, for each longitudinal print, the number of crossing points with the transverse prints and its relative distance with respect to the lip-shaped image area, and taking the number of crossing points and the relative distance as the features of that longitudinal print.
In a possible implementation of the second aspect, the step of comparing the feature models individually with the corresponding standard comparison models to obtain a plurality of similarity values includes:
traversing each piece of face information in the database as a standard comparison model, and acquiring the feature groups corresponding to the standard comparison model;
comparing the overall lip surface information of the standard comparison model with that of the feature model of the face to be recognized, and deriving a first-class similarity value from the comparison result; and
comparing the features of the feature groups and the perilip image information of the standard comparison model with those of the feature model of the face to be recognized, and deriving a second-class similarity value from the comparison result.
With this scheme, the lip transverse width, longitudinal width, and lip contour are features specific to each person. Although they deform somewhat under different facial expressions, they can still assist face recognition; the corresponding similarity threshold merely needs to be adjusted downward. As for the lip prints, although their course changes somewhat under different facial expressions, the crossing points between prints do not change, so the number of crossing points can be extracted as a lip print feature and compared against the database to derive a similarity value. Furthermore, the lips stretch approximately linearly across their surface under different expressions, so the relative position of a lip print with respect to the longitudinal middle axis can also be used as a lip print feature for comparison against the database.
In a possible implementation of the second aspect, the step of comparing the features of the feature groups and the perilip image information of the standard comparison model with those of the feature model of the face to be recognized, and deriving the second-class similarity value from the comparison result, further includes:
pairing adjacent longitudinal prints within the same feature group, and calculating the relative distance between the two prints of each pair;
calculating the ratios of the relative distances of several consecutive pairs of longitudinal prints, and taking these ratios of relative distances as feature ratios; and
comparing the standard feature ratios with the feature ratios to be compared, and deriving the second-class similarity value from the comparison result, where the standard feature ratios are the feature ratios of the feature groups corresponding to the standard comparison model, and the feature ratios to be compared are the feature ratios of the feature groups corresponding to the feature model of the face to be recognized.
With this scheme, when the lips stretch, the ratio of the relative distances of two adjacent pairs of longitudinal prints stays essentially unchanged, so this ratio can be used as a feature ratio for calculating the second-class similarity value.
In a possible implementation of the second aspect, the step of comparing the features of the feature groups of the standard comparison model with those of the feature model of the face to be recognized, and deriving the second-class similarity value from the comparison result, further includes:
comparing, group by group, the numbers of crossing points of the longitudinal prints in each feature group of the standard comparison model and of the feature model of the face to be recognized, and deriving the second-class similarity value from how closely the numbers of crossing points agree.
With this scheme, although the course of the lip prints changes somewhat under different facial expressions, and the gloss, color, and apparent depth of the lip surface change when makeup is worn, the crossing points between lip prints do not change. The number of crossing points can therefore be extracted as a lip print feature and compared against the database to derive a similarity value.
In a possible implementation of the second aspect, before the weighted calculation over all similarity values, the weights of the first-class similarity value and the second-class similarity value are adjusted based on the overall lip surface information of the face to be recognized.
In a possible implementation of the second aspect, the step of adjusting the weights of the first-class similarity value and the second-class similarity value based on the overall lip surface information of the face to be recognized includes:
comparing the color information of the face to be recognized with that of the standard comparison model, and if the difference exceeds a preset threshold, reducing the weight of the first-class similarity value and increasing the weight of the second-class similarity value;
and/or comparing the gloss information of the face to be recognized with that of the standard comparison model, and if the difference exceeds a preset threshold, reducing the weight of the first-class similarity value and increasing the weight of the second-class similarity value.
With this scheme, when the lighting conditions change or a child's lips are made up, for example with lip balm, the overall lip surface information changes markedly, and giving it a large weight would hurt recognition. The weights of the similarity values are therefore adjusted according to how much the color and gloss information differ from the corresponding information of the standard comparison model, improving recognition performance.
In a possible implementation of the second aspect, when the similarity value obtained by comparing a feature model with its corresponding standard comparison model is smaller than the first set threshold, the method further includes:
performing primary deformation processing on the feature model, where the primary deformation processing includes proportional enlargement, proportional reduction, and rotation;
acquiring the generation time of the standard comparison model corresponding to the feature model, recorded as a first generation time;
acquiring the generation time of the image containing the portrait, recorded as a second generation time;
calculating the time length between the first generation time and the second generation time;
selecting a suitable deformation model according to the time length and performing secondary deformation processing on the feature model with the deformation model; and
comparing the feature model after the secondary deformation processing with the corresponding standard comparison model.
With this scheme, applying appropriate deformation processing to the feature model reduces the influence of elapsed time on the comparison result and improves recognition accuracy.
In a third aspect, the present application provides a child face recognition apparatus, including:
the first processing unit, configured to respond to an acquired image by analyzing the image and determining that the image contains a portrait;
the second processing unit, configured to extract and analyze the portrait and acquire a plurality of feature regions contained in the portrait;
the third processing unit, configured to analyze each feature region to obtain a corresponding feature model;
the fourth comparison unit, configured to compare the feature models individually with the corresponding standard comparison models to obtain a plurality of similarity values; and
the communication unit, configured to send a notification instruction confirming that recognition has passed when every similarity value meets or exceeds a first set threshold, or when the weighted combination of all similarity values meets or exceeds a second set threshold.
In a fourth aspect, the present application provides a child face recognition system, the system comprising:
one or more memories for storing instructions; and
one or more processors, configured to call and execute the instructions from the memory, and execute the method for recognizing a child's face as described in the first aspect and any possible implementation manner of the first aspect.
In a fifth aspect, the present application provides a child face recognition system, the system comprising:
one or more memories for storing instructions; and
one or more processors, configured to call and execute the instructions from the memory, and perform the method for recognizing a child's face as described in the second aspect and any possible implementation manner of the second aspect.
In a sixth aspect, the present application provides a computer-readable storage medium comprising:
a program that, when executed by a processor, performs a method for child face recognition as described in the first aspect and any possible implementation manner of the first aspect.
In a seventh aspect, the present application provides a computer-readable storage medium, comprising:
a program which, when executed by a processor, performs a method of child face recognition as described in the second aspect and any possible implementation manner of the second aspect.
In an eighth aspect, the present application provides a computer program product, which includes program instructions, and when the program instructions are executed by a computing device, the method for recognizing a child's face as described in the first aspect and any possible implementation manner of the first aspect is executed.
In a ninth aspect, the present application provides a computer program product comprising program instructions that, when executed by a computing device, perform a method for child face recognition as described in the second aspect and any possible implementation manner of the second aspect.
In a tenth aspect, the present application provides a system on a chip comprising a processor configured to perform the functions recited in the preceding aspects, such as generating, receiving, transmitting, or processing data and/or information recited in the preceding methods.
The system on a chip may consist of a chip alone, or may include a chip together with other discrete devices.
In one possible design, the system-on-chip further includes a memory for storing necessary program instructions and data. The processor and the memory may be decoupled, disposed on different devices, connected in a wired or wireless manner, or coupled on the same device.
Drawings
Fig. 1 is a flow chart of a method for recognizing a child face according to an embodiment of the present disclosure;
fig. 2 is a block flow diagram of S502 provided in an embodiment of the present application;
fig. 3 is a block diagram of a flow of S503 provided in an embodiment of the present application;
fig. 4 is a block flow diagram of S504 provided in the embodiments of the present application;
fig. 5 is a block diagram of a process of S5043 provided in the embodiment of the present application.
Detailed Description
The technical solution of the present application will be described in further detail below with reference to the accompanying figs. 1-5.
Face recognition mainly comprises five stages: image acquisition, face detection, image preprocessing, feature extraction, and matching and recognition. These five stages are first briefly introduced.
Image acquisition captures a face image through a camera lens; for example, when a user is within the shooting range of the acquisition device, the device automatically searches for and captures the user's face image.
In practice, face detection mainly serves as preprocessing for face recognition: the position and size of the face are accurately calibrated in the image, and the useful information in it is selected.
Image preprocessing covers light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, filtering, noise removal, sharpening, and the like of the captured images; based on the face detection result, it processes the image and ultimately serves feature extraction.
Image feature extraction, also called face characterization, is the process of feature modeling of a face. Its purpose is to extract features usable by a face recognition system (visual features, pixel statistical features, face image transform coefficient features, face image algebraic features, and the like) and to provide sufficient reference data for subsequent matching and recognition.
Matching and recognition searches and matches the extracted feature data of the face image against feature templates stored in a database; a threshold is set, and when the similarity exceeds the threshold, the matching result is output. Face recognition thus compares the face features to be recognized with the stored face feature templates and judges the identity of the face by the degree of similarity.
Throughout the recognition process, the core task is to judge whether the captured portrait and the stored portrait belong to the same person. For children, however, the face changes relatively quickly, and judgment errors easily occur during this process.
The embodiment of the application provides a child face recognition method. During recognition, partial areas of the portrait are extracted to form feature regions, and the feature regions, rather than the whole portrait, are compared with the stored comparison models, thereby mitigating the negative influence of facial change on the recognition result.
Referring to fig. 1, a method for recognizing a child face disclosed in an embodiment of the present application includes the following steps:
S101, in response to an acquired image, analyzing the image and determining that the image contains a portrait;
S102, extracting and analyzing the portrait to obtain a plurality of feature regions contained in the portrait;
S103, analyzing each feature region to obtain a corresponding feature model;
S104, comparing the feature models individually with the corresponding standard comparison models to obtain a plurality of comparison results;
and S105, when all comparison results indicate a match, sending a notification instruction confirming that recognition has passed.
The child face recognition method disclosed in the embodiment of the application runs on a server, an intelligent terminal, or the like: a camera or similar device is responsible for capturing images, the images are sent to the server or intelligent terminal for processing, and the server or intelligent terminal finally gives the judgment result.
Specifically, in step S101, when the server or intelligent terminal receives an image, it responds by analyzing the image; the purpose of the analysis is to determine whether there is a portrait in the image.
If a portrait is present, the subsequent steps proceed; if not, a re-acquisition instruction is sent until a portrait appears.
In some possible implementations, the re-acquisition instruction is delivered by voice or text; for example, a speaker or a display screen is placed beside the camera, so the speaker can play a prompt and text can be shown on the display screen.
In other possible implementations, an indicator light is installed on the camera or similar device, and the color of the indicator light provides reference information; for example, red indicates that acquisition is in progress, and green indicates that acquisition succeeded.
Step S102 is then performed, in which the portrait in the image is extracted and analyzed. The purpose of the extraction is to separate the portrait from the image so that only the portrait, rather than the entire image, needs to be analyzed.
The analysis mainly provides reference material for the subsequent comparison. During analysis, light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, filtering, noise removal, sharpening, and the like are applied to the image, in order to remove some interference factors and facilitate the subsequent recognition process.
In addition, during the analysis the portrait is decomposed to obtain the feature regions it contains. The feature regions belong to the portrait, but in the subsequent comparison they are compared individually and thereby influence the final recognition result.
Then, step S103 is executed, in which each feature region is analyzed to obtain a corresponding feature model.
It should be understood that an image generally has three major kinds of features: color, texture, and shape, and in principle all three can be used. A captured image can be regarded as composed of pixels, each of which has a color; however, considering differences between acquisition devices, image processing pipelines, and algorithms, color is generally not used as a reference element for recognition.
Texture includes both the texture of an object's surface in the usual sense, that is, the uneven grooves the surface exhibits, and the color patterns on a smooth surface, more commonly called motifs. For a portrait, shape can be understood as the face shape, mouth shape, eye shape, and so on, which differ from person to person.
By analyzing a feature region, the corresponding feature model, covering texture and shape, can be obtained; the feature model can then be compared with its corresponding standard comparison model to make the final judgment.
Step S104 is then executed, in which the feature models are individually compared with the corresponding standard comparison models to obtain a plurality of comparison results. The standard comparison models are pre-stored in the system and are constructed from face images previously collected from the user.
The comparison between a feature model and its corresponding standard comparison model has two possible results, consistent and inconsistent. If all results are consistent, the process proceeds to step S105, in which the server or intelligent terminal sends a notification instruction confirming that recognition has passed, that is, the portrait in the acquired image and the pre-stored portrait belong to the same person.
If one or more inconsistent results are obtained, the recognition process terminates at step S104. The main aim is to guarantee the accuracy of the recognition result: a single inconsistent result means the portrait in the captured image may not belong to the same person as the pre-stored portrait, so the recognition process must stop and a failure notification is issued.
Overall, to address the difficulty that a child's face changes quickly, the method replaces whole-face comparison with local comparison. Even as a child's face changes, local regions such as the eye shape, nose shape, and lip shape do not change greatly, so whether the captured portrait and the corresponding standard comparison models belong to the same person can be determined through several local comparisons, which effectively safeguards recognition accuracy.
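As a minimal sketch of this first-aspect decision logic (assumed Python, with `compare` standing in for whatever per-region comparison the system uses):

```python
def recognize(feature_models, standard_models, compare):
    """Pass only when every local comparison is consistent (steps S104-S105).

    `feature_models` and `standard_models` are aligned sequences of the
    extracted feature models and their pre-stored standard comparison models;
    `compare` returns True when a pair is judged consistent.
    """
    results = [compare(fm, sm) for fm, sm in zip(feature_models, standard_models)]
    return all(results)  # a single inconsistent result fails the recognition
```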
It should be understood that children grow quickly and their faces change quickly, so inconsistent comparisons may occur; in that case, the feature model needs appropriate processing to reduce the influence of facial change on recognition accuracy.
As a specific implementation of the child face recognition method, when the comparison result between a feature model and its corresponding standard comparison model is inconsistent, the method further includes:
S201, performing primary deformation processing on the feature model, where the primary deformation processing includes proportional enlargement, proportional reduction, and rotation;
and S202, comparing the feature model after the secondary deformation processing (described below) with the corresponding standard comparison model.
Specifically, the feature model is adjusted appropriately according to the actual situation. The specific adjustment requires establishing a suitable deformation model, which is generated statistically from collected big data and reflects the influence of time on a child's face.
When introducing the deformation model, the influence of the elapsed time is also considered. The specific operation is as follows:
S301, acquiring the generation time of the standard comparison model corresponding to the feature model, recorded as a first generation time;
S302, acquiring the generation time of the image containing the portrait, recorded as a second generation time;
S303, calculating the time length between the first generation time and the second generation time;
and S304, selecting a suitable deformation model according to the time length.
For example, there are several deformation models, each with an applicable time period, and a suitable one can be selected according to the computed time length.
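As an illustration, a deformation model could be selected from a table keyed by the elapsed time, roughly as in the following sketch; the time brackets and model labels are hypothetical placeholders, not values taken from this application:

```python
from datetime import datetime

# Hypothetical deformation models, each applicable to a range of elapsed
# days between model generation (first generation time) and image capture
# (second generation time). The bracket boundaries are illustrative only.
DEFORMATION_MODELS = [
    {"max_days": 180, "label": "short-term child growth model"},
    {"max_days": 365, "label": "one-year child growth model"},
    {"max_days": 730, "label": "two-year child growth model"},
]

def select_deformation_model(first_generation: datetime, second_generation: datetime):
    """Pick the first model whose applicable period covers the elapsed time."""
    elapsed_days = abs((second_generation - first_generation).days)
    for model in DEFORMATION_MODELS:
        if elapsed_days <= model["max_days"]:
            return model
    return DEFORMATION_MODELS[-1]  # fall back to the widest model
```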
A single deformation process uses three modes: proportional enlargement, proportional reduction, and rotation. Many other deformation modes exist, but the more types are used, the more uncontrollable factors appear, so a single deformation process should be kept within a reasonable range.
Accordingly, the embodiment of the application provides proportional enlargement, proportional reduction, and rotation: enlargement and reduction adjust the size of the feature model, and rotation adjusts its tilt angle. Through these operations, the feature model after primary deformation processing can be made to overlap, fully or partially, with the corresponding standard comparison model; if the proportion of the overlapping part meets the requirement, the deformed feature model can be considered similar to the corresponding standard comparison model.
Meanwhile, to further improve recognition accuracy, the same primary deformation operations must be used for all feature regions belonging to the same portrait; for example, if one feature model is processed with proportional enlargement and rotation, the other feature models must also be processed with proportional enlargement and rotation.
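A primary deformation of this kind can be sketched as a single affine transform followed by an overlap check; this is an assumed illustration using OpenCV, not the application's own implementation:

```python
import cv2
import numpy as np

def primary_deformation(feature_mask: np.ndarray, scale: float, angle_deg: float) -> np.ndarray:
    """Proportionally scale and rotate a binary feature-region mask about its center."""
    h, w = feature_mask.shape
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, scale)
    return cv2.warpAffine(feature_mask, m, (w, h))

def overlap_ratio(a: np.ndarray, b: np.ndarray) -> float:
    """Proportion of the overlapping part (intersection over union)."""
    inter = np.logical_and(a > 0, b > 0).sum()
    union = np.logical_or(a > 0, b > 0).sum()
    return float(inter) / float(union) if union else 0.0

# The same (scale, angle_deg) pair would be applied to every feature model of
# the same portrait, and the deformed model judged similar to its standard
# comparison model when overlap_ratio(...) meets the requirement.
```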
As a specific implementation of the child face recognition method, the following steps are used in the process of acquiring the feature regions:
S401, acquiring a plurality of preset recognition feature models;
and S402, identifying the feature regions on the portrait according to the preset recognition feature models.
Specifically, the feature regions are selected from the portrait using the preset recognition feature models, which cover, for example, the eye shape, mouth shape, nose shape, face contour, ears, or other parts. The feature regions can thus be selected in a targeted manner, rather than randomly by the server or intelligent terminal during processing.
Compared with random selection, using preset recognition feature models restricts the selection to a certain range. It should be understood that regions within the coverage of the preset recognition feature models carry more distinguishing factors, or have higher recognition value.
When a preset recognition feature model is used, a blur-based algorithm can serve to identify a feature region. Common blur algorithms such as mean blur and Gaussian blur follow the same basic process: for each pixel, compute the weighted accumulated sum of a certain feature value over the related pixels in a neighborhood around it, and take that as the result value.
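For example, a Gaussian blur of this kind is a one-liner in OpenCV; the file name and kernel parameters below are illustrative:

```python
import cv2

# Each output pixel is the weighted accumulated sum of its 5x5 neighborhood,
# with Gaussian weights; smoothing suppresses noise before region matching.
portrait = cv2.imread("portrait.png", cv2.IMREAD_GRAYSCALE)
smoothed = cv2.GaussianBlur(portrait, ksize=(5, 5), sigmaX=1.0)
```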
After a feature region matching a preset recognition feature model has been identified, the feature model is generated from the data in that region, and the subsequent face recognition process can proceed.
Referring to fig. 1, an embodiment of the present application further provides a child face recognition method, including the following steps:
S501, in response to an acquired image, analyzing the image and determining that the image contains a portrait;
S502, extracting and analyzing the portrait to obtain a plurality of feature regions contained in the portrait;
S503, analyzing each feature region to obtain a corresponding feature model;
S504, comparing the feature models individually with the corresponding standard comparison models to obtain a plurality of similarity values;
and S505, when every similarity value meets or exceeds a first set threshold, or when the weighted combination of all similarity values meets or exceeds a second set threshold, sending a notification instruction confirming that recognition has passed.
Specifically, steps S501 to S505 differ from steps S101 to S105 in the final determination method: in step S505, similarity is used as the basis for the judgment, rather than requiring each feature model to match its corresponding standard comparison model completely.
It should be understood that various interference factors exist during portrait acquisition, and some feature models also change by small amounts within an allowable range, so these factors should be taken into account during recognition.
For the similarity judgment, a feature model can be expressed as points; for example, texture and shape can be expressed as point sets. Points from the two point sets are judged by their degree of coincidence: two points belonging to the same position are considered coincident if they overlap or if the distance between them is within an allowable range, and the two point sets can be considered the same when the proportion of coincident points exceeds a set value.
Accordingly, the relation between the feature model and the standard comparison model corresponding to two such point sets is judged in the same way. Since there are multiple feature models, a similarity value is introduced for evaluation: each feature model found similar to its corresponding standard comparison model contributes a similarity value of 1.
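A coincidence check over two point sets might look like the following sketch (the distance tolerance and set value are assumed, not specified by the application):

```python
import numpy as np

def coincident_ratio(points_a, points_b, tol=2.0):
    """Fraction of points in set A that coincide with some point of set B,
    where two points coincide if their distance is within the allowable range."""
    if not points_a or not points_b:
        return 0.0
    b = np.asarray(points_b, dtype=float)
    hits = sum(
        np.linalg.norm(b - np.asarray(p, dtype=float), axis=1).min() <= tol
        for p in points_a
    )
    return hits / len(points_a)

# The two point sets are considered the same, contributing a similarity value
# of 1, when the coincident proportion exceeds a set value, for example
# coincident_ratio(model_points, standard_points) >= 0.9.
```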
In one recognition process, if the maximum total similarity value is 10 and the similarity threshold is set to 9, then a total of 9 or 10 obtained during comparison means the captured portrait and the stored portrait are considered to belong to the same person, while a total below 9 means they are considered not to belong to the same person.
In addition, a weighted calculation can be used: different feature models are given different weights, and when a feature model is similar to its corresponding standard comparison model, it enters the subsequent weighted calculation. For example, suppose there are ten feature models, each similar to its corresponding standard comparison model, with weights of 5%, 8%, 7%, 14%, 7%, 32%, 5%, 5%, 9%, and 8% respectively; the formula is:
1*5% + 1*8% + 1*7% + 1*14% + 1*7% + 1*32% + 1*5% + 1*5% + 1*9% + 1*8%;
a value is obtained after the calculation, and if it is greater than or equal to the second set threshold, the captured portrait and the stored portrait are considered to belong to the same person; otherwise they are not.
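The weighted decision can be written directly from this example; the flags below mark which feature models were judged similar (1) or not (0), and the 0.9 second set threshold is only illustrative:

```python
# Weights of the ten feature models from the example above (they sum to 1.0).
WEIGHTS = [0.05, 0.08, 0.07, 0.14, 0.07, 0.32, 0.05, 0.05, 0.09, 0.08]

def weighted_pass(flags, second_threshold=0.9):
    """Weighted sum of per-model similarity flags against the second set threshold."""
    score = sum(w * f for w, f in zip(WEIGHTS, flags))
    return score >= second_threshold

print(weighted_pass([1] * 10))                        # all similar: score 1.0 -> True
print(weighted_pass([1, 1, 1, 1, 1, 0, 1, 1, 1, 1]))  # missing the 32% model -> False
```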
It should be understood that children grow quickly and their faces change quickly, so inconsistent comparisons may occur; in that case, the feature model needs appropriate processing to reduce the influence of facial change on recognition accuracy.
Generally, a face recognition system extracts more than three thousand statistical features for each face it processes, and among facial parts, the lips move quite visibly when different expressions are made. Ordinary daily makeup, and even heavy makeup, including changes to the shape and color of hair and eyebrows, has little influence on the recognition performance of a face recognition system, because such fine changes do not occlude the key facial organs. In other words, these changes do not disturb the statistical characteristics around the key points, the system's recognition conditions are still satisfied, and a recognition result can still be obtained quickly.
However, if features near key points of the face are changed on a large scale, for example by exaggerated makeup or contouring (using shading and highlighting techniques, as in old-age stage makeup, to alter the apparent height of the eyebrows, nose bridge, and cheekbones), the statistical feature points the recognition system relies on change on a large scale, which significantly affects recognition, and the system may fail to recognize the person.
For children, however, stage makeup and the like usually emphasizes the eyebrows and hair and rarely gives the lips an exaggerated look, so face recognition can be assisted by the lip region. Each person's lip shape, lip color, and even lip prints are distinctive. Lip prints are in fact the folds of the lip skin, and in theory they show individual differences just as fingerprints do, so they can serve as a feature for individual or identity recognition; indeed, lip print identification is already used as a means of case investigation in some countries. According to the clarity and position of their features, lip prints are divided into three types, primary lines, secondary lines, and temporary lines, of which the first two are generally considered significant for person identification.
As a specific implementation of the child face recognition method, the feature regions include a lip region. Referring to fig. 2, S502 may include the following steps:
S5021, generating a transverse middle axis and a longitudinal middle axis based on the relative positions of the upper lip and the lower lip, where the upper lip and the lower lip are symmetric about the transverse middle axis, and the left and right lobes of the upper lip and of the lower lip are symmetric about the longitudinal middle axis;
S5022, delimiting, in the lip region image, a plurality of perilip image regions that together surround the lip region to generate perilip image information, and redefining the lip region based on the size of the delimited perilip regions; the perilip image regions are feature regions and include a perilip image region at the lower lip edge, a perilip image region at the upper lip edge, a perilip image region at the left lip edge, and a perilip image region at the right lip edge;
and S5023, acquiring overall lip surface information and lip print information from the lip surface region enclosed by the perilip image regions, where the overall lip surface information includes the color information and gloss information of the lips.
For example, the vision system can capture the perilip area in various ways. One or more lip regions are detected while the image of the user's face is acquired, and the region size of the lips can be recalculated from the size of the lip-based cropped picture. In one embodiment, an integro-differential operator, a Hough circle detector, or a Hessian blob detector is used to detect the perilip boundary. Similarly, a filter-based algorithm may be used to detect the outer and inner edges of the upper and lower lips, and the lip region may be further separated after the mouth opening is removed; the perilip area may also be derived by subtracting the mouth area from the captured image. In some embodiments, the lip region may be extracted from the skin region based on color or contrast, and the above-mentioned regions segmented based on their arcs.
Because most people's lips are symmetric both vertically and horizontally, feature points are picked up, as preset, based on the relative positions of the upper and lower lips to generate the transverse and longitudinal middle axes. For the perilip image regions obtained by image recognition, the recognition method may delimit the lip boundary by the contrast between the lip color and the surrounding skin. The perilip image regions respectively contain the upper-edge vertex, lower-edge vertex, left-edge vertex, and right-edge vertex of the lips, which facilitates later distance judgments on features of the lip surface. Lip shape and lip gloss are specific to each person, and lip prints, like fingerprints, differ clearly between individuals. Compared with fingerprints and facial lines, the primary and secondary lines of lip prints are longer, more widely spaced, deeper, and wider, with obvious contrast, so a vision system can capture them more accurately. The temporary lines are generally similar to ordinary skin texture, relatively shallow, and highly variable, and can be screened out of the image recognition by filtering.
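A color-contrast delimitation of the lips and derivation of the two middle axes could be sketched as follows; the HSV bounds are illustrative placeholders that a real system would calibrate to its camera and lighting:

```python
import cv2
import numpy as np

def lip_mask_and_axes(face_bgr: np.ndarray):
    """Delimit the lips by their color contrast with the surrounding skin,
    then derive the transverse/longitudinal middle axes and edge vertices."""
    hsv = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2HSV)
    # Reddish lip tones sit at both ends of the OpenCV hue range (0-180).
    mask = cv2.inRange(hsv, np.array([0, 70, 50]), np.array([12, 255, 255]))
    mask |= cv2.inRange(hsv, np.array([168, 70, 50]), np.array([180, 255, 255]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

    ys, xs = np.nonzero(mask)  # assumes the crop contains a detected mouth
    # Symmetry-based middle axes: the transverse axis separates the upper and
    # lower lips, the longitudinal axis separates the left and right lobes.
    transverse_y = int(np.median(ys))
    longitudinal_x = int(np.median(xs))

    # Edge vertices used later for distance judgments on the lip surface.
    vertices = {
        "left": (int(xs.min()), transverse_y),
        "right": (int(xs.max()), transverse_y),
        "top": (longitudinal_x, int(ys.min())),
        "bottom": (longitudinal_x, int(ys.max())),
    }
    return mask, transverse_y, longitudinal_x, vertices
```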
As a specific implementation of the child face recognition method, and referring to fig. 3, S503 may include the following steps:
S5031, obtaining the lip transverse width, the lip longitudinal width, and the lip contour based on the perilip image regions;
S5032, dividing the lip prints into longitudinal prints and transverse prints based on the lip print information;
S5033, dividing the longitudinal prints into four feature groups based on the transverse middle axis and the longitudinal middle axis, and ordering the longitudinal prints by their distance from the longitudinal middle axis;
and S5034, calculating, for each longitudinal print, the number of crossing points with the transverse prints and its relative distance with respect to the lip-shaped image area, and taking the number of crossing points and the relative distance as the features of that longitudinal print.
The lip transverse width, longitudinal width, and lip contour are features specific to each person. Although they deform somewhat under different facial expressions, they can still assist face recognition; the corresponding similarity threshold merely needs to be adjusted downward. As for the lip prints, although their course changes somewhat under different facial expressions, the crossing points between prints do not change, so the number of crossing points can be extracted as a lip print feature and compared against the database to derive a similarity value. Furthermore, the lips stretch approximately linearly across their surface under different expressions, so the relative position of a lip print with respect to the longitudinal middle axis can also be used as a lip print feature for comparison against the database.
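The per-print features named in S5034 could be computed roughly as follows, representing each print as a set of (x, y) pixels from a thinned print map; normalizing the relative distance by the lip width is an assumption made for illustration:

```python
import numpy as np

def print_features(longitudinal_prints, transverse_prints, lip_left_x, lip_right_x):
    """For each longitudinal print, count crossings with the transverse prints
    and compute its position relative to the lip-shaped image area."""
    transverse_pixels = set().union(*transverse_prints) if transverse_prints else set()
    width = float(lip_right_x - lip_left_x)
    features = []
    for line in longitudinal_prints:
        crossings = len(line & transverse_pixels)  # shared pixels = crossing points
        mean_x = float(np.mean([x for x, _ in line]))
        features.append({
            "crossings": crossings,
            "relative_distance": (mean_x - lip_left_x) / width,
        })
    return features
```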
As a specific implementation of the child face recognition method, and referring to fig. 4, S504 may include the following steps:
S5041, traversing each piece of face information in the database as a standard comparison model, and acquiring the feature groups corresponding to the standard comparison model;
S5042, comparing the overall lip surface information of the standard comparison model with that of the feature model of the face to be recognized, and deriving a first-class similarity value from the comparison result;
and S5043, comparing the features of the feature groups and the perilip image information of the standard comparison model with those of the feature model of the face to be recognized, and deriving a second-class similarity value from the comparison result.
As a specific implementation of the child face recognition method, S5043 may include the following steps (an illustrative sketch follows the list):
pairing adjacent longitudinal prints within the same feature group, and calculating the relative distance between the two prints of each pair;
calculating the ratios of the relative distances of several consecutive pairs of longitudinal prints, and taking these ratios of relative distances as feature ratios;
and comparing the standard feature ratios with the feature ratios to be compared, and deriving the second-class similarity value from the comparison result, where the standard feature ratios are the feature ratios of the feature groups corresponding to the standard comparison model, and the feature ratios to be compared are the feature ratios of the feature groups corresponding to the feature model of the face to be recognized.
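An illustrative sketch of this feature-ratio comparison (the agreement tolerance is assumed, not specified by the application):

```python
def feature_ratios(positions):
    """Distances between adjacent longitudinal prints in one feature group,
    then ratios of consecutive distances; under the roughly linear stretch
    of the lips these ratios stay nearly constant."""
    gaps = [b - a for a, b in zip(positions, positions[1:])]
    return [g2 / g1 for g1, g2 in zip(gaps, gaps[1:]) if g1 != 0]

def ratio_similarity(standard_ratios, candidate_ratios, tol=0.1):
    """Fraction of corresponding feature ratios that agree within `tol`."""
    pairs = list(zip(standard_ratios, candidate_ratios))
    if not pairs:
        return 0.0
    return sum(abs(a - b) <= tol for a, b in pairs) / len(pairs)
```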
As a specific implementation of the child face recognition method, S5043 may further include:
comparing, group by group, the numbers of crossing points of the longitudinal prints in each feature group of the standard comparison model and of the feature model of the face to be recognized, and deriving the second-class similarity value from how closely the numbers of crossing points agree.
Although the course of the lip prints changes somewhat under different facial expressions, and the gloss, color, and apparent depth of the lip surface change when makeup is worn, the crossing points between lip prints do not change. The number of crossing points can therefore be extracted as a lip print feature and compared against the database to derive a similarity value.
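A crossing-count comparison over the four feature groups might be sketched as follows (the exact-equality criterion and score definition are assumptions):

```python
def crossing_similarity(standard_groups, candidate_groups):
    """Compare, group by group, the crossing counts of matched longitudinal
    prints; the score is the fraction of prints whose counts agree exactly."""
    matched = total = 0
    for std, cand in zip(standard_groups, candidate_groups):
        for s, c in zip(std, cand):
            total += 1
            matched += int(s == c)
    return matched / total if total else 0.0
```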
As a specific implementation of the child face recognition method, when the comparison result between a feature model and its corresponding standard comparison model is inconsistent, the method further includes:
S601, performing primary deformation processing on the feature model, where the primary deformation processing includes proportional enlargement, proportional reduction, and rotation;
and S602, comparing the feature model after the secondary deformation processing (described below) with the corresponding standard comparison model.
Specifically, the feature model is adjusted appropriately according to the actual situation. The specific adjustment requires establishing a suitable deformation model, which is generated statistically from collected big data and reflects the influence of time on a child's face.
When introducing the deformation model, the influence of the elapsed time is also considered. The specific operation is as follows:
S701, acquiring the generation time of the standard comparison model corresponding to the feature model, recorded as a first generation time;
S702, acquiring the generation time of the image containing the portrait, recorded as a second generation time;
S703, calculating the time length between the first generation time and the second generation time;
and S704, selecting a suitable deformation model according to the time length.
For example, there are several deformation models, each with an applicable time period, and a suitable one can be selected according to the computed time length (see the sketch following S304 above).
A single deformation process uses three modes: proportional enlargement, proportional reduction, and rotation. Many other deformation modes exist, but the more types are used, the more uncontrollable factors appear, so a single deformation process should be kept within a reasonable range.
Accordingly, the embodiment of the application provides proportional enlargement, proportional reduction, and rotation: enlargement and reduction adjust the size of the feature model, and rotation adjusts its tilt angle. Through these operations, the feature model after primary deformation processing can be made to overlap, fully or partially, with the corresponding standard comparison model; if the proportion of the overlapping part meets the requirement, the deformed feature model can be considered similar to the corresponding standard comparison model.
Meanwhile, to further improve recognition accuracy, the same primary deformation operations must be used for all feature regions belonging to the same portrait; for example, if one feature model is processed with proportional enlargement and rotation, the other feature models must also be processed with proportional enlargement and rotation.
Before the weighted calculation over all similarity values, the weights of the first-class similarity value and the second-class similarity value are adjusted based on the overall lip surface information of the face to be recognized. This specifically comprises:
comparing the color information of the face to be recognized with that of the standard comparison model, and if the difference exceeds a preset threshold, reducing the weight of the first-class similarity value and increasing the weight of the second-class similarity value;
and/or comparing the gloss information of the face to be recognized with that of the standard comparison model, and if the difference exceeds a preset threshold, reducing the weight of the first-class similarity value and increasing the weight of the second-class similarity value.
Because the overall lip surface information changes markedly when the lighting conditions change or a child's lips are made up, for example with lip balm, giving it a large weight would hurt recognition; the weights of the similarity values are therefore adjusted according to how much the color and gloss information differ from the corresponding information of the standard comparison model, so as to improve recognition performance.
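A minimal sketch of this weight adjustment, with assumed thresholds and shift amount:

```python
def adjust_weights(w_first, w_second, color_diff, gloss_diff,
                   color_threshold=0.2, gloss_threshold=0.2, shift=0.1):
    """Shift weight from the first-class (overall lip surface) similarity to
    the second-class (lip print) similarity when the color or gloss differs
    too much from the standard comparison model."""
    if color_diff > color_threshold or gloss_diff > gloss_threshold:
        w_first, w_second = max(w_first - shift, 0.0), w_second + shift
    return w_first, w_second
```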
The embodiment of the present application further provides a child face recognition device, including:
the first processing unit, configured to respond to an acquired image by analyzing the image and determining that the image contains a portrait;
the second processing unit, configured to extract and analyze the portrait and acquire a plurality of feature regions contained in the portrait;
the third processing unit, configured to analyze each feature region to obtain a corresponding feature model;
the fourth comparison unit, configured to compare the feature models individually with the corresponding standard comparison models to obtain a plurality of similarity values;
and the first communication unit, configured to send a notification instruction confirming that recognition has passed when every similarity value meets or exceeds a first set threshold, or when the weighted combination of all similarity values meets or exceeds a second set threshold.
In one example, the units in any of the above apparatuses may be one or more integrated circuits configured to implement the above methods, such as: one or more Application Specific Integrated Circuits (ASICs), or one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), or a combination of at least two of these integrated circuit forms.
As another example, when a unit in the device is implemented by a processing element scheduling program code, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of invoking programs. As yet another example, these units may be integrated together and implemented in the form of a system-on-chip (SoC).
Various objects in the present application, such as messages, information, devices, network elements, systems, apparatuses, actions, operations, procedures and concepts, may be given names. It should be understood that these specific names do not limit the related objects; the names may vary with scenario, context or usage habit, and the technical meaning of a term in the present application should be determined mainly from the function it performs and the technical effect it achieves in the technical solution.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It should also be understood that, in the various embodiments of the present application, first, second and the like are used merely to distinguish different objects. For example, a first time window and a second time window merely denote different time windows; the designation has no effect on the time windows themselves, and first, second and the like impose no limitation on the embodiments of the present application.
It is also to be understood that the terminology and descriptions of the various embodiments herein are consistent and may be referenced against one another unless otherwise stated or logically conflicting, and that the technical features of different embodiments may be combined, based on their inherent logical relationships, to form new embodiments.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a computer-readable storage medium, which includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned computer-readable storage media comprise: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a child face recognition system, the system includes:
one or more memories for storing instructions; and
one or more processors for invoking and executing the instructions from the memory to perform the child face recognition method as described above.
Embodiments of the present application also provide a computer program product comprising instructions that, when executed, cause a device to perform the operations corresponding to the above-described methods.
Embodiments of the present application further provide a chip system, which includes a processor, and is configured to implement the functions referred to in the foregoing, for example, to generate, receive, transmit, or process data and/or information referred to in the foregoing methods.
The chip system may be formed by a chip, or may include a chip and other discrete devices.
The processor mentioned in any of the above may be a CPU, a microprocessor, an ASIC, or one or more integrated circuits for controlling the execution of a program of the above child face recognition method.
In one possible design, the system-on-chip further includes a memory for storing necessary program instructions and data. The processor and the memory may be decoupled, respectively disposed on different devices, and connected in a wired or wireless manner to support the chip system to implement various functions in the above embodiments. Alternatively, the processor and the memory may be coupled to the same device.
Optionally, the computer instructions are stored in a memory.
Alternatively, the memory is a storage unit in the chip, such as a register, a cache, and the like, and the memory may also be a storage unit outside the chip in the terminal, such as a ROM or other types of static storage devices that can store static information and instructions, a RAM, and the like.
It will be appreciated that the memory in the embodiments of the present application may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory.
The non-volatile memory may be ROM, Programmable Read Only Memory (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), or flash memory.
Volatile memory can be RAM, which acts as an external cache. Many forms of RAM are available, such as Static Random Access Memory (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
The above embodiments are preferred embodiments of the present application and do not limit its protection scope; accordingly, all equivalent changes made according to the structure, shape and principle of the present application shall fall within its protection scope.
Claims (8)
1. A child face recognition method, characterized by comprising the following steps:
responding to the acquired image, analyzing the image and determining that the image comprises a portrait;
extracting and analyzing the portrait to obtain a plurality of characteristic areas contained in the portrait;
analyzing each characteristic region to obtain a corresponding characteristic model;
traversing each piece of face information in the database as a standard comparison model, and acquiring a feature group corresponding to the standard comparison model;
comparing the lip-surface overall information corresponding to the standard comparison model with that corresponding to the feature model of the face to be recognized, and obtaining a first-class similarity value based on the comparison result;
pairing adjacent longitudinal textures in the same feature group, and calculating the relative distance between the two textures in each pair;
calculating the ratios of the relative distances of a plurality of consecutive longitudinal textures, and taking these ratios of relative distances as feature ratios;
comparing the standard feature ratio with the feature ratio to be compared, and obtaining a second-class similarity value based on the comparison result, wherein the standard feature ratio is the feature ratio of the feature group corresponding to the standard comparison model, and the feature ratio to be compared is the feature ratio of the feature group corresponding to the feature model of the face to be recognized; and
when all the similarity values are greater than or equal to a first set threshold, or when the result of the weighted calculation over all the similarity values is greater than or equal to a second set threshold, sending a notification instruction confirming the pass.
2. The child face recognition method according to claim 1, wherein the plurality of feature regions obtained in the step of extracting and analyzing the portrait include a lip region, and the step specifically comprises:
generating a transverse middle axis and a longitudinal middle axis based on the relative positions of the upper lip and the lower lip, wherein the upper lip and the lower lip are symmetrical about the transverse middle axis, and the left and right lobes of the upper lip and of the lower lip are symmetrical about the longitudinal middle axis;
defining, in the lip region image, a plurality of lip-periphery image regions surrounding the lip surface region to generate lip-periphery image information, and delimiting the lip surface region based on the size of the defined lip-periphery image regions, wherein each lip-periphery image region is a feature region, and the lip-periphery image regions comprise a lower-lip-edge region, an upper-lip-edge region, a left-lip-edge region and a right-lip-edge region; and
acquiring, from the lip surface region surrounded by the lip-periphery image regions, the lip-surface overall information and the lip-surface texture information, wherein the lip-surface overall information includes the color information and the gloss information of the lips.
3. The method according to claim 2, wherein the feature region in the step of analyzing each feature region to obtain its corresponding feature model includes the lip region, and the step specifically comprises:
obtaining a lip transverse width, a lip longitudinal width and a lip outline based on the lip-periphery image regions;
dividing the textures into longitudinal textures and transverse textures based on the lip-surface texture information;
dividing the longitudinal textures into four feature groups based on the transverse middle axis and the longitudinal middle axis, and ordering the longitudinal textures by their distance from the longitudinal middle axis; and
calculating the number of cross points between each longitudinal texture and the transverse textures, and the relative distance of each longitudinal texture with respect to the lip-shaped image region, and taking the number of cross points and the relative distance as the features of the corresponding longitudinal texture.
4. The method according to claim 1, wherein the step of comparing the standard comparison model with the features of the feature group corresponding to the feature model of the face to be recognized to obtain the second-class similarity value further comprises:
sequentially comparing the numbers of cross points of the longitudinal textures in each feature group of the standard comparison model and of the feature model of the face to be recognized, and obtaining the second-class similarity value based on the relative sizes of the numbers of cross points.
5. The method according to claim 4, wherein, when the similarity obtained by comparing a feature model with the corresponding standard comparison model is smaller than the first set threshold, the method further comprises:
performing primary deformation processing on the feature model, the primary deformation processing comprising equal-scale enlargement, equal-scale reduction and rotation;
acquiring the generation time of the standard comparison model corresponding to the feature model and recording it as a first generation time;
acquiring the generation time of the image including the portrait and recording it as a second generation time;
calculating the time length between the first generation time and the second generation time;
selecting an appropriate deformation model according to the time length and performing secondary deformation processing on the feature model using the deformation model; and
comparing the feature model subjected to the secondary deformation processing with the corresponding standard comparison model.
6. The method according to claim 5, wherein, before the weighted calculation is performed on all the similarity values, the weights of the first-class and second-class similarity values are adjusted based on the lip-surface overall information of the face to be recognized.
7. The method according to claim 6, wherein the step of adjusting the weights of the first-class and second-class similarity values based on the lip-surface overall information of the face to be recognized comprises:
comparing the color information of the face to be recognized with that of the standard comparison model and, if the difference exceeds a preset threshold, reducing the weight of the first-class similarity value and increasing the weight of the second-class similarity value;
and/or comparing the gloss information of the face to be recognized with that of the standard comparison model and, if the difference exceeds a preset threshold, reducing the weight of the first-class similarity value and increasing the weight of the second-class similarity value.
8. A child face recognition device, applied to the child face recognition method according to any one of claims 1 to 7, comprising:
a first processing unit, used for analyzing an acquired image in response to receiving it and determining that the image includes a portrait;
a second processing unit, used for extracting and analyzing the portrait to obtain a plurality of feature regions contained in the portrait;
a third processing unit, used for analyzing each feature region to obtain the feature model corresponding to it;
a fourth comparison unit, used for individually comparing the plurality of feature models with the corresponding standard comparison models to obtain a plurality of similarity values; and
a communication unit, used for sending a notification instruction confirming a pass when all the similarity values are greater than or equal to a first set threshold, or when the weighted sum of all the similarity values is greater than or equal to a second set threshold.
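To make the texture pairing and feature-ratio computation of claims 1 and 3 concrete, here is a minimal Python sketch. It assumes each longitudinal texture in a feature group can be reduced to a scalar position ordered by distance from the longitudinal middle axis, and that a simple min/max agreement score stands in for the second-class similarity; both are illustrative assumptions, not the application's method.

```python
def feature_ratios(positions: list[float]) -> list[float]:
    """positions: scalar locations of the longitudinal textures in one feature
    group, ordered by distance from the longitudinal middle axis (claim 3).
    Adjacent textures are paired, each pair's relative distance is measured,
    and ratios of consecutive distances become the feature ratios (claim 1)."""
    distances = [abs(b - a) for a, b in zip(positions, positions[1:])]
    return [d2 / d1 for d1, d2 in zip(distances, distances[1:]) if d1 > 0]

def second_class_similarity(standard: list[float], candidate: list[float]) -> float:
    """One possible agreement score between two feature-ratio sequences
    (the claims do not fix a formula): mean of min/max per paired ratio."""
    pairs = [(a, b) for a, b in zip(standard, candidate) if max(a, b) > 0]
    if not pairs:
        return 0.0
    return sum(min(a, b) / max(a, b) for a, b in pairs) / len(pairs)
```

Because the feature is a ratio of distances rather than an absolute measurement, it is insensitive to the uniform scale changes a growing child's face undergoes, which is the design rationale behind the second-class similarity.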
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110698225.9A CN113420663B (en) | 2021-06-23 | 2021-06-23 | Child face recognition method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113420663A CN113420663A (en) | 2021-09-21 |
CN113420663B true CN113420663B (en) | 2022-02-22 |
Family
ID=77716320
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110698225.9A Active CN113420663B (en) | 2021-06-23 | 2021-06-23 | Child face recognition method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113420663B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114155593B (en) * | 2022-02-09 | 2022-05-13 | 深圳市海清视讯科技有限公司 | Face recognition method, face recognition device, recognition terminal and storage medium |
CN115115737B (en) * | 2022-08-29 | 2023-01-06 | 深圳市海清视讯科技有限公司 | Method, apparatus, device, medium, and program product for identifying artifacts in thermal imaging |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109002799A (en) * | 2018-07-19 | 2018-12-14 | 苏州市职业大学 | Face identification method |
CN109063601A (en) * | 2018-07-13 | 2018-12-21 | 北京科莱普云技术有限公司 | Cheilogramma detection method, device, computer equipment and storage medium |
EP3428843A1 (en) * | 2017-07-14 | 2019-01-16 | GB Group plc | Improvements relating to face recognition |
CN111684459A (en) * | 2019-07-18 | 2020-09-18 | 深圳海付移通科技有限公司 | Identity authentication method, terminal equipment and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108416336B (en) * | 2018-04-18 | 2019-01-18 | 特斯联(北京)科技有限公司 | A kind of method and system of intelligence community recognition of face |
CN112784712B (en) * | 2021-01-08 | 2023-08-18 | 重庆创通联智物联网有限公司 | Missing child early warning implementation method and device based on real-time monitoring |
2021-06-23: Application CN202110698225.9A filed in China; patent CN113420663B granted, status Active.
Also Published As
Publication number | Publication date |
---|---|
CN113420663A (en) | 2021-09-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3321850B1 (en) | Method and apparatus with iris region extraction | |
CN110852160B (en) | Image-based biometric identification system and computer-implemented method | |
CN111539912B (en) | Health index evaluation method and equipment based on face structure positioning and storage medium | |
JP3279913B2 (en) | Person authentication device, feature point extraction device, and feature point extraction method | |
CN113420663B (en) | Child face recognition method and system | |
WO2017059591A1 (en) | Finger vein identification method and device | |
CN104077579B (en) | Facial expression recognition method based on expert system | |
CN108549886A (en) | A kind of human face in-vivo detection method and device | |
JP4414401B2 (en) | Facial feature point detection method, apparatus, and program | |
CN102902970A (en) | Iris location method | |
WO2020133863A1 (en) | Facial model generation method and apparatus, storage medium, and terminal | |
CN105279492B (en) | The method and apparatus of iris recognition | |
TWI692729B (en) | Method and device for determining pupil position | |
CN106778468A (en) | 3D face identification methods and equipment | |
CN101359365A (en) | Iris positioning method based on Maximum between-Cluster Variance and gray scale information | |
JP2007188504A (en) | Method for filtering pixel intensity in image | |
CN113436734B (en) | Tooth health assessment method, equipment and storage medium based on face structure positioning | |
CN109934118A (en) | A kind of hand back vein personal identification method | |
CN106570447A (en) | Face photo sunglass automatic removing method based on gray histogram matching | |
Abidin et al. | Iris segmentation analysis using integro-differential and hough transform in biometric system | |
CN110516661B (en) | Beautiful pupil detection method and device applied to iris recognition | |
KR20070088982A (en) | Deformation-resilient iris recognition methods | |
CN116342968B (en) | Dual-channel face recognition method and device | |
Swati et al. | Iris recognition using Gabor | |
CN115705748A (en) | Facial feature recognition system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CP01 | Change in the name or title of a patent holder | |
Address after: 518000 Guangdong Shenzhen Baoan District Xixiang street, Wutong Development Zone, Taihua Indus Industrial Park 8, 3 floor.
Patentee after: Shenzhen Haiqing Zhiyuan Technology Co.,Ltd.
Address before: 518000 Guangdong Shenzhen Baoan District Xixiang street, Wutong Development Zone, Taihua Indus Industrial Park 8, 3 floor.
Patentee before: SHENZHEN HIVT TECHNOLOGY Co.,Ltd.