CN108875646B - Method and system for double comparison and authentication of real face image and identity card registration - Google Patents



Publication number
CN108875646B
Authority
CN
China
Prior art keywords
face image
current
feature vector
registered
registered face
Prior art date
Legal status
Active
Application number
CN201810649673.8A
Other languages
Chinese (zh)
Other versions
CN108875646A
Inventor
陈荣琴
Current Assignee
Qingdao Civil Aviation Cares Co ltd
Original Assignee
Qingdao Civil Aviation Cares Co ltd
Priority date
Application filed by Qingdao Civil Aviation Cares Co ltd
Priority to application CN201810649673.8A
Publication of application CN108875646A
Application granted; publication of grant CN108875646B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Abstract

An embodiment of the application provides a method for double comparison and authentication between a real facial image and an identity card registration, comprising the following steps: acquiring a current face image and a registered face image of a target human body; performing feature extraction on the current face image and the registered face image respectively to generate corresponding texture feature vectors; correcting the texture feature vector of the registered face image with a preset correction model to generate a corrected feature vector; calculating the degree of difference between the texture feature vector of the current face image and the corrected feature vector, and judging whether the value of the degree of difference is smaller than a preset threshold; and determining that the current face image and the registered face image match when the value of the degree of difference is smaller than the preset threshold. Because the registered face image is corrected during each identity check and then matched and verified against the current face image, the degree of difference between the registered face image and the current face image is reduced, which improves the speed of automatic face recognition.

Description

Method and system for double comparison and authentication of real face image and identity card registration
Technical Field
The application relates to the technical field of image recognition, in particular to a method and a system for dual comparison authentication of a real facial image and identity card registration.
Background
At hotel check-in, airport security checks, real-name ticket purchasing and similar occasions, identity cards generally have to be registered. To prevent identity disguise, in which a person presents someone else's identity card or a forged one, it is necessary to check whether the bearer actually matches the identity card.
At present, besides manual inspection, a real face image of the card holder can be captured, a registered face image can be extracted from the identity card or remotely retrieved from a public security database using the identity card number, and the real face image and the registered face image can then be compared by automatic face recognition.
However, the registered face image on an identity card is generally re-collected only when the card is replaced or renewed, so the image is updated infrequently and lags noticeably behind reality. As time passes and the holder's appearance changes, the real face image of the bearer deviates visibly from the registered face image, which creates obstacles for automatic face recognition and comparison: for example, when the real face image deviates greatly from the registered face image, a false alarm can occur even though the person and the card genuinely match; as another example, some automatic inspection systems reduce this false alarm rate by extracting an excessive number of facial features and performing matching verification with complex correction algorithms, but this lowers the recognition speed, increases software and hardware costs, and is not conducive to rapid inspection.
Disclosure of Invention
In view of the above, the present application aims to provide a method and a system for double comparison and authentication between a real face image and an identity card registration, so as to solve the technical problems in the prior art that the low update frequency of the registered face image on an identity card makes automatic face recognition during identity verification slow and error-prone and hinders rapid inspection, thereby improving the speed and accuracy of matching the identity card registration against the real face image.
In one aspect of the present application, a method for performing double comparison authentication between a real face image and an identity card registration is provided, which includes:
acquiring a current face image and a registered face image of a target human body;
respectively extracting features of the current face image and the registered face image to generate corresponding texture feature vectors;
modifying the texture feature vector of the registered face image by using a preset modification model to generate a modified feature vector;
calculating the degree of difference between the texture feature vector of the current face image and the corrected feature vector, and judging whether the value of the degree of difference is smaller than a preset threshold;
determining that the current face image and the registered face image match when the value of the degree of difference is smaller than the preset threshold.
In some embodiments, said performing feature extraction on said current face image and said registered face image, respectively, to generate corresponding texture feature vectors, comprises:
performing edge detection on the current face image and the registered face image respectively, dividing each image into a plurality of regions according to the number of closed edges, performing texture recognition on each region, extracting face organ feature values, and generating a texture feature vector of the current face image and a texture feature vector of the registered face image.
In some embodiments, said separately performing feature extraction on said current face image and said registered face image comprises: performing edge detection on the current face image and the registered face image respectively with a Canny edge detection operator, and extracting the image regions surrounded by closed edges, which specifically comprises the following steps:
respectively convolving the current face image and the registered face image with a Gaussian mask, and smoothing the current face image and the registered face image;
calculating the gradient of each pixel point of the smoothed current face image and the smoothed registered face image by using a Sobel operator;
retaining, at each pixel point of the current face image and the registered face image, only the local maximum of the gradient intensity and deleting the other values (non-maximum suppression);
setting a threshold upper bound and a threshold lower bound of the maximum value of the gradient intensity on each pixel point of the current facial image and the registered facial image, confirming the pixel points of which the maximum values of the gradient intensity are larger than the threshold upper bound as boundaries, confirming the pixel points of which the maximum values of the gradient intensity are larger than the threshold lower bound and smaller than the threshold upper bound as weak boundaries, and confirming the pixel points of which the maximum values of the gradient intensity are smaller than the threshold lower bound as non-boundaries;
and confirming the weak boundaries connected to a boundary as boundaries, and confirming the other weak boundaries as non-boundaries.
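The steps above (double thresholding of the gradient intensity followed by hysteresis on the weak boundaries) can be sketched in Python. This is a simplified illustration using NumPy only: the Gaussian-smoothing step is omitted, and the test image, kernel handling and threshold values are assumptions for demonstration rather than details from the patent:

```python
import numpy as np

def sobel_gradients(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def double_threshold(mag, lo, hi):
    """Classify pixels: 2 = boundary, 1 = weak boundary, 0 = non-boundary."""
    labels = np.zeros(mag.shape, dtype=int)
    labels[mag > lo] = 1
    labels[mag > hi] = 2
    return labels

def hysteresis(labels):
    """Keep weak boundaries only if 8-connected to a strong boundary."""
    strong = labels == 2
    weak = labels == 1
    changed = True
    while changed:
        changed = False
        grown = strong.copy()          # dilate 'strong' by one pixel
        grown[1:, :] |= strong[:-1, :]
        grown[:-1, :] |= strong[1:, :]
        grown[:, 1:] |= strong[:, :-1]
        grown[:, :-1] |= strong[:, 1:]
        grown[1:, 1:] |= strong[:-1, :-1]
        grown[:-1, :-1] |= strong[1:, 1:]
        grown[1:, :-1] |= strong[:-1, 1:]
        grown[:-1, 1:] |= strong[1:, :-1]
        promote = weak & grown
        if promote.any():
            strong |= promote
            weak &= ~promote
            changed = True
    return strong

# A vertical step edge: left half dark, right half bright.
img = np.zeros((8, 8))
img[:, 4:] = 100.0
mag = sobel_gradients(img)
labels = double_threshold(mag, lo=50.0, hi=200.0)
edges = hysteresis(labels)
print(edges.any())  # the step edge survives as a boundary
```

The loop in `hysteresis` repeats the promotion step until it stabilizes, so a chain of weak pixels touching one strong pixel is kept end to end.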
In some embodiments, the predetermined modification model comprises:
and a parameter group (α 1, α 2, α 3 … … α n) for correcting the texture feature vector of the registered face image.
In some embodiments, the modifying the texture feature vector of the registered face image by using a preset modification model to generate a modified feature vector includes:
the texture feature vectors (D1, D2, D3 … … Dn) of the registered face images are corrected using a preset correction model (α 1, α 2, α 3 … … α n) to generate corrected feature vectors (D1 × 1, D2 × 1+ α 2, D3 × 1+ α 3) … … Dn (1+ α n)).
In some embodiments, the degree of difference between the texture feature vector of the current face image and the corrected feature vector is calculated with the following formula, and whether its value is smaller than a preset threshold is judged:
(Formula rendered as an image in the original publication: Figure BDA0001704583720000031.)
where the texture feature vector of the current face image is (R1, R2, R3, …, Rn) and I is the degree of difference.
In some embodiments, when the value of the degree of difference is smaller than the preset threshold, the method further comprises updating the correction model with the following formula:
(Formula rendered as an image in the original publication: Figure BDA0001704583720000032.)
where αi^t denotes the correction parameter after the current round of updating, αi^(t-1) denotes the correction parameter before the current round of updating, and i ranges from 1 to n.
In some embodiments, each correction parameter is constrained to satisfy a preset threshold range, the preset threshold range being (0.5, 1.5).
In some embodiments, the parameter set in the correction model has an associated count-weight parameter set (δ1, δ2, δ3, …, δn). When the texture feature vector (D1, D2, D3, …, Dn) of the registered face image is corrected, the generated corrected feature vector is (D1(1+δ1·α1), D2(1+δ2·α2), D3(1+δ3·α3), …, Dn(1+δn·αn)), where each value in the count-weight parameter set (δ1, δ2, δ3, …, δn) ranges from 0.5 to 1.5 and gradually increases with the number of corrections.
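The count-weighted variant can be sketched the same way. The patent states only that each δ lies between 0.5 and 1.5 and grows with the number of corrections; the linear growth schedule `delta_for_round` below is an assumed placeholder for whatever schedule an implementation would actually use:

```python
def apply_weighted_correction(d, alpha, delta):
    """Corrected vector (D1(1 + d1*a1), ..., Dn(1 + dn*an))."""
    return [di * (1.0 + de * ai) for di, ai, de in zip(d, alpha, delta)]

def delta_for_round(t, start=0.5, step=0.1, cap=1.5):
    """Assumed linear schedule for the count weight delta:
    begins at 0.5 and saturates at 1.5 as corrections accumulate."""
    return min(cap, start + step * t)

d = [32.0, 14.5]      # registered-image features (illustrative)
alpha = [0.10, 0.10]  # correction parameters (illustrative)
early = apply_weighted_correction(d, alpha, [delta_for_round(0)] * 2)
late = apply_weighted_correction(d, alpha, [delta_for_round(20)] * 2)
print(early, late)  # the same alpha shifts the vector more in later rounds
```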
In another aspect of the present application, a system for double comparison and authentication between a real face image and an identity card registration is provided, comprising:
an image acquisition module for acquiring a current face image and a registered face image of a target human body;
the texture feature vector generation module is used for respectively extracting features of the current face image and the registered face image and generating corresponding texture feature vectors;
the texture feature vector correction module is used for correcting the texture feature vector of the registered face image by using a preset correction model to generate a corrected feature vector;
the difference degree determining module is used for calculating the difference degree of the texture feature vector of the current face image and the correction feature vector and judging whether the value of the difference degree is smaller than a preset threshold value or not;
a result output module for determining that the current face image and the registered face image match when the value of the degree of dissimilarity is less than a preset threshold value.
According to the method and system for double comparison and authentication between a real facial image and an identity card registration, the registered facial image is corrected by the correction model at each authentication and then matched and verified against the current facial image, and whenever the identity check confirms that the person matches the card, the correction model is updated with the currently acquired real facial image. This reduces the difference between the registered facial image and the current facial image and increases the speed of automatic facial recognition.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 is a flowchart of a method for authenticating a double-comparison between a real facial image and an identity card registration according to a first embodiment of the present application;
fig. 2 is a flowchart of a method for authenticating a double-comparison between a real face image and an identity card registration according to a second embodiment of the present application;
fig. 3 is a structural diagram of a real facial image and identification card registration double-comparison authentication system according to a third embodiment of the present application;
fig. 4 is a schematic diagram of identity verification using the authentication system for dual comparison of real facial image and identity card registration according to the fourth embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, in the present application, the embodiments and features of the embodiments may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a flowchart of a method for performing double comparison and authentication between a real face image and an identity card registration according to an embodiment of the present application. As can be seen from the figure, the method for authenticating the registration of the real face image and the identity card by double comparison provided by the embodiment includes the following steps:
s101: a current face image and a registered face image of a target human body are acquired.
In this embodiment, the current facial image of the target human body may be captured by an image acquisition device. For example, at hotel check-in, airport security checks, real-name ticket booking and the like, the current facial image of the target human body may be captured by a camera. Since a certificate such as an identity card must be shown on these occasions, a registered facial image may be extracted from the identity card, or the registered facial image of the target human body may be remotely retrieved from a public security database using the identity card number. In this embodiment, the target human body is the human body undergoing identity verification; for example, if human body A is being verified, human body A is the target human body. It should be noted that the above examples illustrate how the current face image and the registered face image of the target person may be acquired and should not be construed as limiting the technical solution of the present application.
S102: and respectively extracting the features of the current face image and the registered face image to generate corresponding texture feature vectors.
In the present embodiment, once the current face image and the registered face image of the target human body are acquired, feature extraction is performed on both images to generate the corresponding texture feature vectors. Specifically, each facial image may be divided into a plurality of regions according to the number of closed edges found by an edge detection algorithm; texture recognition is then performed on each region, face organ feature values are extracted, and the texture feature vector of the current facial image and the texture feature vector of the registered facial image are generated. The specific method of dividing the face image into regions according to the number of closed edges is described in the second embodiment below and is not repeated here. In this embodiment, the texture recognition of each region may use a prior-art face recognition method to recognize the corresponding face organ and then extract its feature values, such as eyebrow spacing, eye width, eye length, nose-wing width and lip thickness; the texture feature vector of the current face image and that of the registered face image are generated from these feature values. In this embodiment, the texture feature vector of the current face image may be denoted (R1, R2, R3, …, Rn) and the texture feature vector of the registered face image (D1, D2, D3, …, Dn).
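The organ feature values can be assembled into a texture feature vector by fixing a feature order, so that vectors from the current and registered images stay comparable position by position. A small sketch with hypothetical feature names and values:

```python
# Fixed order keeps R_i and D_i referring to the same measurement.
FEATURE_ORDER = ["eyebrow_spacing", "eye_width", "eye_length",
                 "nose_wing_width", "lip_thickness"]

def to_feature_vector(measurements):
    """Flatten named organ measurements into an ordered vector (R1, ..., Rn)."""
    return [float(measurements[name]) for name in FEATURE_ORDER]

current = {"eyebrow_spacing": 21.0, "eye_width": 32.0, "eye_length": 11.5,
           "nose_wing_width": 40.2, "lip_thickness": 14.5}
print(to_feature_vector(current))  # [21.0, 32.0, 11.5, 40.2, 14.5]
```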
S103: and correcting the texture feature vector of the registered face image by using a preset correction model to generate a corrected feature vector.
In this embodiment, the registered face image is the face image collected when the target human body applied for the identity card and is not updated afterwards; with growth, makeup and other factors, it can differ considerably from the current face image, which hinders identity verification. The texture feature vector of the registered face image may therefore be corrected to generate a corrected feature vector better suited for identity verification. Specifically, the preset correction model may comprise a parameter set (α1, α2, α3, …, αn) for correcting the texture feature vector of the registered face image, and the generated corrected feature vector is (D1(1+α1), D2(1+α2), D3(1+α3), …, Dn(1+αn)). As described later, this correction parameter set starts from an initial assignment and is updated with the currently acquired real facial image each time the person is confirmed to match the card, so that the correction model, updated over successive checks, compensates for the fact that the registered facial image itself is never updated.
S104: and calculating the difference between the texture feature vector of the current facial image and the correction feature vector, and judging whether the value of the difference is smaller than a preset threshold value.
In this embodiment, the degree of difference between the texture feature vector of the current face image and the corrected feature vector may be calculated by the following formula:
(Formula rendered as an image in the original publication: Figure BDA0001704583720000071.)
where I is the degree of difference between the texture feature vector of the current face image and the corrected feature vector. Whether the corrected registered face image matches the current face image is judged by checking whether the value of I is smaller than a preset threshold.
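Since the patent's distance formula appears only as an embedded image, the sketch below stands in a Euclidean distance for the degree of difference I; the vectors and the threshold are illustrative:

```python
import math

def difference_degree(r, c):
    """Dissimilarity I between the current-image vector R and the corrected
    vector C. A Euclidean distance is used purely as a stand-in for the
    patent's (image-only) formula."""
    return math.sqrt(sum((ri - ci) ** 2 for ri, ci in zip(r, c)))

def is_match(r, c, threshold):
    """Match when the degree of difference falls below the preset threshold."""
    return difference_degree(r, c) < threshold

r = [33.0, 14.3, 44.0]    # current face image features
c = [33.6, 14.21, 44.22]  # corrected registered-image features
print(is_match(r, c, threshold=1.0))  # small distance, so a match
```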
S105: determining that the current face image and the registered face image match when the value of the degree of dissimilarity is less than a preset threshold value.
In this embodiment, when the degree of difference I between the texture feature vector of the current face image and the corrected feature vector is smaller than the preset threshold, it is determined that the current face image matches the registered face image, and the identity verification of the target human body is therefore deemed to pass.
When the value of the degree of difference is smaller than the preset threshold, the correction model is updated with the following formula:
(Formula rendered as an image in the original publication: Figure BDA0001704583720000072.)
where αi^t denotes the correction parameter after the current round of updating, αi^(t-1) denotes the correction parameter before the current round of updating, and i ranges from 1 to n.
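The update formula likewise survives only as an image. As a hedged illustration of the idea (nudging each parameter toward the ratio implied by the newly verified current image), one assumed update rule might look like this; the rule and the rate constant are inventions for demonstration, not the patent's formula:

```python
def update_parameters(alpha_prev, r, d, rate=0.5):
    """Move each correction parameter part way toward Ri/Di - 1, the value
    at which Di(1+ai) would exactly reproduce the verified current feature.
    This specific rule is an assumption."""
    return [a + rate * ((ri / di - 1.0) - a)
            for a, ri, di in zip(alpha_prev, r, d)]

alpha = [0.0, 0.0]
r, d = [33.0, 15.0], [30.0, 15.0]  # current vs originally registered features
alpha = update_parameters(alpha, r, d)
print(alpha)  # first parameter drifts toward 0.1, second stays at 0.0
```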
According to the method for double comparison and authentication between a real face image and an identity card registration provided by this embodiment, the registered face image is corrected during the identity check and then matched and verified against the current face image, reducing the degree of difference between the registered face image and the current face image and thereby improving the speed of automatic face recognition.
Fig. 2 is a flowchart of the method for double comparison and authentication between a real facial image and an identity card registration in the second embodiment of the present application. In this embodiment, edge detection may be performed on the current face image and the registered face image with a Canny edge detection operator, and the image regions surrounded by closed edges are extracted; this specifically comprises:
s201: and respectively convolving the current face image and the registered face image with a Gaussian mask, and smoothing the current face image and the registered face image.
S202: and calculating the gradient of each pixel point of the smoothed current face image and the smoothed registered face image by using a Sobel operator.
S203: reserving a maximum value of the gradient intensity on each pixel point of the current face image and the registered face image, and deleting other values;
s204: setting an upper threshold limit and a lower threshold limit of the maximum value of the gradient strength on each pixel point of the current facial image and the registered facial image, confirming the pixel points of which the maximum value of the gradient strength is greater than the upper threshold limit as boundaries, confirming the pixel points of which the maximum value of the gradient strength is greater than the lower threshold limit and less than the upper threshold limit as weak boundaries, and confirming the pixel points of which the maximum value of the gradient strength is less than the lower threshold limit as non-boundaries;
s205: and confirming the weak boundary connected with the boundary, and confirming other weak boundaries as non-boundaries.
After the current face image and the registered face image are divided into a plurality of regions in the above manner, as in the first embodiment, texture recognition is performed on each of the regions, face organ feature values are extracted, and a texture feature vector of the current face image and a texture feature vector of the registered face image are generated.
Further, the texture feature vector of the registered face image is corrected in the identity checking process to generate a corrected feature vector; and then matching and verifying with the current face image. And when the value of the difference degree is smaller than a preset threshold value, updating the correction model, which is not described herein again.
In this embodiment, after identity verification has been performed many times with the double comparison authentication method, the texture feature vector of the registered face image of the target human body will have been corrected during each verification, and the accumulated corrections could make it deviate greatly from the texture feature vector of the original registered face image. The value range of the correction parameters in the correction model must therefore be limited. In this embodiment, each correction parameter must satisfy a preset threshold range of (0.5, 1.5). If, after some number of updates, a correction parameter leaves this range, its value is set to the lower limit 0.5 when it falls below the lower limit and to the upper limit 1.5 when it exceeds the upper limit.
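The clamping behaviour described above is straightforward to sketch:

```python
def clamp_parameters(alpha, lo=0.5, hi=1.5):
    """Force each correction parameter back into the preset threshold range
    (0.5, 1.5): values below the lower limit become 0.5, values above the
    upper limit become 1.5."""
    return [min(hi, max(lo, a)) for a in alpha]

print(clamp_parameters([0.2, 0.9, 2.4]))  # [0.5, 0.9, 1.5]
```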
Similarly, in this embodiment, the parameter set in the correction model has a count-weight parameter set (δ1, δ2, δ3, …, δn). When the texture feature vector (D1, D2, D3, …, Dn) of the registered face image is corrected, the generated corrected feature vector is (D1(1+δ1·α1), D2(1+δ2·α2), D3(1+δ3·α3), …, Dn(1+δn·αn)), where each value of the count-weight parameter set ranges from 0.5 to 1.5 and gradually increases with the number of corrections. That is, the magnitude of the correction applied to the texture feature vector of the registered face image grows with the number of corrections. This is because the deviation of the registered face image from the real face image increases as the days and months accumulate; as the number of authentications increases (meaning more time has passed since registration), the magnitude of the correction must grow accordingly. Conversely, for the first authentications after registration the correction magnitude should be kept small, so that after only a few corrections the corrected texture feature vector does not deviate too much from that of the original registered face image.
Fig. 3 is a structural diagram of a real face image and identification card registration double-comparison authentication system according to a third embodiment of the present application. As can be seen from the figure, the system for the double comparison and authentication of the real face image and the identity card registration of the present embodiment includes:
an image acquisition module 301 for acquiring a current face image and a registered face image of a target human body. Specifically, the image acquiring module 301 may acquire a current facial image of the target human body through an image acquiring device, such as a camera, and may also extract a registered facial image from an identification card or remotely retrieve the registered facial image of the target human body from a public security database by using an identification card number.
A texture feature vector generating module 302, configured to perform feature extraction on the current face image and the registered face image, respectively, and generate corresponding texture feature vectors.
A texture feature vector correction module 303, configured to correct the texture feature vector of the registered face image by using a preset correction model, and generate a corrected feature vector.
A difference determining module 304, configured to calculate a difference between the texture feature vector of the current face image and the modified feature vector, and determine whether a value of the difference is smaller than a preset threshold.
A result output module 305 for determining that the current face image and the registered face image match when the value of the degree of dissimilarity is less than a preset threshold.
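The four modules can be wired together in a short sketch. The class and parameter names here are hypothetical, the feature extractor is stubbed out, and a Euclidean distance stands in for the patent's (image-only) difference formula:

```python
class DualComparisonAuthenticator:
    """Minimal sketch of the system: acquisition is handled by the caller,
    extraction and correction are injected as callables, and the result
    output is the boolean returned by verify()."""

    def __init__(self, extract, correct, threshold):
        self.extract = extract      # image -> texture feature vector
        self.correct = correct      # registered vector -> corrected vector
        self.threshold = threshold  # preset threshold for the difference

    def verify(self, current_image, registered_image):
        r = self.extract(current_image)
        c = self.correct(self.extract(registered_image))
        diff = sum((ri - ci) ** 2 for ri, ci in zip(r, c)) ** 0.5
        return diff < self.threshold

# Toy wiring: the "images" are already feature lists, and the correction
# model is a fixed 2% scaling of every registered feature.
auth = DualComparisonAuthenticator(
    extract=lambda img: img,
    correct=lambda d: [x * 1.02 for x in d],
    threshold=2.0,
)
print(auth.verify([50.0, 20.0], [49.5, 19.8]))  # close vectors, so a match
```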
This embodiment can achieve technical effects similar to those of the above method embodiments, which are not repeated here.
Fig. 4 is a schematic diagram illustrating identity verification performed with the system for double comparison and authentication between a real face image and an identity card registration according to the fourth embodiment of the present application. As shown in the figure, when identity verification is performed with this system, the image acquisition module first acquires a registered face image and a current face image of the target human body. The registered face image corresponds to the face image on the target human body's identity card; the image acquisition module may extract it directly from the identity card or retrieve it remotely from the public security database according to the identity card number. The image acquisition module may acquire the current face image of the target human body with an image acquisition device (e.g., a camera). After acquiring the registered face image and the current face image, it sends both to the texture feature vector generation module, which generates the texture feature vector of the registered face image and the texture feature vector of the current face image. The texture feature vector of the current face image is sent directly to the difference degree determining module, while the texture feature vector of the registered face image is sent to the texture feature vector correction module, which corrects it to generate a corrected feature vector and sends that vector to the difference degree determining module. Specifically, the generated texture feature vector of the current face image may be (R1, R2, R3, …, Rn), the texture feature vector of the registered face image (D1, D2, D3, …, Dn), and the corrected feature vector (D1(1+α1), D2(1+α2), D3(1+α3), …, Dn(1+αn)). The difference degree determining module then determines the degree of difference between the texture feature vector of the current face image and the corrected feature vector according to the following formula:
Figure BDA0001704583720000101
where I is the disparity between the texture feature vector of the current face image and the corrected feature vector. After the disparity determination module calculates the disparity I, it sends I to the result output module, which stores a threshold range for I. The result output module matches I against the threshold range and judges whether the value of the disparity is smaller than a preset threshold; when it is, the current face image matches the registered face image, that is, the identity check of the target human body passes.
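The matching step above can be sketched in Python. Since the patent's disparity formula is published only as an image, a root-mean-square difference is assumed here as a plausible stand-in, not the claimed formula:

```python
import math

def corrected_vector(d, alphas):
    # Apply the correction model (alpha1..alphan) to the registered
    # texture feature vector (D1..Dn): each component becomes Di*(1+alphai).
    return [di * (1 + ai) for di, ai in zip(d, alphas)]

def disparity(r, d_corr):
    # Assumed disparity I: root-mean-square difference between the current
    # vector (R1..Rn) and the corrected registered vector.  The patent's
    # exact formula is rendered only as an image, so this form is an
    # assumption for illustration.
    n = len(r)
    return math.sqrt(sum((ri - ci) ** 2 for ri, ci in zip(r, d_corr)) / n)

def verify(r, d, alphas, threshold):
    # The identity check passes when the disparity I falls below the
    # preset threshold.
    return disparity(r, corrected_vector(d, alphas)) < threshold
```

With identical vectors and a zero correction model the disparity is 0 and verification passes for any positive threshold.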
In summary, in the method and system for double comparison and authentication of a real face image and an identity card registration provided by the embodiments of the present application, the registered face image is corrected by the correction model at each verification and then matched against the current face image, and the correction model is updated with the currently acquired real face image whenever the identity check determines that the two match. This reduces the difference between the registered face image and the current face image and increases the speed of automatic face recognition.
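The correction-model update described above can be sketched as follows. The patent's exact update formula (claim 5) is published only as an image, so an exponential-moving-average step toward Ri/Di − 1 is assumed here, combined with the (0.5, 1.5) parameter range recited in claim 6; both the form of the update and the rejection rule are assumptions for illustration:

```python
def update_correction_model(alphas_prev, r, d, beta=0.9, lo=0.5, hi=1.5):
    # Assumed per-round update of alpha_i^t from alpha_i^(t-1).
    # `target` is the alpha value that would make Di*(1+alpha) equal Ri
    # exactly; updates that leave the preset range (lo, hi) of claim 6
    # are discarded and the previous parameter is kept.
    updated = []
    for a_prev, ri, di in zip(alphas_prev, r, d):
        target = ri / di - 1
        a_new = beta * a_prev + (1 - beta) * target
        updated.append(a_new if lo < a_new < hi else a_prev)
    return updated
```

Keeping beta close to 1 makes each successfully verified face image nudge the model only slightly, so the corrected registered vector drifts gradually toward the person's current appearance.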
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (7)

1. A double comparison and authentication method for a real face image and an identity card registration, characterized by comprising the following steps:
acquiring a current face image and a registered face image of a target human body;
respectively extracting features of the current face image and the registered face image to generate corresponding texture feature vectors;
modifying the texture feature vector (D1, D2, D3, …, Dn) of the registered face image by using a preset correction model, the correction model including a parameter group (α1, α2, α3, …, αn) for modifying the texture feature vector of the registered face image, to generate a corrected feature vector (D1·(1+α1), D2·(1+α2), D3·(1+α3), …, Dn·(1+αn));
wherein the parameter group in the correction model has a number-of-times weight parameter group (δ1, δ2, δ3, …, δn); when the texture feature vector (D1, D2, D3, …, Dn) of the registered face image is corrected, the generated corrected feature vector is (D1·(1+δ1·α1), D2·(1+δ2·α2), D3·(1+δ3·α3), …, Dn·(1+δn·αn)), wherein each δn in the weight parameter group ranges from 0.5 to 1.5 and its value gradually increases with the number of corrections;
calculating the disparity between the texture feature vector of the current face image and the corrected feature vector, and judging whether the value of the disparity is smaller than a preset threshold;
determining that the current face image matches the registered face image when the value of the disparity is smaller than the preset threshold.
2. The method of claim 1, wherein said performing feature extraction on said current face image and said registered face image, respectively, to generate corresponding texture feature vectors, comprises:
performing edge detection on the current face image and the registered face image respectively, dividing each image into a plurality of regions according to the number of closed edges, performing texture recognition on each region, extracting facial organ feature values, and generating the texture feature vector of the current face image and the texture feature vector of the registered face image.
3. The method according to claim 2, wherein performing feature extraction on the current face image and the registered face image comprises: performing edge detection on the current face image and the registered face image respectively using a Canny edge detection operator and extracting the image regions surrounded by closed edges, specifically comprising the following steps:
respectively convolving the current face image and the registered face image with a Gaussian mask, and smoothing the current face image and the registered face image;
calculating the gradient of each pixel point of the smoothed current face image and the smoothed registered face image by using a Sobel operator;
retaining only the maximum value of the gradient intensity at each pixel point of the current face image and the registered face image, and suppressing the other values (non-maximum suppression);
setting a threshold upper bound and a threshold lower bound of the maximum value of the gradient intensity on each pixel point of the current facial image and the registered facial image, confirming the pixel points of which the maximum values of the gradient intensity are larger than the threshold upper bound as boundaries, confirming the pixel points of which the maximum values of the gradient intensity are larger than the threshold lower bound and smaller than the threshold upper bound as weak boundaries, and confirming the pixel points of which the maximum values of the gradient intensity are smaller than the threshold lower bound as non-boundaries;
and confirming the weak boundary connected with the boundary, and confirming other weak boundaries as non-boundaries.
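For illustration, the double-thresholding and weak-boundary confirmation steps recited above can be sketched in pure Python, operating on a precomputed gradient-magnitude grid. This is a simplified sketch of the technique, not the patent's implementation:

```python
def classify_edges(grad, low, high):
    # Double thresholding per the claim: gradient maxima above the upper
    # bound are boundaries ("strong"), values between the bounds are weak
    # boundaries ("weak"), everything else is a non-boundary ("non").
    rows, cols = len(grad), len(grad[0])
    label = [["non"] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            g = grad[r][c]
            if g > high:
                label[r][c] = "strong"
            elif g > low:
                label[r][c] = "weak"
    return label

def hysteresis(label):
    # Confirm only the weak boundaries connected (8-neighbourhood) to a
    # strong boundary; all other weak boundaries become non-boundaries.
    rows, cols = len(label), len(label[0])
    stack = [(r, c) for r in range(rows) for c in range(cols)
             if label[r][c] == "strong"]
    keep = set(stack)
    while stack:
        r, c = stack.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and (nr, nc) not in keep
                        and label[nr][nc] == "weak"):
                    keep.add((nr, nc))
                    stack.append((nr, nc))
    return keep
```

A weak boundary adjacent to a strong boundary survives hysteresis, while an isolated weak boundary is discarded, which is what links the last two steps of the claim.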
4. The method according to claim 3, wherein the disparity between the texture feature vector of the current face image and the corrected feature vector is calculated using the following formula, and whether the value of the disparity is smaller than the preset threshold is judged:
Figure FDA0003530172290000021
wherein the texture feature vector of the current face image is (R1, R2, R3, …, Rn), and I is the disparity.
5. The method of claim 4, further comprising updating the correction model using the following formula when the value of the disparity is smaller than the preset threshold:
Figure FDA0003530172290000022
wherein α_i^t represents the correction parameter after the current round of updating, α_i^{t-1} represents the correction parameter before the current round of updating, and i ranges from 1 to n.
6. The method of claim 5, wherein the updated correction parameter satisfies a preset threshold range, and the preset threshold range is (0.5, 1.5).
7. A double comparison and authentication system for a real face image and an identity card registration, characterized by comprising:
an image acquisition module for acquiring a current face image and a registered face image of a target human body;
the texture feature vector generation module is used for respectively extracting features of the current face image and the registered face image and generating corresponding texture feature vectors;
a texture feature vector correction module for correcting the texture feature vector (D1, D2, D3, …, Dn) of the registered face image by using a preset correction model (α1, α2, α3, …, αn) to generate a corrected feature vector (D1·(1+α1), D2·(1+α2), D3·(1+α3), …, Dn·(1+αn));
wherein the parameter group in the correction model has a number-of-times weight parameter group (δ1, δ2, δ3, …, δn); when the texture feature vector (D1, D2, D3, …, Dn) of the registered face image is corrected, the generated corrected feature vector is (D1·(1+δ1·α1), D2·(1+δ2·α2), D3·(1+δ3·α3), …, Dn·(1+δn·αn)), wherein each δn in the weight parameter group ranges from 0.5 to 1.5 and its value gradually increases with the number of corrections;
a disparity determination module for calculating the disparity between the texture feature vector of the current face image and the corrected feature vector and judging whether the value of the disparity is smaller than a preset threshold;
a result output module for determining that the current face image matches the registered face image when the value of the disparity is smaller than the preset threshold.
CN201810649673.8A 2018-06-22 2018-06-22 Method and system for double comparison and authentication of real face image and identity card registration Active CN108875646B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810649673.8A CN108875646B (en) 2018-06-22 2018-06-22 Method and system for double comparison and authentication of real face image and identity card registration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810649673.8A CN108875646B (en) 2018-06-22 2018-06-22 Method and system for double comparison and authentication of real face image and identity card registration

Publications (2)

Publication Number Publication Date
CN108875646A CN108875646A (en) 2018-11-23
CN108875646B true CN108875646B (en) 2022-09-27

Family

ID=64340367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810649673.8A Active CN108875646B (en) 2018-06-22 2018-06-22 Method and system for double comparison and authentication of real face image and identity card registration

Country Status (1)

Country Link
CN (1) CN108875646B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084098A (en) * 2019-03-14 2019-08-02 杭州笔声智能科技有限公司 A kind of paper corrects method and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104021397A (en) * 2014-06-13 2014-09-03 中国民航信息网络股份有限公司 Face identifying and comparing method and device
CN105139007A (en) * 2015-09-30 2015-12-09 小米科技有限责任公司 Positioning method and apparatus of face feature point
CN105956578A (en) * 2016-05-23 2016-09-21 深圳华中科技大学研究院 Face verification method based on identity document information
CN106203294A (en) * 2016-06-30 2016-12-07 广东微模式软件股份有限公司 The testimony of a witness unification auth method analyzed based on face character
CN106973192A (en) * 2015-10-13 2017-07-21 柯尼卡美能达株式会社 Image processing apparatus and image processing method
CN107358174A (en) * 2017-06-23 2017-11-17 浙江大学 A kind of hand-held authentication idses system based on image procossing
CN107944395A (en) * 2017-11-27 2018-04-20 浙江大学 A kind of method and system based on neutral net verification testimony of a witness unification

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004118627A (en) * 2002-09-27 2004-04-15 Toshiba Corp Figure identification device and method


Also Published As

Publication number Publication date
CN108875646A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
KR102455633B1 (en) Liveness test method and apparatus
CN105956578B (en) A kind of face verification method of identity-based certificate information
US11023757B2 (en) Method and apparatus with liveness verification
US10262190B2 (en) Method, system, and computer program product for recognizing face
US11682232B2 (en) Device and method with image matching
US20170154202A1 (en) Feature extraction and matching for biometric authentication
CN108629262B (en) Iris identification method and corresponding device
CN112487921B (en) Face image preprocessing method and system for living body detection
CN110503760B (en) Access control method and access control system
EP2858007A1 (en) Sift feature bag based bovine iris image recognition method
US11651624B2 (en) Iris authentication device, iris authentication method, and recording medium
KR101700818B1 (en) Method and apparatus for estimating age or gender using face image
KR20180109664A (en) Liveness test method and apparatus
WO2017000493A1 (en) Live iris detection method and terminal
CN110532746B (en) Face checking method, device, server and readable storage medium
JP5698418B2 (en) Identification by iris recognition
CN114270417A (en) Face recognition system and method capable of updating registered face template
CN104318216B (en) Across the identification matching process of blind area pedestrian target in video monitoring
CN108875646B (en) Method and system for double comparison and authentication of real face image and identity card registration
US20200057910A1 (en) Information processing apparatus, verification method, and computer-readable recording medium recording verification program
CN105335720A (en) Iris information acquisition method and acquisition system
JP5285401B2 (en) Face recognition system
Ong et al. Retina verification using a combined points and edges approach
US20230103555A1 (en) Information processing apparatus, information processing method, and program
KR101441106B1 (en) Method for extracting and verifying face and apparatus thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220907

Address after: 266000 room 230, science and Technology Museum, EMU Town, west end of Chunyang Road, Jihongtan street, Chengyang District, Qingdao, Shandong Province

Applicant after: QINGDAO CIVIL AVIATION CARES Co.,Ltd.

Address before: Room b8406-03, 4th floor, 818 Huayuan Road, Xiangcheng District, Suzhou City, Jiangsu Province

Applicant before: SUZHOU QIXIAN INTELLIGENT TECHNOLOGY Co.,Ltd.

GR01 Patent grant